AI As a Double-Edged Sword for OT/ICS Cybersecurity

Vicky Bruce, Global Capability Manager of Cybersecurity Services at Rockwell Automation, explains why AI can be a double-edged sword for OT/ICS cybersecurity. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

Artificial intelligence (AI) is quickly transforming how industrial organizations think about cybersecurity. On one hand, it helps security teams spot threats earlier, automate responses, and reduce downtime. On the other hand, it gives cyber attackers tools to launch more targeted, convincing, and damaging attacks—often in seconds.

Cybersecurity threats are evolving as fast as the technologies meant to stop them. For cybersecurity teams tasked with protecting operational technology (OT) and industrial control systems (ICS), this is both a leap forward and a growing risk. In the field, the same AI model that helps prevent downtime one day can trigger a false positive, or worse, be manipulated, on another. Security teams face the challenge of tapping into AI’s potential without introducing new vulnerabilities.

The Expanding Cyber Risk Landscape 

Industrial networks today look nothing like they did a decade ago. Once isolated and largely air-gapped, they are now interconnected ecosystems where OT and information technology (IT) converge. Meanwhile, cyber threats are growing in scale and complexity, and the convergence of IT and OT is expanding the attack surface. According to the SANS 2024 ICS/OT Cybersecurity Report, cybersecurity risks in OT are growing, with 19 percent of organizations reporting one or more security incidents in a single year.

AI is accelerating progress on both sides of the cybersecurity equation. A recent survey of manufacturing leaders found that 49 percent plan to use AI and machine learning (ML) for cybersecurity in the next 12 months. But the same tools are also being used by threat actors to automate intrusions and evade detection. The challenge is to harness AI’s potential while keeping it from being weaponized against the systems it was designed to protect.

New Frontiers in Protecting Operational Technology

AI’s strength lies in its ability to process and act on vast amounts of data. When applied to industrial environments, it can recognize subtle changes before they become major disruptions or threats. Here’s where it’s making a difference:

Smarter Anomaly Detection  

Traditional threat detection tools look for known signatures, but many of today’s most damaging threats don’t come with a fingerprint. AI-driven threat detection systems can flag subtle behavioral anomalies, such as a robotic arm cycling 0.4 seconds too fast or a PLC issuing a command slightly out of sequence. Even an unusual pattern in equipment startup time can signal misconfiguration caused by a compromised vendor laptop.
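As a minimal sketch of the idea, behavioral anomaly detection can be as simple as scoring a new observation against a baseline of normal cycle times. The data, threshold, and function names below are illustrative, not from any specific product:

```python
# Hypothetical sketch: flag a cycle time that deviates sharply from the
# historical baseline (e.g., a robotic arm cycling 0.4 seconds too fast).
from statistics import mean, stdev

def zscore_flag(history, observed, z_threshold=3.0):
    """Return True if the observed value is a statistical outlier
    relative to the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(observed - mu) / sigma > z_threshold

# Baseline: a robotic arm normally cycles in about 5.0 seconds.
history = [5.01, 4.99, 5.02, 5.00, 4.98, 5.01, 5.00, 4.99]

print(zscore_flag(history, 4.60))  # 0.4 s fast: flagged as anomalous
print(zscore_flag(history, 5.02))  # within normal variation: not flagged
```

Real AI-driven systems model many correlated signals at once, but the core principle, comparing live behavior against a learned baseline, is the same.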

Predictive Maintenance as a Security Layer  

AI-powered predictive maintenance can serve as an added layer of a strong cybersecurity strategy. A piece of equipment acting “off schedule” could be more than just wear and tear. It might be a symptom of malware or unauthorized configuration changes. Continuously monitoring maintenance data to flag irregularities can help teams identify potential failures before they happen.
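One way to make maintenance data double as a security signal is to correlate "off schedule" behavior with configuration changes before deciding how to triage it. The sketch below is a simplified, hypothetical illustration; the configuration format and triage labels are invented for this example:

```python
# Hypothetical triage sketch: an anomaly plus an unexpected configuration
# change points toward security, not just wear and tear.
import hashlib

def config_fingerprint(config_text: str) -> str:
    """Hash a device configuration so changes are easy to detect."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def triage(anomaly_detected: bool, baseline_fp: str, current_config: str) -> str:
    if not anomaly_detected:
        return "normal"
    if config_fingerprint(current_config) != baseline_fp:
        return "possible unauthorized change: escalate to security"
    return "likely wear and tear: schedule maintenance"

baseline = config_fingerprint("pump_speed=1200\nvalve_mode=auto")

# Equipment misbehaving AND its configuration no longer matches baseline:
print(triage(True, baseline, "pump_speed=1500\nvalve_mode=auto"))
```

The point is the routing decision: the same anomaly goes to the maintenance queue or the security team depending on whether the device's configuration still matches its known-good fingerprint.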

AI-Assisted Network Segmentation  

When a breach happens, the difference between a minor incident and a catastrophic shutdown comes down to speed. Seconds can determine whether a threat jumps to another cell or stays isolated. AI can monitor traffic across network segments and automatically isolate a compromised zone the moment a threat is confirmed. In a food and beverage plant, this could mean stopping a ransomware attack before it locks down a batching system. Instead of waiting for IT teams to intervene manually, AI-assisted segmentation contains threats in real time.
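The containment step can be pictured as automatically cutting a compromised zone's links to its neighbors. This is a toy model; the zone names and data structures are hypothetical, and real segmentation acts on firewall and conduit policies rather than in-memory objects:

```python
# Illustrative containment sketch: quarantine an alerted zone and sever
# its links to neighboring cells without manual intervention.
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    links: set = field(default_factory=set)
    quarantined: bool = False

def contain(zones: dict, alerted: str) -> list:
    """Quarantine the alerted zone; return the links that were cut."""
    zone = zones[alerted]
    zone.quarantined = True
    cut = [(alerted, peer) for peer in sorted(zone.links)]
    for peer in zone.links:
        zones[peer].links.discard(alerted)
    zone.links.clear()
    return cut

# Hypothetical plant topology: batching talks to packaging and utilities.
zones = {
    "batching": Zone("batching", {"packaging", "utilities"}),
    "packaging": Zone("packaging", {"batching"}),
    "utilities": Zone("utilities", {"batching"}),
}

# Ransomware detected in the batching cell: isolate it immediately.
print(contain(zones, "batching"))
```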

When Cyber Defenses Become a Target

Of course, the same technology deployed to defend operations can be weaponized. Attackers are using AI to design malware that adapts, evades, and even rewrites itself, rendering traditional security tools that rely on fixed threat databases increasingly ineffective. At the same time, AI-generated deepfakes make phishing attempts more realistic than ever. Consider a manager on the plant floor who receives a voicemail from their “CEO” authorizing a key system modification, only to learn later that it was entirely AI-generated.

Attackers are also testing how far they can manipulate AI systems directly. By feeding adversarial data into detection models, they can suppress alerts or train systems to ignore certain behaviors. Without proper validation, a security model might learn the wrong lessons from the wrong data.
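One common safeguard against this kind of poisoning is to validate any retrained detector on a trusted, curated holdout set before promoting it. The sketch below uses toy threshold "models" to show the gate; the function names, threshold values, and acceptance margin are all assumptions for illustration:

```python
# Hedged sketch: reject a retrained detection model whose accuracy on a
# trusted holdout set drops suspiciously, a symptom of poisoned training data.
def holdout_accuracy(model, holdout):
    correct = sum(1 for reading, label in holdout if model(reading) == label)
    return correct / len(holdout)

def safe_to_promote(new_model, old_model, holdout, max_drop=0.02):
    """Allow promotion only if holdout accuracy has not dropped
    by more than max_drop versus the current model."""
    new_acc = holdout_accuracy(new_model, holdout)
    old_acc = holdout_accuracy(old_model, holdout)
    return new_acc >= old_acc - max_drop

# Toy detectors: flag any sensor reading above a threshold.
old = lambda x: x > 10
poisoned = lambda x: x > 50  # an attacker nudged the threshold upward

# Trusted, human-verified holdout: (reading, should_be_flagged)
holdout = [(5, False), (12, True), (20, True), (60, True), (3, False)]

print(safe_to_promote(poisoned, old, holdout))  # the poisoned model is rejected
```

The holdout set must be collected and labeled outside the normal training pipeline; if attackers can influence the validation data too, the gate offers no protection.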

Recent high-profile ransomware incidents reinforce how quickly tactics are evolving. For example, a ransomware attack disrupted operations for thousands of U.S. car dealerships and led to a reported $25 million ransom payment. This example demonstrates how threat actors are employing more advanced tactics to cripple businesses and shut down entire industries. These are no longer isolated events, but industry-shaping moments.

Best Practices for AI in OT Cybersecurity 

To safely deploy AI in critical infrastructure, organizations need more than just good intentions. They need good governance. This includes:

  • Implementing security frameworks. AI-driven security measures should follow industry best practices. By aligning with established frameworks like NIST 800-82 and IEC 62443, organizations can take a structured approach to safeguarding operational technology environments in the face of growing OT/IT convergence challenges.
  • Testing early and often. Without rigorous testing and validation, AI models can be tricked into ignoring or misclassifying real threats. Regular testing helps detect vulnerabilities and prevent adversarial manipulation. Organizations can also use AI to simulate intrusions, running AI-driven penetration tests to identify weaknesses before malicious actors can exploit them.
  • Embedding security from the start. AI should be deployed using a “secure-by-design” approach, where security is embedded into AI systems from the outset rather than treated as an afterthought. The future of AI in cybersecurity isn’t just about a stronger posture—it’s about staying ahead of threat actors who are using the same methods.

Balancing AI Innovation and Risk

As OT/IT convergence continues to blur the lines between traditional IT networks and industrial systems, AI is reshaping industrial cybersecurity. However, it’s a double-edged sword. Used correctly, it can enhance threat detection, automate risk management, and keep OT environments safer than ever. If left unchecked, though, it can introduce new vulnerabilities and give threat actors even more powerful tools. Security leaders must stay alert to this tension to ensure their organizations benefit from AI’s capabilities without becoming over-reliant or exposed to new forms of risk.

The secret is balance. Used wisely, AI is a strategic advantage. Industrial organizations can strengthen security by implementing AI responsibly, validating models, and staying ahead of emerging threats without sacrificing resilience. In today’s high-stakes cybersecurity landscape, that’s the kind of AI strategy that wins.

