150+ Cybersecurity Predictions from Industry Experts for 2026
As part of this year’s Insight Jam LIVE event, the Solutions Review editors have compiled a list of predictions for 2026 from some of the most experienced professionals across the SIEM, Endpoint Security, Network Monitoring, and broader cybersecurity marketplaces.
For Solutions Review’s annual Insight Jam LIVE event, we called for the industry’s best and brightest to share their SIEM, endpoint security, and cybersecurity predictions for 2026 and beyond. The experts featured represent some of the top solution providers, consultants, and thought leaders with experience in these marketplaces. Each projection has been vetted for relevance and its ability to add business value.
Cybersecurity Predictions for 2026 and Beyond
Michael Adjei, Director, Systems Engineering at Illumio
AI APIs will become the next big attack surface.
“The rapid adoption of agentic AI will drive a surge in autonomous connections between agents, systems, and applications. This hyperconnectivity will amplify existing API sprawl, overwhelming security teams and creating blind spots across digital infrastructure. Without robust oversight, these unsupervised pathways will become prime targets for exploitation, turning what was barely a manageable API problem into a systemic vulnerability. The rush to implement agentic AI will also result in insufficient supervision of how agents interact with other systems. Organisations will struggle to understand what access agents have to their systems and whether they are interacting with customers and sensitive data in the right way.”
Cyber-criminals will target the service supply chain rather than software and hardware.
“In 2026, attackers will target service supply chains more aggressively than software or hardware supply chains. Past attacks on organisations with shared outsourced service providers have shown that third-party compromises are easier and more rewarding than attacking hardware or software vendors. These providers often have legitimate internal access to sensitive systems and data, yet operate under inconsistent security standards compared to the larger organisations they service. When companies outsource core services, they create single points of failure that attackers can exploit. Attackers recognise this and will adjust their tactics accordingly for maximum gain.”
Brittany Allen, Senior Trust and Safety Architect at Sift
Fraud networks will scale faster than defenses can keep up.
“Fraud-as-a-service marketplaces operate like tech startups, with support channels, subscription models, and service guarantees. In 2026, these networks will share intelligence and tactics faster than legitimate companies can coordinate defenses. A new exploit discovered on Monday gets packaged into a Telegram tutorial by Wednesday and deployed across thousands of accounts by Friday.
“The advantage isn’t being the most sophisticated, it’s being the fastest. While fraud teams schedule meetings to discuss emerging threats, fraudsters are already three attacks ahead. Defense strategies built on quarterly reviews and annual budgets can’t compete with adversaries who work in real-time.”
Sundhar Annamalai, Chief Strategy Officer at LevelBlue
Geopolitics, Macroeconomics, and the Future of Cybersecurity.
“Without speculating on specific geopolitical events, we fully expect to see continued growth in state-sponsored cyber threats across all regions. Every organization is a potential target, but government entities and critical infrastructure remain the most significant. These sectors have an imperative to harden their defenses and build resiliency, and the ability to meet their operational and economic requirements—such as data sovereignty—presents a real opportunity for cybersecurity providers.
“From a macroeconomic perspective, we are operating in an environment of persistent uncertainty. Concerns around equity valuation pullbacks and stubborn inflation are driving caution across decision-making. Organizations are being forced to operate on tight budgets at the same time their attack surfaces are expanding, and threats are becoming more sophisticated through AI. This creates a serious challenge—and significant risk exposure. It also creates opportunity for cybersecurity partners who can operate more efficiently at scale, bring innovation to everyday service delivery, and demonstrate clear value that advances their clients’ business objectives.”
AI as the Next Evolution – and Challenge – in Cyber Defense
“The cyber threat landscape is constantly evolving, and AI represents the next, and most challenging, phase of that evolution. We expect threat volume to accelerate further, paired with increased sophistication through more believable social engineering, deepfakes, and other AI-enabled techniques. With that challenge comes the promise of AI-driven security measures. The scale of future threats will make a purely human-led defense model increasingly untenable, driving the imperative for AI to integrate into—and ultimately lead—the detection and response approach.
“In the near term, organizations will face trade-offs about how much remediation authority they grant to AI agents versus maintaining human review of AI-generated recommendations. Over time, as perceptions of risk mature, AI-powered cyber platforms will increasingly be able to assess risk and negotiate enforcement through secure interfaces, reducing alert volume and allowing security teams to focus on the most critical challenges. A central question for the industry is how quickly organizations transition from AI-enabled to AI-led cyber defense. Given that AI talent is at a premium across industries, we expect most organizations to lean toward buying rather than building, driving an acceleration in acquisitions of AI technologies that strengthen broader cyber defense programs.”
Where Cybersecurity Investments Will Concentrate in 2026.
“The role of the CISO has never been more challenging. Attack surfaces continue to expand, threat volume and sophistication are increasing, and tool sprawl has become overwhelming. As a result, we expect organizations to prioritize investments that help them manage their technology stack more efficiently, protect their environments more effectively, and proactively build resilience. Key areas of spend will include consolidation of network security architectures into SASE and zero-trust frameworks, proactive capabilities such as threat hunting and penetration testing, and MXDR services. Underpinning all of this will be accelerated investment in AI-enablement technologies, which we expect to grow rapidly in 2026 and beyond.”
Darren Anstee, Chief Technology Officer for Security at NETSCOUT
DDoS Suppression is Needed in the Fight Against Rising DDoS Attack Volumes.
“The rise in large-scale DDoS attacks—driven by higher edge connectivity speeds and large networks of compromised IoT devices—will push ISPs to become more focused on identifying and blocking the ‘top offenders’ within their own managed networks and subscriber bases. Outbound attack activity is placing a strain on ISP infrastructure, as attack traffic looks to exit the subscriber edge. This traffic may ultimately become part of a multi-terabit or gigabit-per-second attack as it reaches its target, but even at its source, it can be disruptive. For ISPs, being able to identify and block outbound attack traffic before it impacts local services will soon be a baseline requirement for service resilience.”
Michael Bachman, Head of Research and Emerging Technology at Boomi
Agentic Protocols: A 2026 Security and Bloat Ticking Time Bomb
“By 2026, the rapid rise of autonomous agentic protocols is set to move from the fringes to the mainstream, creating a dramatically fast-growing and complex attack surface across enterprise operations. This rush to adopt autonomous technologies without robust oversight is leading to ‘security blindspots,’ multiplying potential entry points for cyber-attacks as agents dynamically connect and delegate without constant human supervision. Additionally, tool bloat will cause agents to be overwhelmed with choice when selecting the right tools to respond to a prompt. Current governance frameworks are lagging, which means accountability suffers, as tracing the origin of incidents in the complex web of machine decisions is almost impossible. IT leaders who ignore these looming architectural challenges risk opening their organizations to serious harm and being blindsided by costly breaches and compliance failures.”
Dave Baggett, SVP of Security at Kaseya
“In 2026, we’ll continue to see attackers benefit from AI. If you ask ChatGPT to give you a targeted phishing email template, it will happily oblige, provided you couch your query in language like ‘I am a security researcher making a presentation about the dangers of phishing’ to get around the LLM’s guardrails. This means it’s now easy (and essentially free) for attackers to create grammatically perfect, highly targeted phishing emails. And that in turn renders some old-school email protection techniques like looking for bad grammar less relevant.”
“However, we’ll also continue to see major developments among organizations leveraging AI for cybersecurity in the year ahead – the bad actors certainly won’t be winning the battle of using AI. At INKY/Kaseya, we expect to make a human-level AI email analyst capability available sometime in 2026.”
Husnain Bajwa, SVP of Product – Risk Solutions at SEON
“The coming year will test how organizations balance automation, explainability, and regulation. Financial intelligence units are struggling to keep pace with AI-driven systems, and regulatory bodies are under mounting pressure to modernize. While today’s frameworks remain fragmented and siloed, 2026 will bring a push toward more unified, adaptive standards built on shared principles of transparency and configurability. The lesson from 2025 is clear: static, volume-based fraud models are obsolete. The next phase of fraud prevention will focus on building intelligent infrastructures of trust with systems that are unified, stable, and capable of evolving as quickly as the threats they’re designed to stop.”
Jon Baker, VP of Threat-Informed Defense at AttackIQ
AI will force defenders and adversaries to abandon their playbooks.
“In 2026, AI won’t just help attackers. It will lead them through the entire kill chain. This will accelerate adversary innovation and scale their impact. The question isn’t whether adversaries are using AI anymore. It’s how deeply it’s embedded in their operations. We’ve already seen threat actors using large language models like Gemini to research attack methodologies and Claude to conduct full operations, and that’s only what’s visible on commercial platforms. The result is chaos with fewer predictable patterns, and a surge of attacks no static playbook can anticipate.
“In 2026, expect threat actors to operate with unprecedented tactical diversity that makes attribution nearly impossible and defensive playbooks obsolete.”
Organizations will finally commit to systematic security maturity over quick fixes.
“In 2026, Continuous Threat Exposure Management (CTEM) matures from a buzzword into a discipline that reshapes how organizations defend themselves.
“Security teams are leaving behind the reactive rhythm of point-in-time assessments and chasing an ever-growing backlog of vulnerabilities to proactively manage validated exposures as a continuous practice. Annual testing can’t keep pace with AI-driven adversaries who change tactics weekly. Vulnerability management has been completely overwhelmed for years, and the pace of new vulnerability discovery and mean time to exploit is only making things worse. Furthermore, defenders are under pressure to demonstrate that cybersecurity investments are actually driving down risk.
“For too long, testing and assessments have been treated as compliance snapshots instead of progress trackers. The teams that thrive in 2026 will take a different path: executing specific adversary TTPs against their own environments, measuring how well their defenses respond, and systematically improving with every iteration. This is what real maturity looks like. It’s not a tool or a framework, but a living, learning, threat-informed system that drives meaningful improvement in cyber defense.”
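The iterative loop Baker describes — execute adversary TTPs, measure how defenses respond, improve with every iteration — can be reduced to a minimal sketch. The function names and MITRE ATT&CK-style TTP IDs below are illustrative, not part of any vendor's API:

```python
def detection_rate(results: dict) -> float:
    """Given {ttp_id: detected} for one simulation run, return the
    fraction of executed adversary TTPs that defenses caught."""
    return sum(results.values()) / len(results)

def regression_check(previous: dict, current: dict) -> list:
    """List TTPs that were detected last iteration but slipped through
    this time: the signal that a control has silently degraded."""
    return [t for t, ok in previous.items() if ok and not current.get(t, False)]
```

Tracking these two numbers run over run is what turns point-in-time assessments into the progress trackers the quote calls for.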
Jan Bee, CISO at TeamViewer
Third-party SaaS supply chains will become the primary target for attacks.
“The interconnected world of SaaS applications will emerge as the most significant vulnerability for enterprises in 2026. As companies continue moving away from on-premise infrastructure to cloud-based solutions, threat actors are shifting their focus from traditional infrastructure to third-party and even fourth-party supplier risks. The days of isolated legacy systems are ending, and with them, the old playbook for enterprise security. What makes this particularly concerning now is that adversaries are leveraging AI to accelerate their ability to identify and exploit vulnerabilities across these complex supplier networks–turning what were once time-consuming reconnaissance efforts into automated processes.
“CISOs must prioritize speed in securing their supplier ecosystem. The challenge isn’t just identifying which applications are in use across departments–it’s understanding them quickly enough to secure them before adversaries exploit the gaps. Start by getting the foundational security posture right for each application, rather than attempting comprehensive security programs that take months or quarters to implement. The key is velocity: secure the primary tools first, then move systematically through the supplier list.”
Leonid Belkind, CTO and Co-Founder of Torq
“The most profound trend shaping 2026 will be the widespread adoption of generative AI and the implications of ‘shadow AI’ within organizations. This trend is already widespread, moving beyond early adoption and creating significant risks for organizations. The core challenge lies not just in the technology itself, but in the pressures it creates for cybersecurity leaders. Generative and agentic AI have been embraced for their potential to revolutionize workflows and conduct autonomous decision-making, yet they also create significant risks regarding data privacy if proper guardrails aren’t in place.
“On the other hand, those that block generative AI adoption face mounting pressure from business units demanding AI integration to remain competitive. These organizations also inadvertently create additional risks as employees will still seek the support of AI tools, whether their use is sanctioned by their organization or not. This duality highlights the need for a balanced approach, one that enables innovation while implementing robust, adaptive security measures to mitigate risks.”
Savinay Berry, Executive Vice President and Chief Product and Technology Officer at OpenText
A major brand fallout will force AI accountability.
“In the next year, we’ll likely see a major brand face real damage from AI misuse. It won’t be a cyberattack in the traditional sense but something more subtle, like a plain-text prompt injection that manipulates a model into acting against intent. These attacks can force hallucinations, expose proprietary or sensitive information, or break customer trust in seconds. Enterprises will need to verify AI behavior the same way they secure their networks, by checking every input and output. The companies that build AI systems with accountability and transparency at the core will be those that keep their reputations intact.”
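Checking every input and output, as Berry suggests, can start as simple pattern screening on both sides of the model. The sketch below is a minimal illustration; the pattern lists are examples only, and a production guardrail would pair them with trained classifiers and a far larger rule set:

```python
import re

# Illustrative plain-text injection markers (not an exhaustive list).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
    r"reveal your (system prompt|instructions)",
]

# Illustrative output checks: material that should never leave the model.
SECRET_PATTERNS = [
    r"AKIA[0-9A-Z]{16}",                        # AWS access key ID format
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",      # PEM private key header
]

def screen_input(prompt: str) -> bool:
    """Return True if the prompt looks like a plain-text injection attempt."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def screen_output(response: str) -> bool:
    """Return True if the response appears to leak sensitive material."""
    return any(re.search(p, response) for p in SECRET_PATTERNS)
```

The point is symmetry: the same rigor applied to what goes into the model is applied to what comes out, before either side can damage customer trust.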
David Bianco, Cybersecurity Researcher at Splunk and Cisco Foundation AI
Rethinking Cybersecurity Workflows for AI Integration
“In the coming year, organizations will shift from simply adapting AI to fit existing, human-driven workflows to fundamentally rethinking those workflows so they aren’t entirely dependent on manual processes. While humans will continue to guide strategy and oversight, reworking workflows with AI in mind will unlock greater efficiency and value. By designing workflows that leverage AI’s strengths, we can move beyond incremental improvements and fully realize AI’s potential in cybersecurity operations.”
Intersection of AI and Security Operations
“In the coming year, the intersection of AI and threat hunting will see a surge in interest around fully autonomous SOCs, but early adopters may quickly realize the limits of fully automated security operations when facing human adversaries skilled in deception. While machine learning excels at predictable, mathematical problems—like predicting hardware failures—cybersecurity remains a domain where human judgment is crucial for detecting and responding to sophisticated, deceptive attacks. For effective defense, vendors and solution providers must prioritize keeping humans in the loop, leveraging AI as a powerful tool for analysts rather than a complete replacement for human oversight.”
Stephen Boyer, Co-Founder and Chief Innovation Officer at Bitsight
The “AI Attack Surface” Will Trigger an Internet-Wide Vulnerability Crisis
“The frenzied rush to adopt AI will inadvertently create a massive, newly exposed attack surface. New, immature protocols designed for easy AI-to-system connection – like Model Context Protocol (MCP) – are being deployed without many of the foundational security controls. This will lead to the rapid discovery of widespread security exposures where connections to enterprise databases and administrative systems are left open to abuse. These vulnerabilities, along with automated attack orchestration, will accelerate exploitation and force many organizations to relearn many of the hard lessons of basic cyber hygiene.”
Regulatory Compliance Will Fully Converge with Business Resilience
“In 2026, the European Union’s Digital Operational Resilience Act (DORA) will hit its stride, exerting regulatory pressure on financial entities and, critically, their third-party suppliers, to move beyond rudimentary compliance exercises. DORA’s influence will ripple globally, establishing a new operational standard where companies must demonstrate measurable, continuous resilience against major outages and cyber shocks. This means that third-party risk management will cease to be a one-time onboarding formality and will become an integral, continuously audited component of business continuity planning.”
CISO Prioritization Hinges on Intelligent Automation as Vulnerability Volume Becomes Untenable
“The volume of newly disclosed vulnerabilities (projected to exceed 50,000 annually) is turning the CISO’s job into a constant fire drill. With flat or shrinking security budgets, security teams can no longer address every exposure impacting them or their extended supply chain. As a result, 2026 will mark the pivotal point at which security operations increasingly adopt intelligent, risk-prioritized automation. This automation, powered by continuous cyber risk intelligence, will be the principal approach for CISOs to manage the overwhelming exposure. Intelligent prioritization and risk-based resource allocation will allow CISOs to identify and act on the vulnerabilities and third parties being actively exploited by threat actors.”
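The risk-prioritized triage Boyer describes can be sketched in a few lines: rank active exploitation (e.g., presence in CISA's Known Exploited Vulnerabilities catalog) and asset criticality ahead of raw severity score. The field names below are illustrative, not a real product schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # base severity score, 0.0-10.0
    known_exploited: bool  # e.g., listed in CISA's KEV catalog
    asset_critical: bool   # sits on a business-critical asset?

def risk_rank(findings):
    """Order findings so actively exploited issues on critical assets
    surface first, instead of triaging purely by CVSS score."""
    return sorted(
        findings,
        key=lambda f: (f.known_exploited, f.asset_critical, f.cvss),
        reverse=True,
    )
```

Under this ordering a 7.5-severity flaw under active exploitation outranks an unexploited 9.8, which is the behavior risk-based prioritization is meant to produce.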
Operational Technology (OT) Exploitation Will Bring Physical Consequences to the Forefront
“Operational Technology (OT) and critical infrastructure will move from an under-the-radar concern to a high-impact threat. Nation-state and criminal groups will aggressively exploit the growing number of significant security maturity gaps that still exist between IT and OT systems. The targeting of systems like building management and industrial controls – evidenced by threat actors like Volt Typhoon – will lead to several minor-to-moderate, localized disruptions of essential services that will elevate public awareness of OT risks.”
Mike Britton, CIO at Abnormal AI
SaaS Supply Chains Become the New Soft Target
“If I’m an attacker, I go for what’s easy and pays off big–in 2026, that will be SaaS. Everyone’s living in the cloud and connected through third-party integrations. An attacker now just needs to hit one small vendor that’s connected to a thousand other environments to create a massive return on investment at a relatively low risk.
“Supply-chain-style attacks like those targeting Salesforce will become more common in 2026, especially because many SaaS providers still treat security as a premium feature. You shouldn’t have to pay extra for MFA or audit logs, but a lot of companies do. That’s creating weak spots everywhere.
“Until vendors start making core security features standard, customers will continue to pay the price when those integrations are breached. The ecosystem’s too big now for security to be an optional add-on.”
Stop Chasing AGI—Start Fixing What’s Broken
“Everyone’s obsessed with the future of AI, whether it’s the idea of AGI or what the next big model will be. But the reality is, we’ve got enough problems right now. Next year, we need to worry less about a theoretical AI apocalypse and remain focused on the basics, such as getting multi-factor authentication adoption right.
“In 2026, I think we’ll see organisations start shifting their focus back to the practical. Things like prompt injection, data exposure through careless LLM use, and shadow AI projects inside businesses. These are the real risks.
“There’s a lot of noise about regulation and long-term ethics, but for most companies, the challenge is much simpler: understand what you’re using, what data you’re feeding into these models, and how to keep it secure. We don’t need to fear what’s coming next; we need to get smarter about what’s already here.”
Brian Carbaugh, CEO and Co-Founder at Andesite
AI’s Effects on the Cybersecurity Workforce
“Integrating AI into security operations often leads people to think that AI is automating workflows and mostly, if not fully, replacing human security professionals. While automation is beneficial for certain aspects of an organization’s security infrastructure, total automation should not be the primary goal for bolstering security in the AI era. Companies should instead prioritize transforming and accelerating the accuracy of human cyber work by leveraging AI technologies. These AI tools enhance the work of cybersecurity professionals by empowering them to make informed and accurate decisions more quickly.”
CISO Burnout
“CISOs are under immense pressure across industries. Despite ever-increasing investment into cybersecurity tools to bolster an organization’s security posture, threats are growing faster in number and sophistication than organizations can keep up with. This dilemma has led to exhausted security teams who are burned out and fed up. CISOs are not only responsible for the security of their organization, but also for the well-being of their team members. Fatigue from a breakneck pace is not just detrimental to individuals but also increases security risks for an organization. When security teams are constantly scrambling in a weary state, it’s easier for a threat to slip through the cracks.
“Solving the burnout problem in cybersecurity is paramount. Far too often, CISOs resort to adding more tools to their already tool-heavy arsenals. While well-intentioned, this often complicates rather than facilitates investigations, slowing down the process instead of accelerating time to detect and respond. CISOs must leverage advanced technologies that deliver better outcomes and make their team’s work – and lives – better.”
Skills Gap in Cybersecurity
“The cybersecurity industry continues to struggle with a talent deficit. There are not enough cybersecurity professionals to fill open positions, and companies often struggle to find candidates with the specialized skills required in the AI era. Fortunately, AI-driven cybersecurity tools are transforming how security operations teams function by enabling Tier 1 and Tier 2 analysts to perform at a Tier 3 level. While Tier 1 and 2 professionals usually focus on monitoring, triage, and incident response under the guidance of senior analysts, AI tools now automate threat detection, correlation, and prioritization, allowing less experienced analysts to quickly identify complex attack patterns that once required advanced expertise.
“These tools handle the grunt work of analyzing thousands of alerts, enabling cybersecurity teams to make informed and precise decisions quickly while operating at levels beyond their official training.”
Changing External Attack Surface Management Response
“The attack surface continues to expand as new apps, cloud services, identities, and data flows emerge, often appearing on the scene quicker than security teams can catalog them. Traditional EASM often produces more lists than clarity. What security teams need is a way to see what matters and act quickly. AI can help here, as long as it is grounded in context and human oversight. By bringing AI to existing data and tools, rather than migrating everything to another platform, teams can surface the most critical exposures and investigate them in one place.
“EASM must move from static discovery to active decision support. Human-AI collaboration can help analysts understand where risk is changing in real-time and respond before exposure turns into impact. The priority is turning surface awareness into practical, repeatable defense.”
Nick Carroll, Cyber Incident Response Manager at Nightwing
“The threat landscape is undergoing an intense shift, driven by the widespread adoption of artificial intelligence by our adversaries. Nation-state campaigns orchestrated by agentic AI systems are moving at machine speed, shrinking dwell time, and overwhelming manual responses. Organizations will have no choice but to address the automation imperative, shifting to intelligence-driven detection, automated triage, and rapid AI-enabled threat hunting to keep pace.
“This upcoming year will test defenders on two fronts: the immediate challenge of AI-driven automation and the long-tail risk of quantum disruption. Together, they define a year where preparation must outpace innovation. The quantum threat is no longer theoretical, and ‘Harvest Now, Decrypt Later’ activity makes clear that the data most at risk is the data being stolen today. With NIST’s post-quantum standards finalized, 2026 must also be the year leaders move from awareness to action, establishing quantum-readiness teams, building cryptographic inventories, prioritizing long-lived sensitive data, and piloting hybrid classical-PQC deployments.”
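Building a cryptographic inventory, the first concrete step Carroll lists, amounts to cataloging where each algorithm protects long-lived data and flagging the quantum-vulnerable entries first. A minimal sketch, assuming an illustrative record format (the `system`/`algorithm`/`data_lifetime_years` fields are hypothetical, and the 10-year threshold is an example policy, not a NIST requirement):

```python
# Public-key algorithms broken by a large-scale quantum computer (Shor's
# algorithm), versus symmetric/hash primitives that mainly need larger
# parameters (Grover's algorithm).
QUANTUM_BROKEN = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}

def triage_inventory(entries):
    """Split a cryptographic inventory into migrate-now vs monitor buckets.

    Long-lived data protected by quantum-broken public-key algorithms is
    the prime 'Harvest Now, Decrypt Later' target, so it migrates first.
    """
    migrate, monitor = [], []
    for e in entries:
        if e["algorithm"].upper() in QUANTUM_BROKEN and e["data_lifetime_years"] >= 10:
            migrate.append(e)
        else:
            monitor.append(e)
    return migrate, monitor
```

The migrate-now bucket then becomes the candidate list for hybrid classical-PQC pilots.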
Liav Caspi, Co-Founder and CTO at Legit Security
2026 will be the year of the AI-coded breach.
“I believe 2026 will be the year of the AI-coded breach, where some companies will pay the price for shipping a product without fully knowing what’s inside it. Data shows consumers are also unaware of AI’s scale in software, with 78 percent believing the apps they use every day are still mostly built by hand. But 1 in 4 say they’d stop using a favorite app if an AI-related vulnerability caused a breach. It’s a fragile position for the industry to be in. If a single AI-coded breach shakes consumer trust this much, the fallout will be fast. This will fundamentally change how companies build and how consumers decide what to use.”
In 2026, AI Security will become its own discipline.
“In 2026, AI security will split from the old AppSec model and become its own specialty. The way software gets built has changed. It’s faster, more automated, and more complex than ever. Securing it will take new skills, new tools, and new ownership across organizations. The industry can keep applying human-era security to AI-era development or reinvent AppSec entirely. Just as DevOps reshaped IT a decade ago, AI security will reshape how the next generation of software gets built.”
Tim Chase, Field CISO and Principal Technical Evangelist at Orca Security
“The software supply chain will become the primary attack plan, not a side concern: By 2026, attackers will target source code and its open-source components more than any other asset. The new objective isn’t to exploit endpoints but to compromise the software supply chain itself, embedding malicious code where applications are created and deployed. With AI making it easier to replicate exploit patterns and automate code-level probing, we can expect to see more attempts to compromise package managers, CI/CD pipelines, and cloud-hosted source repositories. Most organizations continue to treat this as an auditing problem rather than a security architecture issue. The ones that move now to lock down developer access, enforce dependency trust policies, and continuously verify code integrity will be the ones that avoid being blindsided.”
Anthony Cusimano, Chief Evangelist and Director of Solutions Marketing at Object First
Education and healthcare will face the highest volume of cyber-attacks in 2026.
“In both education and healthcare, one of the greatest cybersecurity vulnerabilities lies in the challenge of integrating legacy systems with modern digital infrastructure. These sectors often operate on a patchwork of technologies, such as mainframes for patient records or student information systems, SaaS platforms for scheduling or learning management, and custom-built tools for diagnostics or administrative tasks that rarely interoperate. This lack of integration creates security silos, inconsistent authentication and logging, and fragmented backup protocols, all of which increase the attack surface.
“Compounding the issue, many institutions still rely on outdated tape backups or under-tested cloud appliances, leading to slow recovery times and compliance risks. As these sectors modernize, the inability to securely bridge old and new systems without introducing complexity or gaps in protection will come to a head in 2026, creating a major cybersecurity concern that bad actors will undoubtedly exploit.”
AI will dominate the conversation at security trade shows in 2026.
“At RSA and similar events in 2026, expect a surge in solutions focused on AI threat detection, data poisoning mitigation, and agentic AI containment. Vendors will showcase tools that go beyond traditional perimeter defenses, highlighting self-healing systems, real-time deepfake detection, and AI-powered attack surface management. There will most likely also be a strong emphasis on resilience technologies like immutable storage, as organizations seek assurance that they can recover from attacks that evade detection. I’m expecting more conversations to center around the AI arms race, with a growing consensus that AI and ransomware resilience is a must.”
Floris Dankaart, Lead Product Manager at NCC Group
On AI
“2025 marked the first large-scale AI-orchestrated cyber espionage campaign, where Anthropic’s Claude was used to infiltrate global targets. Earlier in the year, it was already apparent that tools that can be used for such a campaign were being developed (for example, ‘Villager’). This trend will continue in 2026, and AI’s use as a sword will be followed by an increase in AI’s use as a shield.”
On Ransomware
“In October 2025, Jaguar Land Rover suffered a ransomware attack that forced a global production halt, disrupting supply chains and causing significant operational downtime. This incident exemplifies how ransomware now targets manufacturing environments where IT and OT are deeply interconnected. Attackers combined encryption with data theft and public extortion tactics, pressuring the company to pay while production lines remained idle. The event highlighted the vulnerability of industrial networks and the cascading impact on suppliers and logistics.
“In 2026, this trend will continue, targeting ICS controllers and safety systems to maximize operational and reputational damage. Expect campaigns to leverage AI for adaptive payloads and lateral movement across industrial networks. For defenders, OT (micro) segmentation, anomaly detection for industrial protocols, and offline recovery plans will become non-negotiable as ransomware shifts from data hostage to operational sabotage.”
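Anomaly detection for industrial protocols often starts with a baseline allowlist: during normal operation, each device speaks a narrow, predictable set of protocol functions, and anything outside that set is worth an alert. A minimal sketch against Modbus function codes (the device names and baselines below are illustrative assumptions):

```python
# Per-device allowlists of expected Modbus function codes, learned during
# a baseline period. Codes: 3/4 = read holding/input registers,
# 6 = write single register, 16 = write multiple registers.
BASELINE = {
    "plc-line-1": {3, 4},        # this PLC should only ever be read
    "hmi-station": {3, 4, 6},    # reads plus single-register writes
}

def flag_anomalies(messages):
    """Return (device, function_code) pairs that deviate from baseline,
    e.g., an unexpected bulk write (16) aimed at a read-only PLC, or any
    traffic from a device with no learned baseline at all."""
    alerts = []
    for device, function_code in messages:
        if function_code not in BASELINE.get(device, set()):
            alerts.append((device, function_code))
    return alerts
```

Paired with OT micro-segmentation, this kind of check catches the lateral movement the quote warns about: the write command arriving on a segment where writes never occur.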
Doron Davidson, Managing Director of Global Security Operations and Delivery at CyberProof
Quantum computing will not revolutionize 2026.
“Don’t expect quantum to revolutionize 2026. I expect the real impact will emerge in the next few years, between 2027 and 2029. We’ll see an increase in long-term attacks due to quantum, meaning that attacks happening today will continue to focus on long-term data theft, with adversaries stealing information now in anticipation of future quantum capabilities. They may exfiltrate data in 2026 and only decrypt it years later, once quantum tools become practical. This means stolen information might be exploited long after logs have been deleted, making it far more difficult to trace who stole the data or when it was accessed.”
Shadow IT and rogue applications will continue to increase in an AI-driven world.
“In 2026, we’re going to see an increase in Shadow IT and rogue applications within the AI world, whether it’s users deploying different systems from their phones or elsewhere. As AI tools become more accessible on personal devices, it’s important to remember that many individuals store critical information about their organizations on the same devices. Heading into 2026, there’s a growing risk that data will be shared with unapproved AI systems—which is sure to cause issues, including breach of trust.”
Paul Davis, Field CISO at JFrog
“In 2026, Zero Trust will remain a cornerstone of security, but its implementation will become significantly more complicated, becoming not a replacement for existing controls but an additional burden for CISOs and security teams. The rapid adoption of agentic AI and non-human identities is reshaping the security landscape, introducing unprecedented complexity to access management and threat detection. In fact, machine identities outnumber human identities by a factor of 45 to 1 on average, and in large organizations, non-human identities outnumber human users by 50 to 1. What’s more, these intelligent agents often bypass traditional silos, making it increasingly difficult to enforce granular permissions and isolate access.
“As we move into the new year and beyond, developers and security leaders must contend with environments where access is not just about human credentials, but also about controlling intelligent agents whose permissions are far less transparent. Security leaders must rethink how they verify and monitor every interaction, moving beyond legacy controls to embrace continuous authentication and real-time oversight. This includes embedding security into every phase of the software development lifecycle, leveraging continuous authentication, real-time monitoring, and automated threat detection to address risks that are no longer confined to just human users.
“As these new threats evolve, security leaders must shift to a compliance as code approach in 2026, bringing compliance standards to the business level. This means having the tools and visibility needed to determine and demonstrate whether applications are trustworthy, confirm they meet required criteria, and being able to validate every component within their environment.”
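The “compliance as code” idea above can be made concrete with a small sketch. Every name here is hypothetical, including the rule labels and the SBOM-style fields (`license`, `signature_verified`, `critical_cves`), but it shows the shape: compliance criteria expressed as executable checks evaluated against a component inventory.

```python
# Compliance criteria expressed as executable checks (all names hypothetical)
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

RULES = [
    ("no-critical-cves", lambda c: not c.get("critical_cves")),
    ("approved-license", lambda c: c.get("license") in APPROVED_LICENSES),
    ("signed-artifact", lambda c: c.get("signature_verified") is True),
]

def evaluate(components):
    """Evaluate every component against every rule; the report doubles
    as machine-checkable evidence that required criteria are met."""
    report = {}
    for comp in components:
        failures = [name for name, check in RULES if not check(comp)]
        if failures:
            report[comp["name"]] = failures
    return report

sbom = [
    {"name": "libA", "license": "MIT", "signature_verified": True,
     "critical_cves": []},
    {"name": "libB", "license": "GPL-3.0", "signature_verified": False,
     "critical_cves": ["CVE-2026-0001"]},
]
violations = evaluate(sbom)
# libA is clean; libB fails all three checks
```

Because the rules are code, they can run in CI against every build, producing the per-component validation the quote calls for rather than a point-in-time audit document.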
Trevor Dearing, Director of Critical Infrastructure Solutions at Illumio
Why Legislation Alone Won’t Save Us in 2026
“While regulations and compliance frameworks are essential, there’s a limit to what legal mandates can achieve. There’s a mistaken belief that having more laws or guidelines in place will automatically make organizations safer. However, the real issue is that legislation often establishes only a minimum threshold for cybersecurity – and people treat it as such. What we consistently see is that effective resilience depends on much more than simply ticking boxes or passing audits.
“In 2026, organizations and governments must focus on strengthening critical infrastructure from the inside out and supporting entire communities when disruptions occur. Recent responses to incidents showed that cyber events can ripple far beyond a single company, affecting local businesses and supply chains. In the coming year, building resilience will be about more than achieving compliance – it will be about the practical ability to keep services running for society.”
Tight Margins, High Risks: Why Utilities and Retail will be Prime Targets for Cyberattacks in 2026
“Any sector operating on tight profit margins – such as utilities, retail, and transportation – will be at greater risk of attacks in 2026 because they can’t afford massive spending on cybersecurity. Unlike banks, which can invest significantly in cybersecurity, sectors such as energy and utilities, food retailers, or railway companies must prioritize keeping prices low for consumers, making large cybersecurity investments less likely and more challenging. As a result, these organizations are more cautious with their cybersecurity budgets, often delaying projects or investing only in the bare minimum required to maintain operations. This leaves them highly vulnerable to attacks, as their ability to keep pace with evolving cyber threats lags behind that of better-resourced sectors.
“In the coming year, we can expect attackers to increasingly target industries such as utilities and retail, exploiting their limited cybersecurity budgets and reliance on outdated systems, which will lead to more frequent and disruptive incidents.”
Randall Degges, VP of Developer Relations & AI Engineering at Snyk
The AI Supply Chain Will Outgrow and Break Every Traditional Security Model We Depend On
“The software supply chain isn’t evolving—it’s dissolving. Developers are now generating meaningful portions of application logic directly through AI agents, rather than relying on open-source libraries. AI-generated code fragments lack maintainers, version histories, CVEs, and patch cycles—they are effectively untracked logic.”
Shadow AI Will Become the Biggest Internal Breach Risk & Most Companies Won’t Even See It Happening.
“The most overlooked trend heading into 2026 is the rapid rise of employee-created AI workflows happening entirely outside corporate oversight. This is shadow IT on steroids: decentralized, opaque, and nearly impossible to audit. Sensitive data is being processed and stored in unapproved AI products with no logging, no access control, and no compliance safeguards.”
Avani Desai, CEO at Schellman
Fragmented Rules, Shared Principles
“We are seeing a clear shift toward more prescriptive rules around transparency and accountability, but the path will not be perfectly harmonized. California’s S.B. 53 is setting early expectations in the U.S. for high-compute AI models, requiring disclosures around safety, security, and misuse. In Europe, the EU AI Act and Corporate Sustainability Due Diligence Directive are creating parallel accountability frameworks, one focused on responsible AI, the other on human rights and environmental impact.
“While this could lead to a patchwork of regional rules, the underlying principles are converging. Documentation, explainability, human oversight, and due diligence are emerging as global norms. Over time, global enterprises will build to the highest common denominator, then use independent assurance to demonstrate compliance across jurisdictions consistently and credibly.”
Andrea Schulze Dias, VP and CIO at Toshiba Americas
“In 2026, CIOs and IT leaders should shift their focus to treat quantum risk as a now problem, rather than a futuristic problem. Quantum computing development has advanced enough that every CIO should be actively preparing for the day our current encryption breaks, also known to many in the industry as Q-Day. If your organization isn’t already quantum-proofing its infrastructure, you’re already behind the curve.
“QKD is needed as a defense-in-depth component to solve the cryptography problem: its security rests on the laws of physics rather than computational hardness, so it cannot be broken even with unlimited computing power. Hackers are already saving encrypted data to decrypt later. This makes it urgent for organizations to start using both post-quantum cryptography (PQC) and QKD to stay secure. Additionally, CIOs will need to prepare to lead three major changes:
- Zero trust becomes a non-negotiable: Implicit trust is a liability in a quantum-threat world. Continuous verification will become a baseline rather than a “nice-to-have.”
- AI takes over threat detection: As cybersecurity attacks become more sophisticated, AI will be a resourceful tool to spot advanced threats at scale. Expect heavier investment in AI-driven anomaly detection, automated responses, and predictive modeling.
- Launch a hybrid post-quantum strategy: Quantum threats demand immediate action. While post-quantum cryptography (PQC) standards are emerging, migrating enterprise systems to quantum-safe encryption is a complex process that can take years. By starting now and deploying a hybrid approach that combines PQC and QKD, you position your organization to stay ahead. CIOs must map the location of encryption, identify systems that are ready for upgrades, and uncover technical debt that could hinder adoption.
“Quantum resilience isn’t just a tech refresh; it’s an architectural shift. It impacts governance, vendor choices, data strategy, and long-term continuity planning. Quantum computing is coming faster than most teams expect. The organizations that have a plan that’s ready to activate in 2026 will be the ones still standing when Q-Day arrives.”
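A minimal sketch of the hybrid idea described above: derive the session key from both shared secrets, so the key stays safe if either mechanism fails. This implements the RFC 5869 HKDF extract-and-expand steps directly with Python's standard library; the context labels and secret sources are illustrative.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 extract step: condense input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 expand step: derive `length` bytes of output keying material."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(pqc_secret: bytes, qkd_secret: bytes) -> bytes:
    """Feed BOTH shared secrets into one KDF: the session key stays
    secure as long as either input secret remains uncompromised."""
    prk = hkdf_extract(b"hybrid-pqc-qkd-v1", pqc_secret + qkd_secret)
    return hkdf_expand(prk, b"session-key")

# e.g., one secret from a PQC KEM exchange, one delivered over a QKD link
key = hybrid_session_key(os.urandom(32), os.urandom(32))
assert len(key) == 32
```

The design choice is the concatenation before the KDF: an attacker must break both the PQC scheme and the QKD channel to recover the derived key, which is the defense-in-depth property the prediction argues for.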
Bill Dunnion, Chief Information Security Officer at Mitel
Offensive Security Becomes the Standard for Defense
“The future of cybersecurity lies in thinking like the adversary. Traditional defensive postures—such as firewalls, monitoring, and compliance checklists—are no longer sufficient against threats that move faster and learn continuously. Offensive security practices such as red teaming, threat hunting, and penetration testing will evolve from optional exercises to essential functions of risk management. The guiding principle is simple: what you don’t know can hurt you. Proactively testing systems exposes blind spots before attackers do. The next generation of programs will combine structured frameworks, such as NIST and ISO, with continuous offensive assessments to create dynamic, adaptive defense ecosystems.
“Mature organizations will recognize that compliance does not equal security. Instead, they will integrate continuous testing into their operations, utilizing real-world attack simulations to enhance defenses and quantify risk in business terms. The result is smarter, faster decision-making that results in better protection.”
Security Will Become a Shared Language Across the C-Suite
“The effectiveness of a security program increasingly depends on communication between the CISO and CIO. Security can no longer be confined to technical teams; it must be expressed in business terms. CISOs who frame priorities in terms of risk, value, and impact will gain greater alignment and influence across the enterprise. In an era where digital transformation touches every process, security must operate as a shared language, not a siloed discipline.
“The evolution of security strategy depends on the ability to translate technical risks into actionable insights for executives. Successful organizations must embed security into planning, budgeting, and innovation as a foundation of growth and trust.”
Laura Ellis, VP of Data and AI at Rapid7
From Tool to Teammate: The Rise of the AI SOC
“In 2026, AI within the SOC will mature from a helpful tool into an active collaborator. It will operate as a trusted teammate, managing triage, enrichment, and correlation across massive streams of telemetry. SOC analysts will work alongside AI systems, training and governing them as part of daily operations. The relationship between analyst and model will form a continuous feedback loop where human expertise refines AI performance, and AI accelerates human insight.
“SOC analysts will validate AI logic with the same rigor they apply to system access and identity control. The most effective AI SOCs will not compromise on explainability, ensuring every decision can be traced to clear evidence and transparent reasoning. Accuracy will remain essential, but explainability will define trust. Models that cannot show their logic may lose their license to act.
“Human oversight will be built into SOC operations. AI SOCs will establish clear points where analysts review, confirm, or override automated decisions. Performance will be measured by how efficiently the SOC operates and how confidently humans can trust the reasoning behind each AI-driven action.”
Keatron Evans, VP of Portfolio Product and AI Strategy at Infosec
“Cybersecurity is seeing a fundamental shift in how professionals at every level engage with artificial intelligence (AI). In 2026, we’ll move far beyond using AI as a supplemental tool for occasional queries. Entry-level roles that we’ve traditionally positioned as foundational—such as SOC analysts, security administrators, and even help desk positions with security responsibilities—will rapidly evolve into hybrid, AI-powered roles. Newcomers won’t just need to understand how to craft good prompts. They’ll need to grasp the core architecture of how these systems actually operate. This includes the progression from machine learning to traditional AI, to generative AI, to AI workflows, and, finally, to AI agents, which are already gaining serious traction heading into next year.
“But there’s a darker side driving this urgent need for AI upskilling. Malicious actors aren’t waiting for the industry to catch up. They’re already using AI to accelerate the sophistication of their attacks. Deepfakes, in particular, are moving from proof-of-concept curiosities to front-and-center threats. We’ve already seen AI-generated voices and video used in social engineering attacks that resulted in significant financial losses. In 2026, deepfake-driven fraud is expected to surge as the technology becomes increasingly accessible and convincing. Organizations are not prepared for this new threat landscape, and the awareness gap around AI-powered attacks is dangerously wide.
“The bottom line? 2026 is the year when ‘basic AI literacy’ transforms from a nice-to-have into a baseline requirement. Security professionals who don’t develop deeper AI skills will find themselves outpaced by threats that evolve at machine speed. That means going beyond using AI tools to understanding how they work, how to automate with them, and how adversaries weaponize them. The good news is that the hiring landscape is evolving to match this reality. AI-powered skills verification is finally moving beyond credentials to assess what really matters: can someone actually do the job? This creates a more direct path from demonstrable capability to employment, ensuring organizations get the talent they need while opening doors for professionals who can prove their skills, regardless of their background.”
Michael Fanning, CISO at Splunk
“The skills gap continues to widen—and it’s not static but evolving. Traditional pipelines can’t keep up with how fast security and technology are changing. That means CISOs need to look beyond credentials and focus on potential. Some of the best hires I’ve seen didn’t come from a cybersecurity background, but they brought curiosity, adaptability, and problem-solving skills combined with a fundamental understanding of computing, systems, and evolving technology. We can teach security concepts, but mindset and collaboration can’t be trained as easily. Organizations that invest in developing talent internally by pairing experienced mentors with high-potential learners and cross-training them across different areas of security and technology are the ones that will win in the long term.
“The pressure on CISOs and their teams is real—the stakes are high, and the pace is constant. We continually balance business demands for innovation and product delivery with the reality of defending against sophisticated, complex, and unrelenting threats. That kind of stress can take a toll if not managed intentionally. The CISO can’t be the organization’s psychologist, but can set the tone by normalizing balance, encouraging open dialogue, and making clear that heroics aren’t a badge of honor; rewarding them is an antipattern that needs addressing. Protecting the team’s mental health starts with leadership modeling it. The CISO needs a trusted peer network, board and CEO support, and permission to disconnect–setting the example that taking time for family, friends, or oneself is not just OK but essential, because you can’t secure the business if you’re running on empty.”
Ross Filipek, CISO of Corsica Technologies
Autonomous AI attacks will create a speed gap that human defenders can’t close.
“In 2026, the biggest disruption in cybersecurity won’t be a new exploit. It will be a widening speed gap between attackers and defenders. Agentic AI will behave less like a tool and more like a swarm, scanning for misconfigurations, chaining vulnerabilities, shifting laterally, and launching payloads in seconds. These autonomous systems will hit critical infrastructure the hardest, exploiting the growing overlap of IT and OT environments that were never designed to withstand machine-speed attacks.
“This new reality will expose an uncomfortable truth: by the time a human analyst identifies a breach, an autonomous attacker may have already completed its objectives and compromised sensitive business information. Organizations will need defenses that can operate at the same velocity—continuous validation, automated containment, and AI-driven detection that reacts before attackers finish their sequence. The winners won’t be the teams with the largest SOCs, but the ones willing to let automation take the first move.”
John Fokker, Vice President of Threat Intelligence Strategy at Trellix
“As threat actors accelerate their adoption of AI, we’re approaching a structural inflection point in how cyber-crime is organized and executed. Historically, successful attacks have required a web of specialized human-run services, including reconnaissance, exploit development, phishing copywriters, botnets, and ransomware affiliates, each provided by different actors and stitched together through human collaboration and marketplaces.
“Over the last two years, those individual services have increasingly internalized AI capabilities. The next step is predictable: agentic AI orchestrators will begin to chain AI-enabled services end-to-end, effectively automating the entire kill chain. When an agentic system can discover an exploit, craft and personalize a delivery, bypass detection using adaptive evasion techniques, and then escalate and propagate across a network without human handoffs, the cadence and scale of attacks change fundamentally. We should expect faster, more fluid campaigns that blend precision targeting with automated operational tradecraft, which means defenders must move beyond siloed controls and invest in integrated detection and AI-assisted responses that can spot orchestration behaviors rather than just individual tactics.”
Carl Froggett, Chief Information Officer at Deep Instinct
Atlas, NPUs, and the Rise of On-Device Zero-Day Malware
“The convergence of agentic browsers, like OpenAI’s Atlas, and new neural processing units (NPUs) embedded in modern chips (Apple, AMD, Intel, Qualcomm, etc.) is creating a dangerous shift in the cyber threat landscape. These processors enable local large language models (LLMs) and AI assistants to run efficiently on laptops and mobile devices, allowing attackers to generate and execute weaponized code entirely on the endpoint.
“In Atlas’s case, that could mean a malicious extension or plugin instructing the browser agent to harvest credentials or modify data in-session, without ever touching a command-and-control server. With a single malicious prompt or payload, a threat actor can instruct a local AI model to assemble and deploy malware in real-time, collapsing the traditional kill chain – reconnaissance, weaponization, delivery, exploitation, command and control, and actions on objective. Once the threat actor is inside, perimeter defenses lose their value entirely.
“Traditional defense-in-depth architectures were never designed for this level of local autonomy. Theoretically, attackers could have attempted something similar on conventional CPUs, but the power draw and latency would have made it obvious. NPUs eliminate that friction, making on-device malware creation fast, quiet, and power efficient. Antivirus and network monitoring tools usually overlook the plain-text prompts that trigger malicious code generation, and signature-based detection cannot keep pace with zero-days created locally on demand. The only sustainable path forward is prevention: blocking malicious behavior at delivery, hardening model access on endpoints, and extending identity and data-level controls so AI agents cannot act as invisible insiders. Agentic browsers make this challenge especially urgent. Without a proactive, predictive defense strategy, these Atlas-like browsers and the rise of NPU-enabled devices will mark the beginning of a new era of scalable, stealthy attacks with nation-state sophistication.”
Scott Fulton, Chief Product & Technology Officer (CPTO) at BlueCat
Organizations will ditch fragmented approaches and strive for unified visibility.
“Most IT environments have several observability tools, each focused on a specific view of the network. The result leaves network operations teams with an incomplete picture of the network. Teams want a single, clear view of how systems, networks, and user experiences are connected. To get there, teams need to improve data quality, streamline dashboards and reports, and embrace AI-driven automation. This trend isn’t only driven by tool complexity. Workflows themselves are becoming more integrated. When teams build automations, dashboards, or operational processes, they don’t want to stitch together half a dozen systems to get a complete picture.”
Rotem Cohen Gadol, Co-Founder & CTO at Seemplicity
“In 2026, the organizations that thrive won’t be the ones playing catch-up; they’ll be the ones that anticipate the perfect storm hitting cybersecurity: a surge of AI-driven tools and threats, exploding noise, and tighter budgets that make every decision a high-stakes bet. The volume of risks is skyrocketing while resources are shrinking, and agentic AI will amplify the danger. Agent-to-agent architecture, non-human access to systems, and data far beyond human awareness will expose blind spots that could be instantly exploited.
“Attackers only need to succeed once, while defenders must protect everything. Right now, AI gives attackers the edge since it’s not clear what and how to defend. The organizations that will succeed in 2026 are those that deploy automation strategically to cut through complexity and reduce risk in measurable ways, invest in visibility and data leak prevention by leveraging AI to uncover what humans can’t, and treat AI as a force multiplier rather than a magic bullet. This means training teams, building trust, and understanding its limits before it becomes a blind spot. Those who embrace these approaches won’t just survive, they’ll redefine what ‘secure’ looks like in an AI-driven world.”
George Gerchow, Chief Security Officer at Bedrockdata.ai and IANS Researcher
“AI-generated data sprawl will trigger the first major breach from forgotten ‘data exhaust.’ The tipping point for data sprawl is already here, lurking in lower-level development environments such as QA sandboxes and integration systems. In 2026, we’ll see the first major breach directly attributed to AI-generated ‘data exhaust’ that nobody inventoried: a forgotten vector database or prompt log from an abandoned pilot, left open with customer data or secrets exposed. Organizations are multiplying derivative data faster than they can track it. The solution requires treating AI exhaust as Tier 1 data with mandatory lineage tags and time-to-live (TTL) policies at write, implementing 30-60 day default retention in lower environments, restricting access with short-lived credentials, and purging orphaned artifacts monthly. Visibility must be designed in from the start, or organizations will lose control entirely.”
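The TTL-and-lineage discipline described here can be sketched as a simple inventory model. The field names and the 45-day default are illustrative assumptions, chosen to sit inside the 30-60 day retention window the quote recommends for lower environments.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Artifact:
    """An AI-derived artifact (vector index, prompt log, eval dump)
    tagged with lineage and a TTL at write time."""
    name: str
    source: str          # lineage: which pipeline or pilot produced it
    environment: str     # "prod", "qa", "sandbox", ...
    created: datetime
    ttl: timedelta

DEFAULT_LOWER_ENV_TTL = timedelta(days=45)  # inside the 30-60 day window

def expired(a: Artifact, now: datetime) -> bool:
    return now - a.created > a.ttl

def purge(inventory, now=None):
    """Split the inventory into (kept, purged) so orphaned exhaust
    is swept on a monthly schedule instead of accumulating forever."""
    now = now or datetime.now(timezone.utc)
    kept = [a for a in inventory if not expired(a, now)]
    purged = [a for a in inventory if expired(a, now)]
    return kept, purged

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
inventory = [
    Artifact("qa-vector-index", "support-bot-pilot", "qa",
             created=now - timedelta(days=90), ttl=DEFAULT_LOWER_ENV_TTL),
    Artifact("prompt-log-2025-12", "search-eval", "sandbox",
             created=now - timedelta(days=10), ttl=DEFAULT_LOWER_ENV_TTL),
]
kept, purged = purge(inventory, now=now)
# the 90-day-old pilot index is purged; the 10-day-old log is kept
```

The key move is stamping lineage and TTL at write time: an artifact whose pilot was abandoned still carries enough metadata to be found and swept.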
Dan Graves, Chief Product Officer at WitnessAI
Human-in-the-Loop Safety Mechanisms Will Fail Due to “Alert Fatigue” and “YOLO Mode.”
“In 2026, the ‘human-in-the-loop’ safety mechanism that many organizations are relying on to control AI agents will largely fail due to approval fatigue. Companies implementing agents will initially require human approval for every action, asking users to click ‘approve’ before the agent deletes files, modifies code, or accesses systems. However, users will quickly be bombarded with thousands of permission requests daily, leading them to mindlessly click through approvals or enable ‘auto-approve’ features to avoid constant interruptions.
“We saw this happen with security alert fatigue, where users became desensitized to warnings and began automatically dismissing them. The agents themselves will offer ‘YOLO mode’—you only live once—settings that bypass approval requirements entirely, and overwhelmed users will gratefully accept. What starts as a safety mechanism designed to maintain human oversight will evolve into a checkbox exercise that provides false comfort but no real protection. Organizations will discover too late that their carefully designed human-in-the-loop controls were defeated not by sophisticated attacks, but by the simple human tendency to streamline annoying workflows. Agents will be operating with minimal supervision despite policies suggesting otherwise.”
AI Agents will Become Internal Threat Vectors, reminiscent of a “Manchurian Agent.”
“The year 2026 will witness the first major security breach caused by an AI agent operating with legitimate human credentials being exploited by external attackers. In this ‘Manchurian agent’ scenario, autonomous agents living inside corporate networks can be activated or manipulated by hackers to cause unprecedented damage.
“Unlike traditional cyber-attacks that require penetrating network perimeters, these compromised agents already possess the keys to the kingdom. They operate with the over-provisioned credentials of the employees they represent, wielding permissions that were never designed for autonomous systems. When a hacker takes control of an agent acting on behalf of a senior executive, that agent can take down core retail sites, disable banking systems, or demand millions in ransom—all while appearing to be legitimate internal activity. The speed and scale of potential damage will be unlike anything enterprises have faced before because existing security controls were never designed to distinguish between a human employee and their compromised agent.”
Michael Gray, Chief Technology Officer at Thrive
The AI Bubble Bursts
“The artificial intelligence sector is heading for a sharp correction: less of a bubble burst and more of a tidying-up. Companies will need to take a step back to understand where they are in their AI deployment and master what they have before they scale.
“The surge of infrastructure startups racing to build foundation models will give way to mass consolidation as major players absorb or outlast them. By contrast, domain-focused AI—tools built for specific industries, such as healthcare, legal, and cybersecurity—will thrive by delivering tangible, measurable, and real-world results. The winners of this next phase won’t be those with the largest models, but those that prove the clearest value. Consolidation will usher in a more disciplined market, where substance and specificity finally outweigh hype.”
Scott Gregory, CISO at Sonar
Reversing Alert Hyperscale to Restore Accountability.
“The single biggest security implication for developers that comes from AI is alert fatigue at hyper-scale. Developers are now responsible for reviewing a larger volume of code, and the sheer number of potential issues, both subtle and complex, can be overwhelming.
“CISOs should view the problem through the eyes of developers to understand the challenges they face. With the increase in alerts, we’re also seeing a shift in developer accountability. As an industry, we must understand that AI doesn’t reduce personal accountability; it reframes it. Accountability with AI shifts from ‘Did you write this code well?’ to ‘Did you validate this code effectively?’ Committing AI-generated code is the same as committing your own. The developer must remain responsible for what they merge. Scaling with AI simply demands automation to make those reviews fast, effective, and manageable. That requires CI/CD pipelines that leverage trusted tooling to verify code at scale.”
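One way to picture the automation this calls for is a severity-threshold merge gate over scanner findings. The finding shape and rule names below are hypothetical; a real pipeline would feed this from whatever trusted tooling it runs, but the gating logic is the same.

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Fail the merge when any finding meets or exceeds the threshold.
    Returns (passed, blocking_findings) for the pipeline to act on."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

# findings as a scanner step might emit them (shape is hypothetical)
findings = [
    {"id": "S1", "severity": "medium", "rule": "hardcoded-string"},
    {"id": "S2", "severity": "critical", "rule": "sql-injection"},
]
passed, blocking = gate(findings)
# passed is False: the critical finding blocks the merge regardless of volume
```

A gate like this answers the accountability question mechanically: the developer still owns what they merge, but the pipeline guarantees the validation step actually happened at scale.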
Elyse Gunn, Chief Information Security Officer at Nasuni
The Casino Approach to Risk and Competitive Advantage.
“Organizations that truly embrace risk in 2026 as part of their wider operating strategy will strengthen their cybersecurity and boost their competitive advantage. By taking a scientific approach to understanding, measuring, and monitoring risk—right across a business, from competitive strategy to security posture—cybersecurity will become an asset, delivering wider benefits and a stronger market position.
“Take the example of the casino industry: casinos are in the business of managing and monitoring risk all day long, in every aspect of operations, from the buffets to the card tables, yet there’s the understood concept that ‘the house always wins.’ The secret lies in the casino industry’s risk thinking, which involves taking a systematic approach to measuring the benefits and costs of risk, so they know exactly when to deploy resources across their operations to achieve a high financial reward, and more importantly, exactly where they should not.
“This year, CISOs must apply this rigorous casino thinking to cybersecurity operations to strengthen their organization’s ability to innovate, when external risks are increasing, but funding resources may be limited or declining. For example, while the benefits of dedicated security system access for the company’s C-level team may carry high costs, risk profiling shows that enhanced company-wide security access controls can maximize risk reduction across the business, while also unleashing more individual contributors’ productivity. It’s the CISO’s responsibility to ensure that in these operating areas where there’s a very clear value-add to the wider organization, these game-changing opportunities don’t go begging.”
Ysrael Gurt, Co-founder and CTO at Reflectiz
In 2026, AI’s inexperience will cause a big resurgence of old security risks that haven’t been prominent for years.
“Vibe coding exploded in 2025, but AI is not prioritizing code safety. The ultimate goal of vibe coding tools is to build applications that are functional, not applications that are safe and secure. As more enterprises encourage non-coders to build niche tools and website modules via AI, we’ll see a rise in breaches caused by easy-to-avoid risks that seasoned programmers and security teams stamped out years ago. Neither vibe coders nor the AI that’s building their code is thinking about these well-known risks while prompting and generating – and even seasoned developers can overlook these issues if they are not reading and checking the produced code. If 2025 was the year of ‘AI can do anything,’ reality will set in next year: AI is extraordinarily naive and makes decisions without considering security risk the way a human can.”
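SQL injection is a textbook example of the easy-to-avoid risks in question, and one that code generators can still reproduce when asked only for something that works. A minimal demonstration with Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

user_input = "x' OR '1'='1"  # classic injection payload

# VULNERABLE: string interpolation turns attacker input into SQL
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(leaked))  # 2 -- the payload matched every row

# SAFE: a parameterized query treats the input as data, not SQL
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 -- no user is literally named "x' OR '1'='1"
```

The fix has been standard practice for two decades, which is exactly the point: the risk only resurfaces when neither the prompter nor the model is checking for it.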
Next year, ‘naive AI’ will wreak security havoc via agentic use cases, causing new worlds of opportunities for hackers.
“We trust AI agents infinitely more than we should—and AI agents trust the information they’re given a lot more than they should, and lack context to understand what is suspicious. We tend to throw as much data as possible at AI agents to get the best possible result, but the more data they have, the wider the attack surface and resulting exposure, because a smart attacker can outsmart an AI agent. Humans have common sense that helps them recognize misleading information, but AI will take a website at its word when it calls itself ‘the safest place on earth.’ As use of agentic AI becomes common in the enterprise, users will supply more information to agents, who will in turn make more decisions based on increasingly complicated data, thus expanding the potential for misunderstanding and widening the attack surface.”
Matt Hartman, Chief Strategy Officer at Merlin Group
“There’s no question that cryptanalytically relevant quantum computers pose a serious threat to global data security. While this technology remains on the horizon, organizations must begin preparing now. Nation-state adversaries, most notably the People’s Republic of China, have already begun operating under a ‘harvest now, decrypt later’ mindset. Simply put, the specific timeline for quantum is immaterial–organizations’ sensitive data is at risk today. This is not only a data security imperative, but also a national and economic security imperative that could shape the balance of global power in the digital age.
“As artificial intelligence and automation accelerate innovation, they also amplify the speed and scale of cyber threats, making secure and resilient cryptographic systems essential. And while it may feel overwhelming to security teams who are knee-deep in addressing ‘today’s problems,’ there are steps that must be taken today. Organizations should begin by conducting an automated inventory and discovery scan to gain full visibility into their cryptographic landscape, ensuring awareness of privileged assets, addressing current weaknesses, and guiding a well-informed transition to PQC. Transitioning to PQC isn’t a technical luxury; it’s an urgent business, economic, and national security priority. Organizations that act early will lead in the era of quantum resilience, while those who delay may find themselves attempting to defend the undefendable.”
Brad Hibbert, COO & CSO at Brinqa
2026 will be the year exposure management shifts from reactive reporting to truly agentic intelligence.
“Security teams continue to struggle with incomplete data, conflicting sources, cloud sprawl, nonstop change, and an overwhelming volume of security signals. Most programs are still built on inconsistent or unreliable datasets, which quietly erode prioritization, slow remediation, and prevent meaningful risk reduction. The limitations of manual processes and static tooling have become impossible to ignore, and organizations now recognize that the foundational problems of data quality, noise, and complexity cannot be solved at the human scale.
“In response, demand for agentic AI will accelerate. Organizations no longer need more dashboards or scanners but AI that actively participates in the exposure management lifecycle. Agentic AI can reduce noise by interpreting context, identifying and filling data gaps before they stall workflows, reconciling fragmented intelligence, maintaining continuously trustworthy data, and simplifying complex cross-team remediation tasks. This is not a preference but a necessity, as the volume and variability of exposure data now exceed the capacity of traditional processes to maintain accuracy or momentum.
“By the end of 2026, the most successful programs will rely on agentic AI as the engine of trustworthy and continuous exposure intelligence. Prioritization will become clearer, routing will be more reliable, remediation will be faster, and analytics will be more actionable because the underlying data will finally be complete and consistent. This marks the transition from exposure visibility to exposure intelligence, and ultimately toward exposure autonomy. The market is not only ready for this evolution–it is demanding it.”
David Higgins, Senior Director of the Field Technology Office at CyberArk
Misinformation, AI, and the Erosion of Critical Thinking
“Widespread use of AI-driven information tools will make it easier than ever to find answers – but at the cost of eroding critical thinking skills. As people increasingly rely on AI to provide instant responses, the imperative to evaluate sources and exercise judgment will diminish. This creates fertile ground for social engineering and misinformation campaigns, as malicious actors can flood the internet with false narratives that AI systems may inadvertently amplify. The risk is not just technical, but societal: as users become less accustomed to questioning the validity of information, organizations will face new challenges in defending against manipulation and accidental breaches caused by misplaced trust in AI-generated content.”
Economic Hardship Will Drive a Surge in Opportunistic Insider Threats
“In 2026, the insider threat will shift from disgruntled employees to staff tempted by direct financial incentives offered by cyber-criminal groups. As cost-of-living and economic pressures mount, threat actors will offer substantial bounties for legitimate access credentials, especially for high-value organizational targets, shifting the insider threat from a matter of personal grievance to one of opportunistic gain. The traditional view of the ‘malicious insider’ as a lone, disgruntled actor is being replaced by a more complex reality: financially motivated insiders, sometimes acting in concert with organized cyber-crime and nation-states, will become a primary risk vector for breaches.”
Matt Hillary, SVP of Security and CISO at Drata
Shadow AI must be confronted.
“In 2026, shadow AI won’t just be a nuisance. Expect more discovered and disclosed instances where shadow AI is traced back to trust-impacting incidents. Just as shadow IT reshaped the risk landscape a decade ago, employees today are already turning to unsanctioned AI tools, models, and agents to accelerate their work. This trend will only grow as pressure mounts to move faster, do more, and be more productive. The result will be sprawling risks: potential data leaks, noncompliance, privacy implications, security blind spots, unanticipated actions taken by AI agents ultimately attributed to the accountable human, and blurred lines of accountability when AI goes wrong. Companies will need to fundamentally rethink their governance, visibility, and culture to stay ahead. Shadow AI is not a side issue. It’s the next frontier of enterprise chaos, and only those who prepare now will keep these risks from becoming reality.”
AI will write (and break) compliance programs.
“Next year we’ll see something wild: AI systems drafting, updating, and mapping entire control frameworks and risk registers–while other AIs are simultaneously probing those same frameworks and registers for weaknesses faster than any auditor ever could. The compliance battlefield is about to become AI vs. AI. The promise is efficiency: instant control mappings, auto-generated documentation, and real-time evidence and risk updates. The risk is existential: malicious models can find control gaps, manipulate policies, or fabricate deepfake attestations that appear perfectly legitimate. The next wave of breaches won’t start with a human mistake – they’ll start with a machine misunderstanding. The smart move? Build ‘AI assurance’ into GRC programs now. That means validation, explainability, and synthetic data risk monitoring baked into every layer. If compliance is about trust, then AI assurance will be the new trust currency. Whoever masters it first will define the rules of the game.”
The CISO as the new “Chief Trust Officer.”
“In the coming year, the CISO will have officially outgrown the traditional ‘protector’ role and stepped into something larger: the Chief Trust Officer of the enterprise. Their job won’t stop at defending against threats or maintaining compliance – it will expand to proving trust as a measurable, revenue-driving asset. Forward-looking CISOs will sit shoulder-to-shoulder with CEOs, quantifying how their programs fuel growth, build credibility, and win deals. They’ll reshape the perception of security and GRC from a cost center into a competitive differentiator. In a market where customers demand transparency and regulators demand accountability, the CISO won’t just be a guardian of systems; they’ll be the architect of trust itself – and that trust will become the most valuable asset a company can utilize. If you’re a CISO, start claiming that turf before others do. Trust is the evolution of security and GRC, not the replacement.”
Nadir Izrael, CTO and Co-Founder of Armis
“Over the past year, we’ve witnessed an unprecedented acceleration in the sophistication of cyber threats. AI has moved from being a tool in the defender’s arsenal to a weapon in the attacker’s. Nation-states and organized cyber-criminal groups are now deploying AI to discover zero-days, launch automated exploitation chains, and mimic human behavior at a scale and speed we’ve never seen before. The rise of AI-powered malware and state-sponsored chaos is no longer a prediction—it’s our reality.
“For 2026, the key challenge is clear: we must build security systems that don’t just react but anticipate. Traditional controls and reactive defenses are not enough. What’s required now is continuous, intelligent proactive protection that can adapt in real-time, spanning IT, OT, IoT, and medical devices across physical, cloud, and code environments.”
Brad Jones, Chief Information Security Officer at Snowflake
Cyber Agents Will Become Weapons in the Next Wave of Cybercrime.
“The cybersecurity arms race has always been defined by the constant push and pull between attackers and defenders, but the rise of AI agents capable of researching, devising, and executing attacks will tip the balance in alarming ways. By 2026, agentic cyber-crime will become a frontline problem with defenders facing a new class of adversary.
“One of the biggest risks associated with AI agents is prompt injection—adversaries tricking systems into bypassing guardrails—and hallucinations that generate false or misleading outputs. We can expect to see agents that will look at code, find a vulnerability, and custom-build exploit kits to exfiltrate data and deploy ransomware. We’ll also see cases of AI fabricating sales documents or security claims with no basis in reality, putting companies at risk of legal penalties. But this is only the beginning. The real inflection point will come when agents stop simply imitating attackers and begin creating entirely new strategies—and that’s when defenders will be facing a whole new level of trouble.”
Cyber-criminals Will Harness Dark AI to Scale Attacks.
“While today’s foundational models are built with guardrails, malicious actors are already deploying uncensored versions like FraudGPT and WormGPT to generate phishing campaigns, malicious code, and social engineering attacks. In many ways, this represents the dark side of open-source, as malicious actors exploit open-source models like GPT-J-6B without the ethical guardrails typically found in commercial systems. AI-enhanced tools will rapidly become part of the supply chain, fueling cybercrime-as-a-service. This underground economy will no longer rely on individual attackers, but on global businesses that package and sell the infrastructure of cyber-crime—complete with subscription tiers, customer support, and regular updates. As these offerings mature, even the most advanced and expensive AI models will inevitably be weaponized.”
AI Tools Will Close Cybersecurity’s Long-Standing Talent Shortage.
“For all the risks AI introduces, it also carries real promise for defenders. Generative and agentic AI will begin to provide security operations centers with the scale they’ve been missing. The most persistent problem for CISOs has been the shortage of skilled analysts. Security talent is hard to find and hard to retain. Instead of replacing human expertise, advanced AI will fill the security roles that have gone unstaffed for years, augmenting analysts and leveling the playing field. Within the next three years, AI agents will provide the force multiplier needed to finally tip the balance back toward defenders.”
Sohrob Kazerounian, Distinguished AI Researcher, Vectra AI
2026 AI-Powered Attack Techniques
“Attackers are still not at the point where they will trust AI to run end-to-end autonomous attacks in critical scenarios, but that doesn’t mean it isn’t actively being explored. End-to-end attacks will begin to occur, though most high-profile hacks will only make use of LLMs in highly guard-railed scenarios in order to prevent detection. Remember: attackers are early adopters! And they aren’t restricted by legal departments concerned with Intellectual Property (IP), Personally Identifiable Information (PII), and data governance. They are exploring uses of AI in their operations at rapid scale.”
TK Keanini, CTO at DNSFilter
AI will become the ultimate cyber adversary.
“In 2026, AI will be a force multiplier that enhances old threats while creating new ones. Classic attacks like phishing will be perfected, and we will see the rise of flawless, deepfake-powered phishing and AI that chains minor bugs into major breaches at machine speed. This will give way to autonomous attacks, in which an adversary states an objective and an AI agent achieves it by rewriting its own code to bypass our defenses in real-time. Our only response is to fight fire with fire. We must accelerate our shift to AI-driven behavioral detection, a strict Zero Trust architecture, and phishing-resistant identity controls as our last line of defense.”
Security culture will replace security policy.
“Employees will no longer be passive participants in cybersecurity. Every individual will be expected to act as a proactive defender—always in a state of readiness. Cybersecurity will shift from being a compliance checkbox to a shared mission. The most secure organizations will be those where employees see themselves as partners to the cybersecurity team, not just policy followers, creating a culture of performance-driven defense.”
Authenticity becomes the new pillar of security.
“We’re about to see the death of ‘seeing is believing,’ because deepfakes will soon detonate trust itself. As AI blurs the line between truth and fabrication, attackers will weaponize human psychology (including urgency, authority, and social proof) through flawless fake voices and faces. The classic CIA triad of ‘Confidentiality, Integrity, Availability’ can no longer hold the line. In 2026, authenticity will emerge as cybersecurity’s fourth pillar, defining the next era of digital trust.”
Doug Kersten, Chief Information Security Officer (CISO) at Appfire, and Member of the Board of Advisors at SurePeople
Security in 2026 Will Depend on Clarity and Accountability
“In 2026, cybersecurity will depend on how well organizations understand their own environments. The biggest risks will not come from new forms of AI, but from weak visibility, disconnected systems and processes, and poor accountability between teams. New risks from AI will be less impactful than expected. Instead, AI will amplify existing risks, making strong adherence to security best practices more important than ever.
“Many companies still treat security as a technical problem instead of a business function. That approach no longer works. Most incidents begin when teams move ahead with new tools, vendors, changes, or AI systems without coordination across IT, security, legal, and procurement. The result is confusion about what data is stored and used, who has access to it, and how it is protected. The organizations that lead will bring discipline and balance to these basics. They will track the existing systems and data, confirm who owns them, and ensure that every team is aware of the rules for using and securing data. Security will become part of daily operations, not a separate function that reacts when something goes wrong. AI will become a key component in amplifying information security’s ability to collaborate and act as an effective business partner to ensure trust and success. The companies that ensure clarity of purpose, connectedness, and accountability will be the ones that stay secure.”
John Kindervag, Chief Evangelist at Illumio
CEOs, not CISOs, will be held accountable for cybersecurity failures.
“In 2026, executive accountability for cyber incidents will finally land where it belongs: in the boardroom. For too long, CISOs have taken the fall for breaches they could not prevent because they lacked authority, resources, or budget. They get blamed while the people who make the real financial decisions walk away untouched. That era is ending. The CISO role is actually one of an advisor. Most security leaders cannot sign checks, set strategic priorities, or force compliance. They can only warn executives of the risks, and too often those warnings go unheeded until after the damage occurs. CEOs, on the other hand, own the business risk. They approve budgets, set incentives, and decide whether to invest in prevention or accept exposure. If a company is breached because leadership underfunds security or ignores Zero Trust principles, that is not the CISO’s fault. It is a leadership failure.
“We will see performance contracts and compensation structures that tie executive pay to measurable cybersecurity outcomes. That accountability will prompt CEOs to issue ‘executive orders’ within their organizations, making it clear that cybersecurity is not optional, that Zero Trust is the standard, and that prevention without containment is insufficient. Only when the people at the top start feeling the consequences will cybersecurity mature from a cost center into what it truly is: the structural engineering of modern business.”
Alex Kreilein, Vice President of Product Security & Public Sector Solutions at Qualys
“The Trump administration’s approach will likely emphasize deregulation and private sector-led solutions over prescriptive federal mandates. We’ll probably see a few outcomes:”
Pressure on existing frameworks.
“FedRAMP and CMMC will face scrutiny around cost and compliance burden. Expect attempts to ‘streamline’ or consolidate these programs. There is likely to be short-term uncertainty for vendors in authorization pipelines and the cottage industry of consultants, auditors, and SaaS products that support those markets.”
CISA’s evolving role.
“The agency may shift from regulatory expansion toward more voluntary collaboration with critical infrastructure. The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) implementation could be softened or delayed. It’s also likely that much of CISA’s private-sector and state-facing programs will be gutted. For sure, there will be no more support for election security initiatives.”
Digvijay Lamba, Chief Product and Technology Officer at OneTrust
Accountability apathy: Prioritizing AI responsibility becomes paramount
“Governance isn’t a checkpoint anymore; it’s a circuit breaker built into the pipeline. In 2026, accountability-in-the-loop will be the standard for high-risk AI, making approvals and audit trails as integral as code commits.”
Mark Lambert, Chief Product Officer at ArmorCode
AI Exposure Management Becomes a Core Security Discipline.
“Organizations face a growing challenge of both uncovering where AI is embedded across their systems and de-risking AI-generated code. Traditional AppSec tools cannot detect vulnerabilities like prompt injection or model poisoning, leaving critical blind spots. In 2026, using AI exposure management to map usage and correlate AI-specific risks with traditional findings will become essential. Companies that cannot answer where AI lives in their stack will face uncomfortable questions from boards and regulators alike.”
The Cyber Resilience Act Will Blindside Global Software Makers.
“Many companies still do not realize that the CRA applies to them, yet compliance deadlines are rapidly approaching in 2027. Any organization selling software into the EU must soon prove continuous vulnerability management, maintain SBOMs, and be able to meet rapid disclosure timelines. In 2026, the rush to comply will expose massive visibility gaps in software inventories and supply chains. AI-generated code will add another twist, forcing organizations to prove provenance and control they currently lack.”
Security Shifts from Risk Posture to Exposure Prioritization.
“Security teams are moving beyond static vulnerability lists to focus on dynamic exposure management. Understanding exploitability against their attack surface, business impact, and real-time context now matters more than raw CVSS scores. Next year, the winning vendors will be those that correlate findings across AppSec, CloudSec, AI security, and the supply chain, and prioritize correcting the greatest exposure points.”
Lavi Lazarovitz, Head of CyberArk Labs
Threat actors continue to focus on post-auth techniques.
“In 2026, attackers will increasingly focus on stealing the digital ‘keys’ that grant access to sensitive systems and data. For human users, this means targeting browser cookies – small pieces of data that keep users logged in to websites and applications. By stealing cookies, attackers can hijack active sessions, bypassing passwords and multi-factor authentication entirely. This technique is especially dangerous as it allows attackers to impersonate users without triggering traditional security alerts.
“For services and machine identities, the equivalent targets are API keys and access tokens. These credentials are used by software, bots, and automated systems to authenticate and communicate securely. If an attacker obtains an API key or access token, they can gain unauthorized access to critical systems, manipulate data, or disrupt operations – often without immediate detection.
“We see this through a ‘credentials-stealing prism,’ whereby the evolving mindset of threat actors means that, rather than breaking through technical defenses, they increasingly seek to steal the very credentials that grant legitimate access. This shift means that both human and machine identities are at risk, and organizations must prioritize the protection, monitoring, and rapid revocation of these authentication materials to prevent breaches and limit damage.”
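Editor’s note: the “protection, monitoring, and rapid revocation” Lazarovitz calls for can be made concrete with a small sketch. The following is an illustrative, hypothetical example (the `TokenStore` class, its TTL, and method names are our own assumptions, not any vendor’s API) of the two mitigations that most directly limit the damage of a stolen cookie or API token: short lifetimes and immediate revocation.

```python
import secrets
import time


class TokenStore:
    """Illustrative sketch: short-lived tokens with rapid revocation.

    A stolen session cookie or API token is only useful to an attacker
    while it remains valid, so the defensive levers are (1) keeping the
    time-to-live short and (2) supporting immediate revocation.
    """

    def __init__(self, ttl_seconds: int = 900):  # assume 15-minute sessions
        self.ttl = ttl_seconds
        self._tokens: dict[str, float] = {}  # token -> expiry timestamp
        self._revoked: set[str] = set()

    def issue(self) -> str:
        # Generate an unguessable token and record its expiry.
        token = secrets.token_urlsafe(32)
        self._tokens[token] = time.time() + self.ttl
        return token

    def revoke(self, token: str) -> None:
        # Rapid revocation: a stolen token becomes useless immediately.
        self._revoked.add(token)

    def is_valid(self, token: str) -> bool:
        # Reject unknown, revoked, or expired tokens.
        expiry = self._tokens.get(token)
        if expiry is None or token in self._revoked:
            return False
        return time.time() < expiry
```

In practice, the same idea appears in production systems as short-lived OAuth access tokens paired with server-side revocation lists; the sketch above only shows the shape of the control, not a hardened implementation.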
Chad E. LeMaire, CISO at ExtraHop
AI Will Be Integrated in the SOC, and Guardrails Will Become a Cybersecurity Imperative.
“After a year of rapid experimentation, security leaders have realized that the biggest challenge with AI integration isn’t capability but control. Many expected AI to be fully integrated in the SOC by now, but rushing to deploy created more risks than efficiencies. While 2026 may very well be the year agentic AI is integrated into the SOC, a lack of clear governance, oversight, and testing frameworks will inadvertently expand its attack surface rather than reduce it. In 2026, AI will no longer be a pilot project; it will be embedded in every layer of the digital enterprise. With new standards like ISO 42001 emerging, it’s clear that AI must be treated as part of overall cyber risk, not a standalone innovation.
“Every AI system interacts with your network, your data, and your people, which means it falls squarely within the CISO’s domain. As we enter the new year, CISOs who recognize this shift and take ownership of AI as a security imperative will lead the way. They’ll move beyond enablement to enforcement, prioritizing ethical testing, continuous validation, and adversarial simulation to ensure AI strengthens rather than undermines defense. The next year of cybersecurity won’t favor those with the most advanced AI models, but those with the smartest and most secure guardrails around them.”
From Smash-and-Grab to Strategic Infiltration: The New Era of Ransomware Tactics
“The continuum of ransomware tactics will range from quick, opportunistic ‘smash-and-grab’ attacks to calculated, strategic operations—this will make post-compromise detection, granular visibility, and threat intelligence a priority in 2026. While the overall number of incidents may be decreasing, the sophistication and financial impact of each attack are rising sharply. Groups like LockBit are spending more time moving laterally within networks before deploying malware, maximizing leverage and potential ransom demands. Nation-states, on the other hand, are dwelling quietly inside systems, waiting for the most advantageous moment to strike—often combining data theft, extortion, and operational disruption.
“This evolution underscores a shift from volume-based attacks to high-value, precision strikes. Defenders must therefore broaden their defenses beyond endpoint protection to include comprehensive visibility and rapid detection of lateral movement. As attackers grow more patient and methodical, proactive threat hunting and intelligence-driven defense will be essential to counter the next wave of ransomware campaigns.”
April Lenhard, Principal Product Manager at Qualys
Attack-path modeling and risk-prioritized operations will gain traction in 2026.
“2026 is the year attack-path modeling grows up, and the year CTEM gets sidelined by the Risk Operations Center (ROC). Attack paths will transition from static graphs to digital cyber ranges, powering red teaming and real-time ‘what-if’ or ‘now-what’ simulations. Wargaming has ignored the cyber element for a long time, so cybersecurity will instead start incorporating wargame elements on a larger scale. Secondly, we will start to see a wider industry shift from counting assets to risk-prioritized operations, where informed triage eliminates noise, saves resources, and focuses teams on what truly matters when it matters.
“Federal cyber policy will push for resilience in AI, quantum, and national security, led by the private sector. Despite polarization in other domains, cybersecurity remains one of the few policy arenas with strong bipartisan alignment, especially around nation-state threats and the crucial need for more robust national resilience. That consensus will continue to anchor and drive 2026 federal cyber policy. At the same time, the government pulling back from sustained open dialogue creates a vacuum, so private-sector leaders and academia will have greater influence over priorities, norms, standards, and best practices than before. Evergreen needs like rapid incident reporting, information sharing, and system modernization will remain constant; new to the scene will be the convergence of AI, quantum, and cyber policy within legislation, with each becoming more intertwined with both national security and economic competition.”
Dan Lohrmann, Keynote Speaker, Author, and Field CISO with Presidio
“By 2030, there will be a major worldwide online disruption caused by a cybersecurity incident that will impact businesses, financial markets, and governments well beyond the scale of the software outages we’ve seen over the past couple of years. However, ransomware resilience is already improving. We’re seeing faster responses, stronger backups, and C-suite involvement, but attackers are improving just as quickly. I hope that by the time an attack on this scale takes place, companies and governments will have prepared and be ready to respond and recover.”
Pete Luban, Field CISO at AttackIQ
Supply chains will become the #1 access point for adversaries.
“When the SolarWinds attack hit in 2020, few realized it would mark the start of a new era in cyber risk. Five years later, the ripple effects are everywhere. Attackers have learned a simple truth: why break into 1,000 companies when you can hit one trusted provider and reach them all?
“In 2026, that playbook will reach its peak. Adversaries are turning their focus to the glue that holds modern business together, from software dependencies to service providers and integration platforms that connect entire ecosystems. One compromise in that chain can expose thousands of organizations overnight. This will be the year companies either gain true visibility into their supply chains or keep learning the hard way what blind trust costs. Annual questionnaires won’t cut it anymore. The lesson will be simple: you can’t secure what you can’t see.”
Cybercrime will go corporate.
“Adversarial groups will operate more like legitimate enterprises than underground networks in 2026. We’re already seeing Scattered Spider, RomCom, and the Lazarus Group establish corporate-level structures, complete with R&D cycles that treat each breach as a learning opportunity. They’re iterating, refining, and returning with better tactics. Add AI-powered reconnaissance and attack automation, and you’ve got adversaries who can adapt faster than most defenders can patch.
“The scary part isn’t just their sophistication, it’s their efficiency. The organizations that keep up won’t necessarily be the ones spending the most, but the ones treating threat intelligence as a business function. The winners will learn from every incident, operationalize those lessons quickly, and out-innovate attackers at their own game. In 2026, cyber-crime becomes industrialized, and survival depends on learning faster than the enemy.”
Shahar Man, CEO and Co-Founder of Backslash Security
A Major MCP Breach Will Redefine the AI Coding Threat Model.
“In 2026, the first large-scale breach originating from an MCP (Model Context Protocol) server will occur. A backdoor or supply chain poisoning attack will quietly embed malicious code into enterprise environments, spreading through AI-driven development workflows before anyone detects it. This will happen because innovation in AI coding is outpacing the security models designed to contain it, and existing guardrails were designed for human developers, not autonomous systems that can write, modify, and deploy code independently. When this breach comes to light, it will expose how deeply enterprises have trusted these agents without sufficient oversight. The result will be a new era of governance where organizations treat MCPs as code contributors that must be monitored and verified like any other developer.”
AppSec Will Evolve From Static Defense to AI-Aware Security.
“Traditional AppSec remains an important, if unsung, part of enterprise security strategies. However, traditional AppSec tools, such as SAST and SCA, are not comprehensive enough to defend against new threats caused by AI-generated code. These tools were built to detect known vulnerabilities and code patterns, but were not meant to understand how or why code was produced. With AI systems introducing new risks and dynamically generating or altering code, static scanners can’t keep pace with their evolving behavior. So, as vibe coding reshapes how software is written and modified, security will need to understand not just what AI-generated code does, but also how it came to be. The next generation of AppSec will merge traditional controls with continuous, AI-aware context, securing both human and machine-written code in real-time.”
Derek Manky, Chief Security Strategist & Global VP of Threat Intelligence at Fortinet
The AI Arms Race in 2026
“AI is accelerating the tempo of cyber conflict. Offensive models are already identifying and exploiting weaknesses in defensive systems faster than human analysts can respond. The result is a continuous feedback loop of adaptation between the attacker and the defender. Detection, containment, and mitigation must increasingly be automated, as a human-led response alone cannot match the speed of machines.”
GenAI will accelerate data monetization and extortion.
“GenAI will become more central to post-compromise operations. Once attackers gain access to large datasets (through infiltration or by purchasing access on the dark web), AI tools will analyze and correlate massive volumes of data in minutes, pinpointing the most valuable assets for extortion or resale. These capabilities will enable adversaries to identify critical data, prioritize victims, and generate tailored extortion messages at scale. By automating these steps, attackers can quickly transform stolen data into actionable intelligence, increasing efficiency and profitability. For defenders, this trend underscores the importance of integrating SecOps capabilities, such as NDR, EDR, and CTEM, to detect unusual data movement and flag early signs of AI-assisted extortion before damage escalates.”
David Matalon, CEO and Founder at Venn
The Next Endpoint Revolution: Privacy Will Replace Control.
“In 2026, endpoint security will undergo a philosophical shift from control and surveillance to privacy and transparency. This will require architectural changes in endpoint security that focus on the containerization of sensitive assets versus full device control, largely driven by the need to enable extended workforces, including contractors, consultants, and offshore teams. Traditional approaches that lock down entire devices will increasingly clash with employee expectations for personal privacy, especially as BYOD and hybrid work become the norm.
“Organizations that prioritize securing data rather than monitoring devices will not only reduce risk but also foster trust, flexibility, and productivity. Regulators will increasingly treat employee privacy as a core component of compliance, making privacy-preserving security strategies a business imperative. Companies that embrace this shift will be better positioned to attract and retain top talent while protecting critical information in a distributed, modern workforce.”
Remote Work Technology Will Reveal RTO Mandates as Smoke and Mirrors.
“The productivity and security tools that made remote work seamless and secure have removed any doubt: remote teams are thriving. According to an October 2025 article by Inc., ‘83 percent of companies with remote-friendly work policies report high staff productivity.’ Leaders can no longer hide behind the notion that employees must be in an office to produce results – the data already tells the story. As remote work technology evolves, the illusion that physical presence equals performance will continue to fade, exposing RTO policies for what they really are: smoke and mirrors.
“In 2026, the corporate push to bring everyone back to the office will hit a breaking point. Productivity metrics, attrition rates, and employee sentiment will make it clear that rigid five-day RTO mandates are out of step with modern work. At least one major Fortune 100 company will be forced to publicly reverse its mandate after seeing top talent leave and output decline. Flexibility is no longer a perk: it’s a prerequisite. The future of work will favor organizations that trust their people and secure how work happens, not where.”
Chaim Mazal, Chief Security Officer at Gigamon
Visibility is the New Perimeter for Cybersecurity
“As organizations accelerate AI and cloud adoption, the traditional concept of a cybersecurity perimeter has all but vanished. What will define our industry in 2026 is complete visibility: understanding exactly what’s running across your systems, how it’s configured, and where data is moving. As architectures grow more complex and distributed across public cloud, private cloud, and containers, the blind spots multiply, and that’s where risk escalates. True resilience won’t come from adding more tools or agents. It will come from achieving real-time observability through network-derived telemetry and APIs, so teams can detect anomalies before they escalate. You can’t defend what you can’t see, and in this new era, visibility isn’t just a capability; it’s the foundation of trust.”
Speed Over Safety is the Next Cyber Battleground.
“Every tech revolution starts the same way: speed wins over safety – until it doesn’t. With the flood of new AI capabilities coming to market, from Aardvark’s adaptive agents to multimodal systems like Sora and Atlas, we’re entering a phase where it’s increasingly difficult to tell what’s real and what’s fabricated. Deepfakes, synthetic code, and autonomous threat actors will blur the line between benign automation and weaponized AI. These advancements are empowering attackers to breach networks faster than defenders can respond, proving once again that prevention as a primary strategy is dead. In 2026, cybersecurity headlines won’t just chronicle new threats; they’ll expose how innovation outpaced risk management. The organizations that will endure aren’t the fastest adopters – they’re the ones disciplined enough to pause, assess their exposure, and strategically build guardrails around their AI and cloud deployments to survive the next wave of attacks.”
Shachar Menashe, VP of Security Research at JFrog
The “Vibe Coding” debt crisis will force a return to human-led, AI-assisted security rigour.
“The normalization of ‘vibe coding’ will lead to a quality crisis in 2026, introducing a flood of lower-quality code with hidden vulnerabilities. While AI excels at identifying low-hanging fruit vulnerabilities, it still struggles with deep bugs, as evidenced by recent failures to autonomously exploit complex vulnerabilities like React2Shell without human guidance.
“Organizations that rely solely on AI-based detection will face a new class of incidents where sophisticated logic flaws slip through the cracks. We predict a split in threat hunting: AI will automate a large amount of routine detection, but the ‘deep bugs,’ some of which will be created by AI-generated code, will require human-led security research. Leadership will need to enforce stricter reviews to ensure that the speed of AI generation does not outpace human verification.”
Jimmy Mesta, Co-founder and Chief Technology Officer at RAD Security
2026 will be the year CVSS scores and “severity” stop meaning anything.
“In 2026, AI systems begin generating and discovering vulnerabilities far faster than researchers can validate them. A single LLM can produce hundreds of ‘plausible’ exploit paths for an existing CVE, only a handful of which work in real environments. Another set of models starts identifying structural weaknesses that look dangerous but fail under real-world constraints. Security teams receive feeds full of ‘critical’ items that have never been exploited and never will be.
“Meanwhile, minor, low-profile misconfigurations become the starting point for AI-generated exploit chains that never appear in any CVE list. The volume overwhelms the old scoring system. CVSS rates theoretical impact, not actual likelihood, and AI shifts the entire landscape from known issues toward synthetic possibilities. By the end of 2026, teams will stop treating severity as a prioritization signal. They focus on runtime behavior, blast radius, identity context, and real exploit paths, because AI made ‘critical’ too noisy to matter.”
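The prioritization shift Mesta describes – ranking findings by runtime behavior, blast radius, and identity context rather than CVSS alone – can be sketched in a few lines. The weights, field names, and sample CVE labels below are illustrative assumptions for this article, not any standard scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                # theoretical severity, 0-10
    running: bool              # is the vulnerable code actually loaded?
    internet_exposed: bool     # reachable from outside the perimeter?
    privileged_identity: bool  # does the workload hold broad credentials?
    blast_radius: int          # count of reachable downstream services

def priority(f: Finding) -> float:
    """Rank findings by real exploit-path context, not raw CVSS.
    Weights are illustrative; tune them to your environment."""
    if not f.running:
        return 0.0  # never loaded at runtime -> no actual exploit path
    score = 1.0
    score += 3.0 if f.internet_exposed else 0.0
    score += 2.0 if f.privileged_identity else 0.0
    score += min(f.blast_radius, 10) * 0.5
    # CVSS contributes only as a tie-breaker
    return score + f.cvss / 10.0

findings = [
    Finding("CVE-A", cvss=9.8, running=False, internet_exposed=True,
            privileged_identity=True, blast_radius=8),
    Finding("CVE-B", cvss=5.3, running=True, internet_exposed=True,
            privileged_identity=False, blast_radius=4),
]
ranked = sorted(findings, key=priority, reverse=True)
```

Under this sketch, the theoretical 9.8 “critical” that never runs ranks below the modest 5.3 that sits on a live, internet-facing exploit path – exactly the inversion of CVSS-based triage the prediction anticipates.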
Sandra McLeod, Chief Information Security Officer at Zoom
AI will shrink time-to-exploit to hours, forcing CISOs into a hyper-patching era.
“In 2026, AI will accelerate zero-day weaponization from weeks or days down to mere hours. This will push CISOs into a new era of hyper-patching, where reducing patch cycle times and building architectures that support rapid, continuous updates become non-negotiable. Teams that can’t move faster will face significantly higher exposure as attackers automate exploitation at unprecedented speed.”
AI Will Automate Early-Stage Security Work—While PQC Readiness Begins With Cryptography Discovery.
“In 2026, AI will begin to autonomously handle repetitive, checklist-driven tasks, especially in tier-1 SOC support and vulnerability triage. To trust these AI agents, organizations will need simpler ways to build them and set guardrails without requiring deep AI development expertise. At the same time, CISOs will need to accelerate their readiness for post-quantum security. The foundational first step for every organization is to catalog where cryptography is used across their environment, allowing them to properly assess PQC risk and prioritize migration efforts.”
Dave Merkel, CEO and Co-Founder at Expel
“I don’t think we’ll see AI eliminating existing cyber jobs. We may see fewer open positions as companies navigate AI usage for efficiency, but you can bet those open positions will be filled by people who have cyber + AI skillsets. The adversary is still a human being. That human being will figure out how defenders are using AI and exploit those behaviors to their advantage. That will require a human defender to identify and counter.”
Stephen Morrow, Chief Solution Officer at AirMDR
CISOs will gain a formal seat at the enterprise AI budgeting table.
“In 2026, security line items will begin shifting into company-wide AI programs—not just the CISO cost center—as organizations recognize that cyber readiness will be fundamental to AI-driven growth, customer trust, and operational efficiency.
“Board communication will also become a core CISO requirement and a budget line. CISOs will increasingly invest in executive-level communications training as boards expect them to defend AI initiatives, articulate risk in business terms, and secure multi-year funding with greater precision.”
Tool sprawl will give way to accelerated platform consolidation.
“Enterprises will move aggressively to rationalize overlapping stacks, especially across SOC tooling. The push for stronger ROI and more reliable AI-driven interoperability will drive security teams toward unified platforms over one-off tools.”
Hybrid Human + AI will become the default SOC operating model.
“By 2026, AI will handle triage, enrichment, and case drafting in minutes, while analysts set policies and make final decisions. This ‘human-guided autonomy’ model will become standard as organizations seek to scale SOC impact without proportional headcount growth.”
John Morris, Chief Executive Officer at Ocient
Snowballing data loads drive a surge in cybersecurity investment.
“As enterprise data volumes continue their exponential climb fueled by AI, IoT, and real-time analytics, cybersecurity risks are reaching a critical inflection point. In 2025, organizations became more vocal about the threats posed by their sprawling, high-velocity data environments, yet many still have significant infrastructure investments to make if they’re to properly monitor, analyze, and secure growing data streams in real time.
“This will change in 2026. The sheer volume and complexity of structured and unstructured data that companies are expected to manage and secure will push cybersecurity into the forefront of enterprise business strategy. As data sprawl becomes increasingly difficult to track and protect, and threats emerge with every new system integration, cybersecurity strategies will evolve from perimeter defense to data-native protection embedded directly into data management architectures.
“Companies will begin treating cybersecurity as a core data competency, not just an IT concern. In 2026, protecting the data means protecting the business. As the data load snowballs, so too will the urgency to secure it at scale.”
Corey Nachreiner, CISO at WatchGuard Technologies
The Fall of Traditional VPN and Remote Access Tools Will Lead to the Rise of Zero Trust Network Architecture (ZTNA).
“Traditional Virtual Private Networks (VPNs) and remote access tools are among the top targets for attackers due to the loss, theft, and reuse of credentials, combined with the common lack of multi-factor authentication (MFA). It doesn’t matter how secure VPNs are from a technical perspective; if an attacker can log in as one of your trusted users, the VPN becomes a backdoor giving them access to all your resources by default.
“At least one-third of 2026 breaches will be due to weaknesses and misconfigurations in legacy remote access and VPN tools. Threat actors have specifically targeted VPN access ports over the past two years, either stealing users’ credentials or exploiting vulnerabilities in specific VPN products. As a result, 2026 will also be the year when SMBs begin to operationalize ZTNA tools, which remove the need to expose a potentially vulnerable VPN port to the internet. The ZTNA provider takes ownership of securing the service through their cloud platform, and ZTNA does not give every user access to every internal network. Rather, it allows you to grant individual user groups access to only the internal services they need to perform their jobs, thereby limiting the potential damage.”
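The per-group access model described above – each user group sees only the internal services it needs, with everything else denied by default – can be illustrated with a minimal policy check. The group and service names are hypothetical, and a real ZTNA broker would evaluate policies like this in its cloud control plane rather than in application code.

```python
# Minimal sketch of a ZTNA-style least-privilege policy: each user group
# is granted only the internal services it needs, and everything else is
# denied by default (contrast with a VPN's all-or-nothing network access).
POLICY = {
    "finance":     {"erp.internal", "payroll.internal"},
    "engineering": {"git.internal", "ci.internal"},
    "support":     {"crm.internal"},
}

def is_allowed(group: str, service: str) -> bool:
    """Default-deny: access exists only if explicitly granted."""
    return service in POLICY.get(group, set())
```

With this shape, a compromised finance credential yields `erp.internal` and `payroll.internal` at most – no lateral reach into engineering systems, and an unknown group gets nothing – which is the damage-limiting property the prediction credits to ZTNA.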
Jessica Newman, Global GM of Cyber Insurance at Sophos
The Soft Market Rewards Visibility, Not Guesswork
“In a soft cyber insurance market, carriers will reward visibility more than guesswork. With more insurers competing, buyers may have leverage, but underwriting is rapidly shifting from self-reported checkboxes to hard, technical telemetry. In 2026, insurers will prioritize real proof of performance on security controls like managed detection and response (MDR), endpoint protection, identity security, network security, vulnerability management, email security, and MFA. Insurers are craving telemetry and will prioritize data on metrics like patch latency and detection/response speed over simple attestation. Organizations that can demonstrate real-time visibility into these controls will secure the best insurance coverages and terms, while those relying on promises will find that the softness of the market does not apply as much to them.”
MDR Proves Its Worth in the Insurance Equation
“In 2026, MDR will become a strategic lever for insurability, business continuity, and clear ROI. It will be viewed not only as a security investment but as a quantifiable source of risk reduction in cyber insurance underwriting. Insurers are increasingly recognizing that organizations with 24/7 detection, threat hunting, and rapid response capabilities experience fewer severe losses, and they will reward this maturity with better premiums and broader coverage. AI-driven MDR capabilities will improve accuracy and outcome reporting, combining automation with human expertise to deliver evidence that boards understand and insurers trust. MDR telemetry will provide hard proof of resilience, from full endpoint coverage to rapid containment. As carriers see the financial and operational impact, MDR will solidify its place as both a defensive safeguard and a business asset.”
When Cyber Insurance Starts Watching Continuously
“Cyber insurance is moving toward real-time risk intelligence, where continuous telemetry replaces static annual questionnaires. In 2026, underwriting will become dynamic—policy conditions and pricing will adjust based on live security performance rather than assumptions. As telemetry standards mature, continuous data exchange will become the norm across the industry.”
Recovery and Resilience: Addressing the Dual Challenge of AI-Driven Attacks and Expanded Digital Surfaces
“AI significantly accelerates the pace of attacks and expands the attack surface that malicious actors leverage. There is an increased urgency for CISOs to adopt an ‘assume breach’ mindset and prioritize ensuring data integrity and recovery. When an attack occurs, the time to get a business up and running is the critical metric. However, in 2026, the new imperative is to ensure data integrity and the ability to recover to a verified, clean point quickly. AI tools can rapidly generate malware and exploit known vulnerabilities. Organizations must pivot to recovery strategies that utilize integrity validation and isolated ‘cyber vaults.’ These recovery strategies will guarantee the restored environment is free of malicious code, making robust recovery engines a necessity, not a convenience.”
The Great AI Sprawl
“The proliferation of AI agents is creating the ‘great AI sprawl,’ forcing IT and security teams to reconcile rapid deployment with system control. The dynamic will necessitate a governance renaissance in 2026, along with immediate, focused investment to bring agents into production safely and at scale.
“To achieve production-grade agent deployment, organizations must rapidly implement monitoring and governance controls to ensure visibility into which applications or data agents are accessing and that they adhere to corporate policies. Inevitably, agents will make mistakes, and organizations will need remediation strategies in place. Organizations will need to overhaul their current IT and security workforce management. In 2026, heavy investment in robust security and governance systems will be essential to monitor, control, and remediate agent output.”
Greg Notch, Chief Security Officer at Expel
“The attackers have the easier part of the game, always have, always will. It’s asymmetrical warfare with no reasonable path in the private sector for attribution or enforcement of consequences on threat actors. I don’t think 2026 will be any different than previous years. I actually don’t buy the thesis that there aren’t enough available defenders. There are, it’s just that businesses and other entities are either unwilling or unable to make the investments required.
“AI won’t fix this. It might make parts of the security pie more economical to implement, but it will come at a different cost (i.e., the expertise to properly implement it). AI will enable scale for attackers, but its sophistication isn’t necessary in many cases because very simple technical and human-based threats still work. Why use an expensive AI agent when you can just bribe a low-level employee for access or send the CFO a phishing email?
“Looking further out, and at the higher end of attacker behavior, I believe we’ll start to see automated weaponization of so-called ‘1day’ or patch diffing vulnerability exploitation. A bunch of promising and scary research around AI-driven development of PoC code from patch diffs means we’ll soon see attackers building software exploits faster than companies can apply patches. This research also significantly lowers the expertise needed to build the exploits.”
Lisa Owings, Chief Privacy Officer at Zoom
Transparency will become the new currency of trust in AI.
“By 2026, transparency won’t just be a compliance requirement; it will be the biggest differentiator among AI-driven companies. As AI capabilities become more complex and omnipresent, customers will reward companies that can clearly explain how their AI systems work, what data they use, and why they make certain decisions. Companies that embed transparency into their products and practices will build deeper, more lasting trust with their customers. Plain and simple, those that say what they do, and do what they say, will win customer trust.”
Agentic AI will deepen the meaning of consent — one micro-decision at a time.
“As agentic AI evolves to handle increasingly complex actions on behalf of users, from scheduling meetings to drafting communications, consent will take on new complexity. The question won’t just be, ‘Do I authorize AI to accomplish this task for me?’ but ‘What decisions have users authorized AI to make as it accomplishes the task?’ Companies that master the complexity of providing the right balance between user control and AI autonomy will earn the greatest trust.”
Everything old is new again in AI Regulation.
“Regulators expect AI to meet long-standing requirements around consumer protection, data governance, transparency, and data minimization. With the power of AI increasing exponentially, applying privacy requirements to the AI world is simple in concept, challenging in execution unless it is included by design. In 2026, we’ll see a shift toward greater alignment between regulators and companies that proactively embed privacy and accountability into their AI systems.”
Cynthia Overby, Director of Strategic Security Solutions, zCOE at Rocket Software
Global regulations tighten and expand.
“Emerging cybersecurity regulations will focus on key areas, including mandatory software bills of materials, ‘secure by design’ principles, and enhanced incident reporting requirements. The EU’s Cyber Resilience Act (CRA) is a key example that will likely have the biggest impact by demanding strict transparency and knowledge sharing, increased protection of data, both at rest and in flight, and greater resilience against ransomware attacks.”
CISOs become key drivers of business growth.
“CEOs must better understand the role of the Chief Information Security Officer and redefine it into a strategic decision-making role, including board-level reporting, to align security initiatives with business growth. This reframes cybersecurity from a defensive cost center to a business enabler that protects brand reputation and shareholder value.”
Dan Pagel, CEO of Brinqa
“AI is about to force a reckoning in cybersecurity. Not because it’s ‘smart,’ but because it’s fast–and when speed is paired with transparency, it exposes everything. It exposes bad data, broken processes, and the teams still trying to run modern programs with 2016 playbooks.
“For years, we’ve thrown more scanners, more dashboards, more tickets at the problem. Yet, every enterprise I speak with is sitting on millions of vulnerabilities, thousands of assets, and almost no real context. The math has stopped working. Humans can’t dig out of that hole, and AI isn’t going to magically save anyone who hasn’t fixed the foundation. The real shift in 2026 will be AI stepping into the operator role. AI that fills in missing attributes, validates signals, connects dots, and tells teams what actually matters. And here’s the part most folks won’t admit: this only works if the underlying data is structured, complete, and continuously updated.
“Teams that get this right will move faster than anything we’ve seen. Teams that don’t will fall further behind than they realize. 2026 is the year cybersecurity stops being a tooling problem and becomes a data and decisioning problem. And that’s long overdue.”
Raja Patel, Chief Product Officer at Sophos
Securing Microsoft Environments Becomes Mission-Critical
“With nearly four million organizations using Microsoft 365, securing Microsoft environments will become a defining line between resilient organizations and those that remain exposed. As attackers increasingly target Entra ID, Microsoft 365, endpoints, and cloud workloads as a single, interconnected attack surface, point defenses will fail to keep pace. Security teams will be forced to move beyond isolated tools and adopt unified visibility across identity, endpoint, email, and cloud activity. Organizations that can correlate Microsoft telemetry in real time and respond with speed and context will blunt modern attacks, while those relying on default configurations and fragmented controls will continue to absorb avoidable risk.”
The Rise of Workspace Security
“Workspace security will become a foundational approach to enterprise defense. As work continues to span devices, identities, applications, and locations, attackers will increasingly exploit gaps between endpoint, identity, email, and SaaS controls. Organizations that treat the workspace as a single, unified security domain, protecting users, data, and access wherever work occurs, will reduce both the likelihood and impact of breaches. Those that rely on fragmented tools will struggle to detect abuse that appears legitimate, missing the early signals that now originate in the workspace.”
Cybersecurity Strategy Delivered as a Solution
“The gap between organizations with a CISO and those without one will become unsustainable. Most organizations will never have the budget or scale for a full-time security executive, yet they will face the same attacks, regulations, and business risks. The winners will be the vendors that embed strategic security guidance directly into their platforms, turning telemetry into priorities, decisions, and outcomes. Instead of dashboards and alerts, organizations will demand clarity: what matters most, what to fix first, and why. Cybersecurity will evolve from tools that generate data to systems that deliver CISO-level judgment at scale.”
Data Will Matter More Than Features in the Cybersecurity Platform Wars
“The effectiveness of cybersecurity platforms will increasingly be determined by the quality of their data, not the quantity of their features. As attacks grow faster and more adaptive, only vendors with true big-data foundations—delivering the four V’s: Volume, Velocity, Variety, and Veracity—will be able to keep pace. Massive telemetry volume enables broad coverage, velocity enables real-time response, variety provides context across attack surfaces, and veracity ensures trust in every signal. Vendors with high-fidelity threat intelligence at this scale will out-learn adversaries, detect emerging techniques sooner, and defend customers against advanced, constantly evolving threats.”
Audian Paxson, Principal Technical Strategist at IRONSCALES
From APIs to Agents: The Next Security Blind Spot Will Cost Organizations Dearly in 2026.
“Today, employees are building AI workflows that connect their CRM, email, collaboration platforms, and financial tools. Sure, these workflows work great, and they automate tedious tasks, but IT has zero visibility. No inventory of what’s connected, no monitoring of data flows, no controls over what these AI agents can access or do. It’s shadow IT with reasoning capabilities – not good. When cloud adoption exploded, API security emerged as a category to solve the visibility problem. AI tool adoption is following a similar pattern, albeit at a faster pace, with higher stakes. These aren’t just data connectors; they’re agents making decisions about how information flows between systems.
“In 2026, IT leaders will finally acknowledge they need far more visibility into what AI tools are running, what data they can access, and what they’re actually doing with it. Today, most have none of that. Security teams are so focused on whether AI will replace analysts (it won’t) that they’re missing the operational blind spot being created right now.”
Human Insights in Cybersecurity Will Be the Premium Again.
“We’ve reached a point where, as soon as something feels AI-generated, people tune out. Everyone assumes security blog posts were written by ChatGPT, threat intelligence was auto-generated, and analyst explanations are just AI regurgitating patterns. This trust erosion hits security vendors hard because we have to prove there’s genuine human expertise and judgment behind the analysis, not just algorithms. Human insights are becoming the premium again. We need to shift toward human-in-the-loop models where analysts validate what AI flags, improve the machine learning through real-world feedback, and provide the QA layer that keeps automated systems honest. Proving you have humans in the loop (not just AI running autonomously) will become a competitive differentiator.”
The Great Expectation Reset in 2026.
“Security leaders in 2025 were promised that autonomous AI would solve their staffing problems, eliminate alert fatigue, and run their SOC overnight. That didn’t happen. Now there’s disillusionment and skepticism that we have to work through. The vendors who over-promised are creating skepticism that hurts the entire industry. We need to reset expectations in 2026 to what AI can actually deliver (which is still extremely valuable, just not magical).”
Threat Actors Will Migrate to Self-Hosted AI Models as Commercial Providers Tighten Controls.
“Everything we’ve seen documented about AI-powered cyber-crime (the North Korean IT workers, the ransomware operators, the fraud ecosystems) all depended on hosted AI services that could be monitored, detected, and shut down. That oversight is disappearing. When capable but unsecured models like DeepSeek become available for self-hosting with zero safety guardrails, criminals get unlimited AI assistance with zero detection risk. The last line of defense (provider-level monitoring) evaporates. The same dependency-driven operations we’re seeing now will continue, just invisible to defenders.”
Steve Petryschuk, VP of Product & Market Strategy at Auvik
“As networks grow more complex and distributed across cloud, edge, and on-prem environments, our manual configuration and troubleshooting workflows simply won’t scale. AI-driven network automation has been emerging as a ‘nice to have’ and will shift to become critical for network operations teams. Arming admins with AI-driven network automation to scale their networking knowledge will drive more reliable, efficient, and resilient networks.”
Asdrúbal Pichardo, CEO at Squalify
Cyber insurance will enter the age of quantified risk.
“The volume of cyber insurance policies will increase by a double-digit percentage, driven by growing exposure associated with AI and a risk landscape increasingly shaped by mega-losses. Insurers are recalibrating underwriting models as AI accelerates both the scale and speed of potential breaches. Enterprises that quantify cyber risk in financial terms will gain leverage in negotiations, while those relying on qualitative assessments will face higher premiums and coverage gaps. The focus is shifting from coverage to provable resilience.”
Cyber resilience must be centralized—or it will fail.
“Multinational corporations are recognizing the necessity of centralizing cyber resilience management across their subsidiaries as regulations and threats become increasingly difficult to contain within individual entities. Fragmented governance models will collapse under the weight of overlapping mandates and complex supply chains. Centralized visibility into risk posture, incident data, and financial exposure is becoming the new baseline for compliance and continuity. In 2026, resilience will be measured not by localized response plans but by an organization’s ability to unify, quantify, and govern risk across its global footprint.”
Ariel Pisetzky, Chief Information Officer at CyberArk
The Law of Unintended Consequences Will Dominate the AI Conversation.
“In 2026, the law of unintended consequences will be a defining theme in organizational cybersecurity. The indeterministic nature of AI agents, the proliferation of machine identities, blurred lines of accountability, and the tension between velocity and security will necessitate that organizations innovate not just in technology, but also in governance and risk management. Success will depend on the ability to anticipate and mitigate risks that emerge from the rapid evolution of AI – ensuring that the pursuit of productivity and innovation does not come at the expense of security and trust.
“Organizations will confront a surge in autonomous AI agents, each with unique permissions, which will multiply identity security complexity and escalate privilege risks. Accountability will blur as AI agents make decisions with minimal human oversight, complicating responsibility for security breaches and operational failures. Attackers will exploit automated systems, embedding malicious prompts to trigger unintended actions that bypass traditional controls.
“The relentless drive for productivity will see AI agents rapidly deployed, often outpacing the development of robust security guardrails—leading to new vulnerabilities, data leaks, and operational disruptions. Traditional privileged access management will prove inadequate; organizations must adopt continuous monitoring and adaptive risk frameworks to safeguard privacy, security, and operational integrity. Even as the principle of least privilege holds, the law of unintended consequences will prevail, necessitating innovation in governance and risk management to ensure that the pursuit of velocity and automation does not compromise trust and security.”
George Prichici, Vice President Of Products at OPSWAT
Files Are Evolving — Security Isn’t
“Security teams remain focused on productivity files such as Office documents and PDFs, in which embedded hyperlinks and encrypted content continue to pose real risks. But this focus can leave blind spots elsewhere. Today’s ‘file’ workflows increasingly include potentially malicious Python scripts and npm packages—many of which slip past traditional content inspection tools. Attackers are aware of this gap and are actively exploiting it.”
AI Is Expanding Attack Surfaces
“The enterprise rush to deploy LLMs is outpacing governance. Chatbots and AI copilots now handle sensitive data with little oversight. The result? A widening attack surface that includes data leakage, brand impersonation, and adversarial probing—a mix of insider risks and external threat actor delivery that will require stronger policy and technical controls.”
Mike Puglia, General Manager of Security Products at Kaseya
Expect major incidents by mid-2026.
“The industry will rally and overcome them. But here’s the paradox: while security threats escalate, AI is eliminating entry-level tech jobs. New graduates face a career ladder missing its bottom rungs. When the crisis hits, will we have enough defenders who know how to fight it? Attacks on SaaS infrastructure are exploding, and threat actors have shifted from targeting individual companies to the platforms powering entire ecosystems. Crack one widely-deployed firewall, and you’ve exposed one-eighth of the world’s networks.
“The real danger? Microsoft, Amazon, and Google control the backbone of global computing. A low-level breach in any of these could cascade into economic catastrophe. 2026’s lesson may be that cybersecurity’s biggest vulnerability isn’t technology – it’s concentrated infrastructure risk and a disappearing talent pipeline.”
Jason Rebholz, Advisory CISO at Expel
“We need to avoid falling into the trap of thinking that 2026 will be the year AI is battling AI. The current battle for defenders is getting the security basics right, because offensive-focused AI will hunt for every misconfiguration, amplifying the risk those gaps create. Far too many companies will continue to struggle to quickly patch an external device while attackers further automate existing attack playbooks to take advantage of that gap. In 2026, we won’t have more front doors to our systems and data. We’ll just have more attackers who can jiggle the handle to see if it’s unlocked, faster than we’ve seen in the past.”
Paul Reid, VP of Adversary Research at AttackIQ
Adversary supergroups will turn cyber-crime into a global franchise.
“In 2026, scattered crews will formalize into high-impact alliances, pooling tradecraft and partial intel to assemble complete intrusion playbooks. What looked harmless in isolation will become lethal in combination as crews fuse access methods, monetization paths, and victim insights. The result will be fewer, bigger, faster campaigns with higher success rates.
“Defenders will need to mirror this collaboration with real intelligence sharing and cross-team fusion centers. Watching a single signal will not be enough. Correlating ‘small’ anomalies across sources and environments will become the difference between catching a thread early and facing a full-scale breach.”
AI agents will become the most dangerous employees on your network.
“The rush to deploy autonomous agents, internal LLM endpoints, and convenience layers over data will expand access paths that attackers can quietly exploit. Shadow LLMs, poorly gated agent actions, and exposed orchestration servers will invite abuse, turning promptable automation into a live pivot point inside the network. The new battleground will not be the endpoint. It will be the agents you stood up to help your endpoints.
“Security teams will respond by gating agent permissions, enforcing explicit guardrails on tool use, and continuously testing agent behavior against real adversary techniques. If an agent can read it, write it, move it, or pay it, it must be validated like any other high-risk system before an attacker does it first.”
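A minimal sketch of the permission gating Reid describes, in which an agent’s ability to “read it, write it, move it, or pay it” is checked against an explicit allow-list before any tool call runs. All names and policies here are illustrative assumptions, not part of any vendor’s product:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Explicit, per-agent allow-list of tools the agent may invoke."""
    agent_id: str
    allowed_tools: set = field(default_factory=set)

class PermissionGate:
    def __init__(self):
        self.policies = {}

    def register(self, policy: AgentPolicy):
        self.policies[policy.agent_id] = policy

    def authorize(self, agent_id: str, tool: str) -> bool:
        """Deny by default: unknown agents and unlisted tools are refused."""
        policy = self.policies.get(agent_id)
        return policy is not None and tool in policy.allowed_tools

gate = PermissionGate()
gate.register(AgentPolicy("triage-bot", {"read_ticket", "add_comment"}))

assert gate.authorize("triage-bot", "read_ticket")       # explicitly granted
assert not gate.authorize("triage-bot", "wire_payment")  # high-risk action refused
assert not gate.authorize("unknown-agent", "read_ticket")
```

The design choice worth noting is deny-by-default: an agent that was never registered, or a tool that was never granted, fails closed rather than open.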
Cassius Rhue, VP of Customer Experience at SIOS Technology
Cybersecurity Will Redefine the Role of High Availability.
“The rising wave of cybersecurity threats is transforming how enterprises view HA clustering. In 2026, HA will not only be about achieving 99.99 percent uptime—it will also serve as a vital tool for maintaining security resilience. More organizations will use HA clusters to enable rapid, low-risk patching and updates, ensuring systems remain both highly available and protected against emerging threats.”
Frédéric Rivain, CTO at Dashlane
“As organizations increasingly deploy AI agents to handle tasks from customer service to code generation, threat actors are licking their chops as these autonomous systems are prime targets for cyber-attacks. Unlike traditional applications, AI agents have broad access to data, can make decisions without human oversight, and operate across multiple systems simultaneously, making them both valuable and vulnerable – a losing combo. In 2026, AI agents are going to come under attack. It’s up to security teams to address critical gaps, including zero-trust architectures extended to non-human identities and credential management for AI agents interacting with internal systems.”
Ashley Rose, CEO & Co-founder of Living Security
Security Training Will Go Personal, and Compliance Will Shrink to a Checkbox.
“Security training for compliance isn’t going away, but it will be acknowledged for what it is: an audit checkbox. The real shift will be everything beyond compliance becoming risk-based. I predict the demise of the one-size-fits-all annual security training. Instead, organizations will adopt tools that understand each employee’s behavior, reduce the time spent on required training for those who don’t need it, and meet people where they are to actually improve their security hygiene. Behavioral risk intelligence will replace generic awareness programs, providing security teams with a clear picture of who needs support before an incident occurs.”
CISOs Will Be Judged on Risk Reduction, Not Checkboxes.
“Boards are tired of hearing about training completion rates and phishing click metrics. Moving forward, CISOs will be measured on real outcomes: fewer risky users, better identity hygiene, tighter access, and faster response times. We’re going to see security leaders shift from ‘Did we train them?’ to ‘Did we reduce the risk?’ It’s a subtle shift, but it changes everything.”
Melissa Ruzzi, Director of AI at AppOmni
“True AGI (Artificial General Intelligence) may not be achieved before the next decade, but as GenAI evolves, it may be called AGI (which would then force the market to create a new acronym for the true AGI). The primary risk in AGI is similar to GenAI, where a focus on functionality can overshadow proper cybersecurity due diligence. By trying to make AI as powerful as it can be, organizations may misconfigure settings, leading to overpermissions and data exposure. They may also grant too much power to one AI, creating a major single point of failure.
“In 2026, we’ll see other AI security risks heighten even more, stemming from excessive permissions granted to AI and a lack of instructions provided to it about how to choose and use tools, potentially leading to data breaches. This will come from increased pressure from users expecting AI agents to become more powerful, and organizations under pressure to develop and release agents to production as fast as possible. And it will be especially true for AI agents running in SaaS environments, where sensitive data is likely already present and misconfigurations may already pose a risk.”
Stephanie Schneider, Cyber Threat Intelligence Analyst at LastPass
The Increased Risk of AI-Powered Malware
“In 2026, threat actors will increasingly deploy AI-enabled malware in active operations. As reported by Google’s Threat Intelligence team, when deployed, this AI can generate scripts, alter code to avoid detection, and create malicious functions on demand. Nation-state actors have used AI-powered malware to adapt, alter, and pivot campaigns in real-time, and these campaigns are expected to improve as the technology continues to develop. AI-powered malware will likely become more autonomous in 2026, ultimately expanding the threat landscape for defenders.”
Elad Schulman, CEO & Co-Founder of Lasso Security
Agentic AI will continue to surge in popularity, creating increasingly urgent security challenges.
“According to Gartner, approximately 40 percent of enterprise apps will embed task-specific AI agents by the end of 2026. These agents will operate with significant autonomy (e.g., negotiating APIs, triggering workflows, and acting on data with minimal human oversight), which will introduce a massive new attack surface. With agents operating as specific identities, making decisions on behalf of users, applications, or even other agents, there will be a fundamental need to ensure the authenticity and integrity of each agent’s persona or blueprint. We’re already seeing early movement in this direction, including federal initiatives around virtual identities to ensure they are formally managed and audited.”
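Verifying “the authenticity and integrity of each agent’s persona or blueprint,” as Schulman describes, can be approximated with a keyed signature over the blueprint so that tampering is detectable before the agent acts. This is a hedged sketch under assumed names; a real deployment would use managed keys and certificate-style identity, not a hardcoded secret:

```python
import hashlib
import hmac
import json

# Illustrative only: in practice this key lives in a secrets manager.
SECRET_KEY = b"replace-with-a-managed-secret"

def sign_blueprint(blueprint: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical JSON form."""
    payload = json.dumps(blueprint, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_blueprint(blueprint: dict, signature: str) -> bool:
    """Constant-time check that the blueprint matches its signature."""
    return hmac.compare_digest(sign_blueprint(blueprint), signature)

blueprint = {"agent": "billing-assistant", "tools": ["read_invoice"]}
sig = sign_blueprint(blueprint)
assert verify_blueprint(blueprint, sig)

# An attacker quietly granting the agent an extra tool fails verification.
tampered = {"agent": "billing-assistant", "tools": ["read_invoice", "pay"]}
assert not verify_blueprint(tampered, sig)
```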
Intent Security Takes Center Stage
“As autonomous AI agents become pervasive in enterprise and critical systems, the primary security concern will shift from data protection to intent security, ensuring AI systems act according to organizational goals and policies. Traditional defenses that protect data at rest or in transit will no longer suffice. By 2026, intent security will become the core discipline of AI risk management, replacing traditional data-centric security as the primary line of defense. Organizations that fail to monitor and align AI intent will face operational, reputational, and strategic risks at speeds far beyond what conventional cybersecurity can mitigate.”
James Shank, Director of Threat Operations at Expel
“Why change what’s working? Threat actors are going to broadly continue the lines of effort that are delivering results to them. That’s going to be the undercurrent of 2026: more of the same.
“Threat actors will develop new AI-driven threats and continue to exploit the weak state of identity verification and access controls. For targeted and high-profile attacks, threat actors are going to see a similar benefit from using AI as legitimate companies: a human-in-the-loop implementation where outputs are reviewed. For commodity attacks, AI for code and technique obfuscation may become more prevalent. While there are better tools available for these functions, AI lowers the barrier to entry and allows for cheaper implementation than the previously existing techniques, further driving adoption.”
Shrivu Shankar, VP of AI Strategy at Abnormal AI
A Landmark Breach Caused by AI-Driven Coding Assistants Will Trigger a Reckoning Among CISOs
“2026 will bring a landmark cyber-attack facilitated by AI-driven coding assistants (AI IDEs) that will force a reckoning among CISOs. The shift from using traditional software to highly autonomous LLM agents creates a security paradox: agents are hungry for data and autonomy to reach a critical threshold of usefulness, yet this freedom introduces a potent new attack vector.
“The danger lies in engineers inadvertently copying unsanitized content from the internet into their AI IDE – whether that’s Claude Code or Cursor – allowing the agent to write and execute arbitrary malicious code with production access. Most security leaders overlook this risk, with a one-track mind focused on productivity. I predict this will be the catalyst for the next headline-grabbing breach and will reveal that the traditional security model – which trusts the user input and only audits the software vendor – has become obsolete.”
Defenders Will Win the AI Cyber Battle in 2026
“Despite widespread fears about generative and agentic AI leading to a massive increase in successful cyber-attacks, defenders will gain an edge over attackers in 2026 when it comes to AI use. While threat actors are using rudimentary AI to write code, today’s top-of-mind breaches remain minimally aided by AI. But this victory is precarious.
“Defenders must prepare for the next generation of attacks – namely, hyper-personalized social engineering campaigns – where AI, armed with deep insights from public data, crafts bespoke scams that are indistinguishable from legitimate communication. To maintain their lead, the industry must fundamentally pivot from scrutinizing attack content to analyzing behavioral anomalies, building security architectures that can spot unexpected patterns and deviations in communication history.”
AI Will Force Engineering Roles to Absorb New Functions
“AI augmentation will not just make individual engineers more effective, but it will fundamentally transform engineering workflows by empowering developers to absorb entirely new roles. As a result, product managers will become highly capable at drafting production-ready code, while engineers, augmented with customer and data analysis tools, will lead in defining product direction. This merging of capabilities will radically flatten the traditional product-engineering hierarchy, demanding a new approach to talent management that values T-shaped individuals and holistic product ownership over deep, singular specialization.”
2026 Will be the Year Humans Finally Stop Holding AI Back From its True Potential
“It’s a common misconception that AI models require significant advancements in reasoning and long-term memory before they can solve complex cybersecurity problems. The truth is that today’s AI models already possess high planning and reasoning capabilities for complicated tasks. Still, humans hold them back by forcing agents to use traditional, human-facing tooling. In 2026, we’ll experience a critical shift in focus: rather than waiting for ‘better’ AI, cybersecurity organizations will redesign their workflows and tooling to create AI-native scaffolding. This shift will involve building systems that deliver context and accept outputs in ways that are intuitive for the agent – not the human – which will in turn unlock the AI’s existing potential to handle complex, multi-step operations and solve problems that were previously thought too difficult.”
Jared Shepard, Chief Executive Officer at Hypori
Organizations move from device-centric mobile security toward mobile virtualization.
“Forward-looking enterprises are beginning to recognize a critical shift in perspective: they do not need to secure the device to secure the business. What truly matters is protecting enterprise data, applications, and identity, without inheriting the risk of unmanaged, employee-owned hardware.
“By 2026, we will see a decisive transition away from device-centric mobile security toward mobile virtualization architectures. Organizations will increasingly separate enterprise data and operations from personal devices entirely, delivering secure access without ever placing sensitive data on the endpoint. Compliance will no longer require hardware ownership. Security will no longer require personal intrusion. Privacy will be preserved by architecture, not policy.
“The future of secure enterprise mobility does not depend on managing the device. It depends on eliminating the device as a risk domain altogether. The real innovation is delivering a complete enterprise experience to any smartphone, securely, privately, and without trusting the endpoint.”
Abhi Sharma, Founder and CEO at Relyance AI
The Data Sovereignty Arms Race
“By mid-2026, every Fortune 500 company will have a Chief Data Sovereignty Officer. This role doesn’t exist today, but as nationalist policies accelerate, data location will become more critical than data security. Companies will struggle with proving where data is in real-time across AI models, cloud services, and SaaS platforms. The winners will be those who can instantly demonstrate data provenance and residency to multiple governments simultaneously. Traditional ‘privacy by design’ will be replaced by ‘sovereignty by architecture.'”
The Compliance Arbitrage Collapse
“The era of ‘build once, comply everywhere’ will die in 2026. Companies will split their AI and data operations into incompatible geographic instances for survival. We’ll see the emergence of “regulatory firewalls” where data flows that were seamless in 2024 become impossible by design. The uncomfortable truth: global interoperability will become a liability, not an asset. Tech giants will quietly begin developing region-locked AI models with fundamentally different capabilities, marking the Balkanization of artificial intelligence itself.”
Shadow Governance Becomes Real Governance
“By Q3 2026, board liability for data flow failures will surpass cybersecurity breach liability. The first major lawsuit will be about data moved rather than data stolen. Directors will face personal consequences for unauthorized cross-border data transfers, making data geography a fiduciary duty. This triggers a land grab for continuous compliance intelligence tools that can map every API call, every AI training run, every database sync in real-time. The market for ‘Data Journey Observability’ will explode from near-zero to a multi-billion dollar category overnight.”
Ravi Soin, CISO at Smartsheet
“Despite the rise of automation, the human element of security will come back stronger. The most resilient enterprises will invest as much in people as in platforms, training teams to think securely, share intelligence, and recover quickly. In a world where AI can scale attacks, the culture of openness and collective defense will be what truly protects organizations.”
Dmitry Sokolowski, Co-Founder of VOLT AI
“In 2026, privacy will become the litmus test for any AI security system. Deploying AI responsibly means focusing on behavior over appearance, never storing PII, and always keeping a human in the loop. Those that don’t will face backlash, erode trust and confidence, and be at a higher risk for gaps in safety.
“We’ll also see a drastic shift in physical security budget allocation and spending. It’s proven that the old ‘guards, gates, and guns’ mindset can’t keep up with today’s threats — not on campuses, not in corporate environments. Leaders will shift investment toward AI-driven infrastructure that provides people with better safety insights and eliminates the costly margin of human error.”
Jen Sovada, Public Sector General Manager at Claroty
The Reckoning of Fragmentation and the Rise of Collective Resilience.
“In 2026, organizations will recognize that stronger collaboration and shared intelligence are the keys to a more secure future. While fragmented cybersecurity oversight has exposed vulnerabilities in the past, it also creates an opportunity for public and private sectors to build truly resilient networks. Companies and agencies will increasingly coordinate, share insights, and adopt unified defense strategies. The year ahead will demonstrate that collective resilience is not only achievable but a source of strategic advantage, fostering trust, innovation, and confidence across sectors.”
Cybersecurity Moves from Reactive to Proactive.
“In 2026, cybersecurity will become a driver of innovation, growth, and strategic advantage. Organizations will move beyond reactive defense, leveraging AI, analytics, and automation to anticipate and mitigate threats before they occur. Expanding connected infrastructure will inspire faster modernization, continuous monitoring, and smarter risk management. Companies that embrace security as a core enabler will unlock new opportunities, strengthen stakeholder trust, and create a culture of proactive protection that fuels long-term success.”
Gregor Stewart, Chief AI Officer at SentinelOne
“AI models and tools can now handle a major portion of the procedural security work that humans currently do, and the challenge will become supervision rather than execution. Even when machines do the work, humans must remain responsible for the outcomes, but reviewing the output of, say, 1,000 AI agents is impossible with traditional alert-centric methods.
“The solution will be to find the ‘Goldilocks Spot’ of high automation and human accountability, where AI aggregates related tasks and alerts, presenting them as a single decision point for a human. Humans then make one accountable, auditable policy decision rather than hundreds or thousands of potentially inconsistent individual choices, maintaining human oversight while still leveraging AI’s capacity for comprehensive, consistent work.”
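The ‘Goldilocks’ pattern above, in which many machine-generated alerts collapse into one auditable human decision, might look roughly like this. This is a toy sketch with invented field names, not any vendor’s aggregation logic:

```python
from collections import defaultdict

def aggregate_alerts(alerts):
    """Group raw per-agent alerts by (agent, issue) so a human reviews
    one decision point per group instead of every individual alert."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["agent"], alert["issue"])].append(alert)
    return [
        {"agent": agent, "issue": issue, "count": len(items)}
        for (agent, issue), items in groups.items()
    ]

alerts = [
    {"agent": "scanner-01", "issue": "expired-cert"},
    {"agent": "scanner-01", "issue": "expired-cert"},
    {"agent": "patch-bot", "issue": "reboot-needed"},
]
decisions = aggregate_alerts(alerts)
assert len(decisions) == 2  # three raw alerts become two human decisions
```

The point of the sketch is the ratio: the human signs off once per group, and that single decision is what gets audited.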
Vince Stoffer, Field CTO at Corelight
The Agentic SOC
“The Agentic SOC will look very different in one year. We won’t have all the kinks worked out, but the plumbing is starting to come together, and there is real promise that SOCs will be able to start offloading some of their tier-one analysis to agents. The integrations between data and platforms still need work. You can’t do anything without the best data.”
Post-quantum Cryptography (PQC)
“Post-quantum cryptography, or PQC, readiness is becoming more real. Federal requirements are driving massive focus on this in the federal space, but 2026 will be the year that we see more traditional businesses starting to key in on the importance of knowing what they have (inventory of crypto assets) and understanding how they will prepare for a post-quantum encryption world.”
Steve Stone, SVP of Threat Discovery and Response at SentinelOne
AI Will Be Embedded in Malware, and Detection Models Will Have to Evolve Just as Fast.
“LLM-enabled malware has already moved from proof-of-concept to practice. SentinelOne’s discovery of MalTerminal (the earliest known GPT-4–powered malware capable of generating ransomware or reverse-shell code at runtime), along with ESET’s PromptLock sample and emerging campaigns like LameHug and PromptSteal, show how attackers are experimenting with AI to create polymorphic, self-evolving payloads. These tools blur the line between code and conversation, allowing malicious logic to be generated dynamically and evade traditional signatures.
“In the coming years, language models and expansion into agentic AI capabilities will be a standard part of attacker toolchains, driving customized, environment-aware payloads, AI-augmented social engineering, and local LLM malware that runs entirely offline. Static detection will falter, and defenders will need to pivot to behavior-based analytics, model-based telemetry protection, and new methods for identifying intent rather than code. Today’s arms race in cybersecurity isn’t just AI versus humans; it will continue to be AI versus AI.”
State-Backed Cybercrime Will Look Like a Day Job.
“The line between criminal enterprise and state agenda has dissolved for the DPRK. SentinelLabs’ research into North Korea’s IT worker network revealed hundreds of front companies, many of which operate out of China, and over 1,000 job applications from fake DPRK-linked personas attempting to infiltrate even major cybersecurity firms. These operations show how Pyongyang’s cyber workforce now blends sanctioned espionage with commercial freelancing, using legitimate hiring pipelines to fund the regime’s illicit programs.
“Soon we’ll see this model become the playbook for state-sponsored revenue generation: cyber operators posing as freelancers, consultants, or contractors across global tech ecosystems to quietly fund military and intelligence operations. The convergence of cyber-crime and statecraft means the next “insider threat” may not be an employee gone rogue, but a foreign government’s operative disguised as your next remote hire.”
Patrick Sullivan, CTO of Security Strategy at Akamai Technologies
Agentic AI will become the next major attack surface in 2026.
“As autonomous AI agents increasingly handle security tasks like triaging, writing API calls, and chaining systems, they’ll introduce unpredictable vulnerabilities. Expect attackers to aim to exploit AI agents’ access, scale, and autonomy to manipulate defenses or trigger harmful operations. Traditional API authentication and governance models will struggle to contain entities that can generate their own credentials or modify access paths on the fly. This shift will demand new disciplines such as AI behavior forensics or AI threat modeling to monitor and validate machine decision-making. In 2026, the balance between AI as defender or attacker will hinge on securing the AI agents themselves.”
A Major AI-Driven Data Breach Will Hit an Early Adopter
“Just as we saw in the early days of cloud adoption—when minor incidents quietly preceded a major, headline-grabbing breach—the same pattern is emerging with AI. So far, AI-related security incidents have been relatively minor, but in 2026, we’re likely to see at least one high-impact breach victimizing an early AI adopter. This event will serve as a wake-up call, much like the first major cloud breaches did, forcing organizations to confront and address the real risks of integrating AI into their operations.”
Karthik Swarnam, Chief Security and Trust Officer at ArmorCode
Connected OT and IoT Become the Top Cyber Risk Surface.
“We are reaching a point where connected OT and IoT systems represent the largest and most difficult attack surface to secure. These environments often cannot be easily patched or taken offline, yet they are becoming deeply interconnected with critical operations. In 2026, security teams must shift from trying to fix vulnerabilities post-issue to continuously assessing exposure and validating controls in real-time, without disrupting uptime.”
Eno Thereska, Co-Founder and Chief Executive Officer of Trent AI
Startups Will Announce Breakthroughs That Rival Big Tech
“In 2026, expect startups to publish and ship scientific-grade advances in AI and security, competing directly with the largest public companies. The talent tide is shifting: for the first time, top scientists are moving to early-stage companies, giving startups an unprecedented ability to deliver frontier-level research.”
“Vibe Coding” and Weak Security Tooling Will Drive Most Breaches
“Most security incidents in 2026 will stem from vibe-driven development practices: shipping AI-enabled systems without clear threat models, secure pipelines, or disciplined engineering. The gap between rapidly evolving AI tools and stagnant security practices will create the perfect storm for breaches.”
Andy Thompson, Senior Offensive Cybersecurity Research Evangelist at CyberArk
Cognitive Overload and Security Decision Fatigue.
“With the proliferation of security prompts, multi-factor authentication, and constant threat warnings, employees will face unprecedented cognitive overload. In 2026, security ‘decision fatigue’ will become a measurable risk, as users tune out or ignore critical alerts, leading to more successful phishing and social engineering attacks. Organizations will need to invest in behavioral analytics and user experience design to streamline security interactions, reducing unnecessary prompts and focusing attention on the most critical actions. The most successful security programs will be those that minimize user friction and automate routine decisions, reserving human attention for genuinely high-risk scenarios.”
Patricia Titus, Field CISO at Abnormal AI
Agent-on-Agent Interaction Will Become the New Vector for Prompt Injection
“The rise of AI agents interacting with each other—and with humans—will create a highly attractive and potent new attack surface for prompt injection and advanced social engineering. As employees increasingly use trusted internal AI agents for drafting and managing emails, attackers will utilize their own AI agents to infiltrate and manipulate communications, inserting themselves into discussions. This agent vs. agent battle will exploit the high level of trust employees place in their seemingly secure corporate AI environment, leading to an increase in people unknowingly giving up “crown jewels” like bank routing information, or granting unfettered access through credential harvesting.
“Security awareness training programs need to evolve with these shifting attacker tactics and educate employees on each agent’s specific use case and what normal behavior of that agent is supposed to look like, so that employees can recognize and report when an agent’s actions deviate from the established guardrails.”
Private Sector and Think Tanks Will Fill the Gap Amid CISA Reductions
“In response to the current federal policy changes, budget cuts, and subsequent workforce reduction at CISA, private-sector organizations, nonprofits, and independent think tanks will step up to the plate and fill the resulting gaps, especially for high-target sectors like critical infrastructure. Similar to how states have recently collaborated to address their own healthcare policy challenges, governors and state CIOs will enter into new cybersecurity agreements, focusing on shared threat intelligence and joint security services to protect their critical infrastructure and educational sectors. The challenge will be for the private sector—like defense contractors and companies supporting critical infrastructure—to provide this vital support without commoditizing what should be a public safety service, and instead giving back to fill the void.”
Dominik Tomicevic, CEO of Memgraph
Why AI-Powered Graphs Are Essential Cybersecurity Tech
“With AI-enabled fraud on the rise, I expect more and more CISOs to appreciate that graphs are a natural fit for cybersecurity, because they correlate data across actors and attacks, not just individual behavior. Automated agents will act alongside humans, eventually handling specialized tasks independently, creating a massive, intelligent ‘bot’ defense workforce, and graphs would be a very flexible and powerful way to orchestrate that.”
Attila Török, CISO at GoTo
Ransomware, AI Risks, and the New Frontier of Enterprise Security in 2026
“As 2026 approaches, enterprises are facing a security landscape that is at once familiar and entirely new. Ransomware and operational downtime remain persistent threats, but the emergence of fake AI platforms and autonomous malicious agents adds a new layer of social engineering. CISOs will continue to evolve into strategic, cross-disciplined negotiators, balancing innovation, revenue objectives, and security across the business. They’ll also prioritize embedding secure-by-design principles into product development, strengthening third-party risk management, and building closer alignment between cybersecurity and business strategy. At the same time, how organizations manage customer data in AI tools will increasingly shape trust and compliance, making data stewardship a key competitive differentiator in the AI era. Together, these shifts signal a future where cybersecurity leadership is inseparable from business strategy, driving resilience, innovation, and trust as core pillars of enterprise growth.”
Sean Tufts, Field CTO at Claroty
OT Security Engineer Will Appear on Job Boards.
“OT security has not had a clear home. In most cases, a local network admin at each site is promoted to “handle security.” These individuals know the plant and automation systems but are not classically trained cybersecurity subject-matter experts. The growth of CPS security programs among our clients is creating a new role in organizations. The “OT Security Engineer” role will soon appear on Indeed and LinkedIn dashboards at top firms, seeking experts who can speak fluently about OT and IT. These roles will have seemingly impossible certification requirements, for example, a CISSP plus 10 years of PLC operations experience.”
Kinetic Warfare Will Continue to Push Critical Infrastructure Security.
“2025 marked a turning point, blending cyber operations into physical war plans. When Russia first rolled tanks into Ukraine, malware attacks on substations lagged by over 30 days. Modern militaries have closed that gap, making cyber the tip of the spear. As we saw in 2025, 2026 will continue to drive innovation, with critical infrastructure squarely in the crosshairs.”
James Urquhart, Field CTO and Technology Evangelist at Kamiwaza AI
Agentic AI Will Force Organizations To Re-Evaluate Their Access Control Models.
“Role-based access control and other human-defined access control models are ill-equipped for governance in an AI-driven world, especially with the rise of multi-agent systems. As autonomous agents interact and organize in unpredictable ways, security must shift from static permissions to behavior-based safeguards capable of detecting emergent patterns. By 2026, a wave of innovations will focus on dynamic security, including relationship- and attribute-based access control. By 2030, organizations that rely heavily on AI will need entirely new security frameworks to monitor agent behavior in real-time and contain risks from unplanned actions.”
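The shift from static roles to attribute- and relationship-based checks can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the `Agent`, `Resource`, and policy fields are all hypothetical, but the pattern shows how each agent request is evaluated against attributes at call time rather than against a standing role grant.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: str
    clearance: int          # e.g. 1 = low, 3 = high (illustrative scale)
    task: str               # the task this agent was spawned for

@dataclass
class Resource:
    name: str
    sensitivity: int        # minimum clearance required
    allowed_tasks: set = field(default_factory=set)

def authorize(agent: Agent, resource: Resource, action: str) -> bool:
    """Attribute-based check: access is decided per request, with no
    standing role grants an agent can quietly accumulate."""
    if agent.clearance < resource.sensitivity:
        return False
    # Behavior-based guardrail: an agent may only touch resources
    # relevant to the task it was spawned for.
    if resource.allowed_tasks and agent.task not in resource.allowed_tasks:
        return False
    return action in {"read", "query"}  # deny writes by default

agent = Agent("agent-42", clearance=2, task="billing-summary")
crm = Resource("crm-db", sensitivity=2, allowed_tasks={"billing-summary"})
print(authorize(agent, crm, "read"))   # True
print(authorize(agent, crm, "write"))  # False
```

In a real multi-agent system the attributes would come from a policy engine and the agent's runtime context, and the deny-by-default write rule would be one of many behavioral safeguards.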
Tony Velleca, CEO at CyberProof
Cyber-criminals will have a tactical advantage with AI.
“Cyber-criminals who use AI to their advantage don’t have to be as precise as defenders when leveraging the technology. Since the attacker doesn’t need to worry about getting everything right, they can innovate much faster than security organizations can in the corporate world. In 2026, attackers will have an increased tactical advantage—sophistication and social engineering tactics will evolve more quickly, making it difficult for defenders to catch up.”
Agentic AI adoption will increase in response to market demand, and security will struggle to keep pace.
“Agentic AI will continue to change the way enterprises need to think about identity access management and rights, and the way we need to think about security fundamentally. Agents now act as people do – we work with them much as we would with a human. Many people believe that the adoption of agentic AI will slow down, but we’re seeing this major shift happen at a rapid speed. Agentic AI will continue to evolve because the market demands it, but security will always struggle to catch up with the speed and level of innovation.”
Agentic AI will expose weak cybersecurity foundations.
“Companies are adopting agentic AI without ensuring a solid foundation. As we move into 2026, organizations need to trust both the quality and the security of their data, but far too many haven’t prioritized foundational estate management and so won’t be able to use AI agents effectively. This will be especially pertinent for organizations looking to run AI agents to boost productivity and efficiency.”
Jeremy Ventura, Field CISO at Myriad360
Security Tool Recession
“Organizations will finally realize that more tools don’t mean more security. In 2026, the pressure from economic instability, national pressures, and resource constraints will drive a ‘security tool recession’— fewer siloed tools, deeper integrations, and stronger ROI expectations. Security consolidation will become survival, not just efficiency, as organizations trim redundant vendors and prioritize unified visibility across complex environments.”
Data Residency Will Drive Decisions
“With rising geopolitical tension and new cross-border privacy regulations, 2026 will mark a turning point where data sovereignty becomes a design principle, not an afterthought. Nations and businesses alike will architect around local data zones, regional encryption controls, and sovereign cloud mandates. This will change how companies store, move, and secure information globally. Attackers will exploit jurisdictions where investigations are weakest. If data is stored in a region with ambiguous laws or lacking clear forensic access, an attacker may attempt to use that as a ‘safe harbor’ to slow or hide exfiltration.”
AI-driven Campaigns
“We’ll witness blurred lines between traditional cyber espionage and AI attacks: synthetic identities fueling disinformation, AI-assisted vulnerability research automating exploit chains, and deepfake-enabled social engineering operations targeting entire industries. Security teams will need to assume that every control, from phishing filters to network alerts, will face AI-optimized evasion techniques. This will force a renewed focus on stronger behavioral baselines and AI-powered defensive tooling that can adapt as fast as the threat surface evolves.”
Aimei Wei, CEO and Co-Founder of Stellar Cyber
The overarching theme for 2026 will be that agentic AI is reshaping the human-machine dynamic in security operations. For example:
- Hybrid SOCs: Automation will be used for acceleration in the SOCs, but humans will handle trust and governance. This will build the trust and data foundation required for future autonomous operation.
- Shift in roles: Analysts will become AI supervisors, validating autonomous SOC decisions, while automation handles routine tasks.
- Deception technology 2.0: Decoys will evolve into data-driven digital twins that learn attacker behavior through reinforcement learning, giving analysts proactive insight into threat intent.
- Foundations of “bot vs. bot” defense: Real-time attack correlation across hybrid SOCs will lay the groundwork for future true autonomous response.
Adam Weinstein, Founder and CEO at Writ
Short-term AI Wins in Exchange For Long-Term Security Concerns
“AI isn’t as compatible with traditional cybersecurity and data protection practices as many would’ve hoped; it prefers open access to information. To accommodate this, CIOs will forgo more traditional security protocols and procedures to provide AI with the data it needs to produce optimal responses. While AI integration is important, companies should proceed with security in mind to prevent major security breaches and failures, which may become far more common in 2026.”
Chris Wheeler, CISO at Resilience
Third-party risk will dominate headlines in 2026.
“We’ve reached a point where many companies have tightened their internal controls and improved overall resilience, but their sensitive data still lives in large, centralized platforms (e.g., cloud providers, DBaaS, CRMs). Even if individual organizations are more resilient, their partners and vendors may not be, and that exposure will drive some of the biggest incidents next year.”
On CISO roles.
“2026 will see the rise of the risk-first CISO who is both a business strategist and a technical expert. The CISOs who can translate technical threats into business terms, weigh tradeoffs, and drive decisions that balance innovation with protection will be the ones to define what modern cybersecurity leadership looks like.”
On AI threats.
“2026 will be the year we see the first meaningful breaches tied directly to AI: not attacks assisted by AI, but incidents that exploit AI adoption, which has accelerated due to organic initiatives and vendor integration. Security tooling to protect these workflows is either in its infancy or prohibitively expensive, which creates opportunity for mistakes and misuse, especially downmarket.”
Kyle Wickert, Field Chief Technology Officer at AlgoSec
Shadow cloud will become the biggest security gap, and the solution isn’t more tools.
“The rise of shadow cloud environments—stemming from cloud infrastructure spun up by business analysts or project teams outside of formal IT oversight—will increase visibility and governance gaps in 2026. This opens organizations up to risks and is a product of the rapid digital acceleration that is becoming increasingly common as companies expand across public cloud, on-premises environments, and SD-WAN. Infrastructure is essentially growing faster than security teams can inventory it, but the real problem isn’t a lack of technology; it’s that most organizations still lack a cross-domain view showing everything that touches an application.
“As more organizations come to understand this problem, teams will shift to automatic onboarding, unified policy governance, and intelligent automation as a way to identify and model new pieces of infrastructure as soon as they appear. When every new cloud account, platform, or workload is immediately discovered and brought under governance, threats outside of a company’s sightline shrink significantly.”
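The automatic-onboarding idea above reduces to a continuous diff between what the cloud APIs can see and what the governance inventory knows about. The sketch below is a deliberately minimal illustration under assumed names (the account IDs and the `onboard` step are hypothetical, not any product's API):

```python
# Minimal sketch of automatic onboarding: diff the live cloud inventory
# against the governed inventory, and enroll anything unmanaged as soon
# as it appears so shadow assets never linger outside policy.

def find_shadow_assets(discovered: set, governed: set) -> set:
    """Assets visible in the cloud but absent from the governance inventory."""
    return discovered - governed

def onboard(shadow: set, governed: set) -> set:
    """Bring shadow assets under baseline policy (here: just add them
    to the governed inventory; a real system would also apply controls)."""
    return governed | shadow

discovered = {"acct-prod", "acct-dev", "acct-analytics"}  # from cloud APIs
governed = {"acct-prod", "acct-dev"}                      # under policy today

shadow = find_shadow_assets(discovered, governed)
print(sorted(shadow))                                     # ['acct-analytics']
governed = onboard(shadow, governed)
print(sorted(find_shadow_assets(discovered, governed)))   # []
```

Run continuously, this kind of reconciliation loop is what closes the gap between infrastructure growth and the security team's inventory.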
Security Teams Will Shift From Building Policy to Vetting Automated Decisions.
“As intelligent automation matures from an efficiency tool into a governed, risk-reduction solution, security engineers will transition away from manually building policy and focus on vetting and guiding machine-generated decisions in 2026. Organizations are already moving toward ‘self-healing policies,’ where automation identifies overly broad rules, unused access, and unnecessary exposure before tightening them automatically. This model shrinks the attack surface continuously, not just during scheduled audits.
“However, meaningful automation can’t exist without governance. AI must be framed by guardrails, risk-aware workflows, and extensive network understanding earned over decades. Adopting a ‘trust but verify’ model is a wise idea, enabling automation to handle the heavy lifting while humans verify intent, ensure safety, and apply judgment where context matters. This phase will reshape security professionals’ roles as they become more involved at the strategic level.”
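A "self-healing policy" pass of the kind described above can be sketched as a rule audit that proposes fixes but leaves approval to a human, in the spirit of trust-but-verify. Everything here is an illustrative assumption (the rule fields, the 90-day staleness threshold, the proposal format), not a specific product's behavior:

```python
# Hypothetical self-healing policy pass: flag overly broad or unused
# firewall rules and queue tightened versions for human approval.
# Proposals are returned, not auto-applied ("trust but verify").

from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    source: str          # CIDR; "0.0.0.0/0" means any source
    port: str            # "any" or a specific port
    days_since_hit: int  # days since traffic last matched this rule

def propose_fixes(rules):
    proposals = []
    for r in rules:
        if r.days_since_hit > 90:
            proposals.append((r.rule_id, "remove", "unused for 90+ days"))
        elif r.source == "0.0.0.0/0" or r.port == "any":
            proposals.append((r.rule_id, "tighten", "overly broad match"))
    return proposals  # queued for a human reviewer, never auto-applied

rules = [
    Rule("fw-1", "0.0.0.0/0", "443", days_since_hit=2),
    Rule("fw-2", "10.0.0.0/24", "22", days_since_hit=200),
    Rule("fw-3", "10.0.1.0/24", "5432", days_since_hit=1),
]
for rule_id, action, reason in propose_fixes(rules):
    print(rule_id, action, reason)
```

The design choice worth noting is the return value: the automation shrinks the attack surface continuously by proposing changes every run, while humans verify intent before anything is enforced.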
Steve Woo, Distinguished Inventor and Fellow at Rambus
Post-quantum cryptography (PQC) will be integrated into products and networks.
“With NIST’s PQC standards finalized (FIPS 203/204/205) and NSA’s CNSA 2.0 timelines pressing national systems to migrate, vendors began enabling hybrid Kyber/ML‑KEM in TLS across browsers and CDNs. By 2026, PQC‑ready libraries, device stacks and policies are expected to be standard—and RFPs will start requiring crypto agility and PQC roadmaps. Short‑term friction (larger TLS ClientHello, middlebox breakage) is being resolved; Cloudflare, Google, and others have switched to ML‑KEM + X25519 in production.”
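The hybrid ML-KEM + X25519 construction mentioned above works by deriving one session key from two independent shared secrets, so the key stays safe as long as either algorithm remains unbroken. The sketch below is conceptual only: both secrets are random placeholders (in TLS they come from the X25519 exchange and the ML-KEM decapsulation), and the single HKDF step is a simplified stand-in for the full TLS 1.3 key schedule.

```python
# Conceptual sketch of a hybrid key-exchange combiner (not real TLS code).

import hashlib
import hmac
import os

def hkdf_extract_expand(salt: bytes, ikm: bytes, info: bytes,
                        length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract, then one expand step,
    which suffices for outputs up to 32 bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# Placeholder shared secrets; in TLS these come from the two exchanges.
classical_secret = os.urandom(32)   # stands in for the X25519 shared secret
pq_secret = os.urandom(32)          # stands in for the ML-KEM shared secret

# Hybrid combiner: concatenate both secrets before key derivation, so an
# attacker must break BOTH X25519 and ML-KEM to recover the session key.
session_key = hkdf_extract_expand(
    salt=b"\x00" * 32,
    ikm=classical_secret + pq_secret,
    info=b"hybrid key exchange demo",
)
print(len(session_key))  # 32-byte session key
```

The concatenation is also the source of the short-term friction the prediction mentions: ML-KEM public keys and ciphertexts are roughly a kilobyte each, which is what inflates the TLS ClientHello.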


