Identity Security Predictions from Industry Experts for 2026 and Beyond

As part of Solutions Review’s annual Insight Jam LIVE event, we called for the industry’s best and brightest to share their IAM and cybersecurity predictions for 2026 and beyond. The experts featured represent some of the top solution providers with experience in these marketplaces, and each projection has been vetted for relevance and ability to add business value.

Identity Access Management and Security Predictions for 2026 and Beyond


Michael Adjei, Director, Systems Engineering at Illumio 

Cyber-criminals will weaponize agentic AI to commit identity-based attacks.

“Depending on how people use agents, they are, in a way, relinquishing part of their identity to autonomous AI. Agents will assume people’s identities, accessing usernames, passwords, and tokens to log in to systems for automated convenience. In 2026, cyber-criminals will target the autonomous capabilities of agentic AI and exploit them to commit cyber-attacks by compromising agent-to-agent communication. This approach could make agents appear culpable in potential mass exploitation incidents, allowing the true attacker to remain concealed in the shadows. Agentic AI’s novelty, paired with overlooked security and continued mass adoption, will likely fuel this trend. This will force organisations to rethink identity, access, and accountability in a world where machines act faster, and more dangerously, than humans ever could.”


Jan Bee, CISO at TeamViewer

Password-based authentication will finally become obsolete in organizations.

“While compliance frameworks continue to mandate complex password policies, forward-thinking organizations will abandon passwords entirely in favor of platform authentication and biometric systems. The password requirements that made sense a decade ago are now actively holding back security progress. In 2026, we’ll see a clear divide between organizations that cling to outdated password mandates and those that embrace passkeys, platform authentication on managed devices, and biometric verification as their standard.

“CISOs should begin planning the complete elimination of passwords from their authentication workflows. Focus on platform authentication that verifies managed, compliant company devices, combined with biometric authentication. This isn’t just more secure—it’s dramatically more user-friendly, eliminating the frustration and security risks of password management. Yes, some compliance frameworks still emphasize passwords, but these requirements are outdated by the current threat landscape. Security teams should collaborate with their compliance teams to demonstrate how modern authentication methods surpass the security intent of password requirements, even if they don’t strictly adhere to the letter of older regulations. The organizations that make this transition in 2026 will be significantly ahead of their peers in both security posture and user experience.”

Identity will replace the network perimeter as the primary security boundary.

“The concept of a network perimeter is effectively dead, yet many enterprises still haven’t fully embraced identity as their new security foundation. In 2026, organizations will finally recognize that Single Sign-On (SSO) isn’t optional—it’s fundamental. The refusal to implement SSO across all enterprise applications will increasingly be seen as a critical security failure rather than a vendor management decision. Identity must be secured end-to-end, from the employee account through every application and integration point.

“Enterprises must view identity management as the starting point for all security initiatives, not an afterthought. Implement SSO across the entire application portfolio without exception. Beyond SSO, establish transparency around supporter and administrator identities within the tools—employees should be able to clearly verify who is connecting to their systems, including names, email addresses, and company affiliations. This creates the trust framework necessary for secure operations across organizational boundaries. The gap between having these solutions available and actually implementing them is where most security failures occur. The technology exists; closing the implementation gap must be the priority for 2026.”


Kevin Bocek, SVP of Innovation at CyberArk

Identity will be the critical ‘kill switch’ for runaway agents.

“A runaway agent will cause the next major identity-based breach in the AI era. As teams rush to use Model Context Protocol (MCP) to connect agents to critical systems, these agents are being configured by engineers who are not identity experts. It’s only a matter of time before an API key is leaked or a malicious prompt tricks an agent into an unauthorized action, causing a single, corrupted agent to spread across the network.

“As organizations cannot pull a physical plug on these agents, the most significant security threat is a ‘runaway agent’ executing unauthorized work across intercommunicating workflows. In 2026, the wake-up call for the world will be the realization that the essential ‘kill switch’ for an out-of-control agent isn’t a power cord; it’s the ability to revoke its identity instantly as part of lifecycle governance.”
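
The identity kill switch described here is, at its core, a registry that every agent action must re-check, so revocation takes effect on the agent's very next call. A minimal sketch in Python (all names are illustrative, not any vendor's API):

```python
import time

class AgentIdentityRegistry:
    """Sketch: agent credentials are re-checked on every action,
    so revoking the identity stops a runaway agent immediately."""

    def __init__(self) -> None:
        self._active: dict[str, float] = {}  # agent_id -> expiry (epoch seconds)

    def issue(self, agent_id: str, ttl_s: float = 300.0) -> None:
        """Grant a short-lived identity; expiry forces periodic re-issuance."""
        self._active[agent_id] = time.time() + ttl_s

    def revoke(self, agent_id: str) -> None:
        """The 'kill switch': the agent's next authorize() call fails."""
        self._active.pop(agent_id, None)

    def authorize(self, agent_id: str) -> bool:
        """Check identity before every action, not once per session."""
        expiry = self._active.get(agent_id)
        return expiry is not None and time.time() < expiry
```

Because authorization is consulted per action rather than per session, revoking the identity halts a compromised agent mid-workflow without touching the infrastructure it runs on.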

Shrinking certificate lifespans will lead to an ongoing game of whack-a-mole for security teams.

“The shortening of TLS certificate lifespans will trigger a wave of crippling, ongoing machine identity-based outages. While the intent of this change by Google, Microsoft, and Apple is to improve security, the new reality will become a continuous, painful exercise in whack-a-mole for security teams who will regularly need to scramble to put out fires caused by manual certificate management. Starting in March 2026, when certificate validity is reduced from 398 days to 200 days, we’ll see a cascading set of events where forgotten or mismanaged certificates expire, causing critical systems to go offline.

“A digital certificate is a machine’s identity. When it expires, the machines can no longer communicate, creating a fundamental breakdown of trust that will cripple everything from baggage handling systems at airports to bus schedules and ATMs. What makes this so much more impactful than a single software outage is that it’s not limited to a single vendor or piece of software. It’s a problem for every business and government worldwide, and organizations that still rely on spreadsheets and manual tracking will be caught completely off guard. This looming digital tsunami is not a question of ‘if’ but ‘when,’ and its far-reaching, long-tail impact is set to hit every business and government globally in 2026 and beyond.”


Ellen Boehm, SVP of IoT and AI Identity Innovation at Keyfactor

You can’t secure what you can’t identify, especially AI.

“As we move into 2026, AI will no longer just assist; it will act. Agentic systems will make decisions, initiate transactions, and connect directly to sensitive data and infrastructure. Each of these AI agents now represents a new kind of identity that must be authenticated, managed, and trusted. Without verifiable digital identities, we lose visibility into who or what is acting within our systems.

“Right now, many organizations are eager to show value from agentic AI projects, and in the process, they are cutting dangerous corners on security—the same way they did with the emergence of IoT devices. Giving unchecked access to an AI system is like handing over the keys to your network without knowing who’s driving or where they’re going. Yet security, as always, tends to be an afterthought, and that’s exactly what will be exploited.

“In 2026, enterprises will realize that securing AI isn’t just about protecting data; it’s about establishing trust in the machines themselves. As agentic AI proliferates, every AI agent must have its own cryptographic identity, enforced through certificates and mutual TLS. The organizations that lead in 2026 will be those that build identity into the DNA of AI, creating systems that are not only intelligent, but inherently trustworthy.”
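
The per-agent certificate plus mutual TLS model described here can be sketched with Python's standard `ssl` module. The file paths below are placeholders; in practice each agent would receive its own short-lived certificate from an internal CA:

```python
import ssl

def agent_server_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """TLS context that refuses connections from agents lacking a valid client cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)  # this service's identity
    ctx.load_verify_locations(cafile=ca_file)                  # CA that issues agent certs
    ctx.verify_mode = ssl.CERT_REQUIRED  # mutual TLS: the agent must prove who it is
    return ctx
```

On each accepted connection, the agent's certificate is then available via `getpeercert()`, giving every request a verifiable machine identity to log and authorize against.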


Mike Britton, CIO at Abnormal AI

The Deepfake Reckoning 

“I think 2026 is the year deepfakes really hit the mainstream, and not in a good way. We’re right on the edge of real-time and totally convincing deepfakes that you won’t be able to tell apart from reality. This will include video calls, audio, and even live conversations that look and sound just like those with your CEO or finance lead. Attackers will use this for social engineering and multi-stage scams, where a single fake video call could result in significant financial losses.

“What will make this more of an issue is that regulation won’t be able to stop it or keep up. Laws like the upcoming AI Act are built for companies that follow rules, not for criminals. Bad actors are already running their own models locally, completely outside the guardrails. So, the big question for next year isn’t how we stop deepfakes from existing, it’s how we learn to verify what’s real when anything can be faked.”


Phil Calvin, Chief Product Officer at Delinea

Machine identity sprawl will reach a tipping point.

“If 2025 was the year of AI, 2026 will be the year of agentic and machine identity sprawl. Machine identities—from workloads and service accounts to IoT devices—now vastly outnumber human identities in the enterprise, and most operate unseen and overprivileged. Machine identities are already the primary source of privilege misuse, creating an urgent need to expand threat identification coverage to non-human accounts. The rapid growth of AI-driven systems and the explosion of connected IoT ecosystems will push organizations beyond their ability to track and manage machine identities effectively, creating prime opportunities for attackers to exploit unmanaged or forgotten identities. 2026 will force security teams to confront the reality that identity-first security can’t stop with humans.”


Tim Chase, Field CISO and Principal Technical Evangelist at Orca Security

“Quantum readiness is going to become a real planning problem: In 2026, CISOs are going to be asked to show what their organizations are doing to prepare for post-quantum cryptography. We are already seeing early moves from major cloud providers, who are beginning to test quantum-resistant ciphers within their core services. With no clear agreement on which algorithms can endure true quantum computing power, organizations must prepare for change without full visibility. That means identifying assets at risk from outdated encryption and gauging the complexity of unwinding those dependencies. The companies that start this inventory and planning work early will avoid a far more expensive and rushed migration later.”


Baptiste Collot, Co-Founder and CEO of Trustpair

Volatility and AI industrialize fraud, and identity becomes the new control surface.

“In volatile markets, fraud accelerates, and 2026 will be no exception. When companies rush to onboard new suppliers or reroute payments under pressure, controls fall behind. What’s different now is the role of AI. Fraudsters are already using AI to scale impersonation and deception: AI-generated vendor emails, deepfake signatures, synthetic supplier identities, even automated bank-account change scams.

“Fraud is becoming faster, cheaper, and harder to detect, which shifts the battleground to identity. Verifying who a supplier really is—and who actually owns the receiving bank account—will matter more than invoice-level checks. Identity validation will become the new control surface for every payment process.”


Anthony Cusimano, Chief Evangelist and Director of Solutions Marketing at Object First

Quantum computing will NOT become a security concern in 2026.

“Even with recent headlines about quantum breakthroughs, the threat of quantum computing remains years, if not decades, away. The real and present danger lies in AI-driven threats that are already operational: Polymorphic malware, deepfake-enabled fraud, and data poisoning attacks are actively compromising systems today. While it may be wise to begin exploring post-quantum cryptography, organizations should still prioritize immediate, proven defenses like Absolute Immutability to protect their backups against the threats that are already active.”

Ransomware response will shift from prevention to resilience in 2026.

“Many organizations’ cybersecurity strategies are still falling short by over-relying on prevention, detection, and response tools that are inherently reactive and increasingly ineffective against AI-generated threats such as phishing, deepfakes, malware, and data-poisoning. In the new year, we’ll see a growing emphasis on recovery strategies like immutable backups and Zero Trust architectures as organizations realize that early detection has become unreliable, and prevention, detection, and response tools alone are insufficient.”


Floris Dankaart, Lead Product Manager at NCC Group

“In 2026, identity for ‘headless’ devices will become more mature – e.g., in an IoT or OT environment, offering additional defensive capabilities. Expect identity governance to (slowly) extend beyond people to include device identity attestation, cryptographic binding, and lifecycle management for IoT and OT endpoints.”


Jacob DePriest, CISO/CIO at 1Password

AI Agents Will Redefine How We Govern Access.

“As AI agents make our teams more productive, the next phase of identity and access management will shift from visibility and control to focus on how security teams govern agents. As humans grant AI agents increased access to data and systems, both in personal and corporate settings, security teams will need to track this activity across identity, endpoint, and data protection surfaces. They will need to handle agents operating with employee permissions, differentiate their actions from humans, and control the credentials granted to them. Those who gain visibility and control of agent access will set the standard for trusted AI ecosystems.”

AI Discovery & Auditability Will Be One of CISOs’ Top Challenges

“As AI applications and agents gain acceptance across enterprises, big and small, it will get harder for security teams to maintain a clear picture of the activity and actors inside their organization. CISOs will be responsible for new connections, data sharing, and actions that originate from non-human actors, forcing them to rethink what visibility and accountability mean. Was it a human or an agent that acted? Who is responsible for the actions an agent takes, and which sensitive data and systems are involved? Those who can attribute, audit, and govern AI-driven actions will maintain accountability and trust as autonomy expands.”


Ravi Ithal, Chief Product and Technology Officer of AI Security at Proofpoint

AI Agents Will Become the New Insider Threat.

“By 2026, autonomous copilots may surpass humans as the primary source of data leaks. Enterprises are rushing to roll out AI assistants without realizing they inherit the same data hygiene issues already present in their environments. Over-permissioned SharePoint folders, unclassified documents, and outdated access rules will allow these copilots to surface sensitive data to users who were never meant to see it.

“These agents are not simply tools; they will become identities in their own right, with each one carrying a trust score, behaving as a peer actor in the ecosystem. The old model of phishing will be replaced by ‘prompt paths,’ or avenues through which an agent is tricked or misled into extracting and exposing data. Security teams will no longer focus solely on human actors; they will be forced to treat their AI agents as first-class identities, managing their privileges, monitoring their behaviors, and scoring their risks.”


Ashish Jain, CTO at OneSpan

As AI continues to automate social engineering, bad actors will deploy hyper-realistic deepfake voice and video attacks at unprecedented speed and scale.

“Evolving generative models will easily outpace even the most forensics-driven defenses, rendering traditional detection approaches obsolete. In this new reality, trust can no longer rely on human perception. Our strongest senses of vision and hearing are now our greatest vulnerabilities.

“The future of digital trust will hinge on verifiable authenticity, anchoring every interaction in strong, phishing-resistant authentication and verification. Expect a wave of new reusable-identity use cases as platform giants continue to accelerate their efforts—from Apple’s push for mobile driver’s licenses to the EU’s expansion of its digital wallet rollout. In 2026, security will depend on calibrating deterministic and probabilistic signals to continuously confirm not just who someone claims to be, but that their identity itself is genuine—transforming digital trust from a reactive process into a built-in guarantee.”

Agentic AI is ushering in a new class of digital interactions where bots act on behalf of humans.

“The static API-to-API world was predictable, but agent-to-agent workflows demand continuous negotiation of trust, intent, and authority. Traditional authentication, designed for people, cannot yet distinguish between ‘good bots’ and ‘bad bots,’ nor can it manage the surge of machine-initiated requests. In 2026, organizations will require a more dynamic trust stack that integrates authentication, verification, and fraud detection. Success will depend on real-time identity intelligence that ensures every digital agent is verified, accountable, and authorized before it acts.”

While passkeys have steadily gained traction as a security measure, they have yet to fully replace passwords in most contexts. In 2026, that will change.

“Passkeys will reach an inflection point in 2026, shifting from ‘nice to have’ to non-negotiable as AI-driven attacks surge. Detecting every phishing attempt is no longer possible, so the security focus must turn to modern, standards-based authentication. Passkeys enable that shift, anchoring digital identity to cryptographic proof. The real opportunity lies in making authentication continuous and adaptive, extending beyond login to secure every action, transaction, and approval. Digital identity will become even more critical, and the organizations that lead will make trust invisible to users but impenetrable to attackers.”


Darryl Jones, Vice President of Product (CIAM) at Ping Identity

AI Will Reshape Trust and Discovery, From Detecting Deepfakes to What We Buy

“We can no longer trust our eyes or ears due to the rise of easy-to-access deepfakes and AI threats. It’s now more critical than ever to ensure the person on the other side of any digital interaction is who you expect. Good AI can help us understand and detect when things go awry—and also improve customer experience without sacrificing security.

“AI will also be there as we make purchasing decisions, guiding our path through the internet based on our prompt. In the end, AI and agents may decide which products to show us, even if, for example, they are on page 500 in a Google search. This will unlock new opportunities to find exactly what we are looking for, but also new and better ways that businesses and consumers can connect.”

Trust in AI Will Depend on Radical Transparency

“Brands will need to be radically transparent about how AI shapes consumer experiences, explaining not just what is being recommended, but why. Trust will depend on giving customers clear control over their data and personalization settings, ensuring AI decisions are explainable, unbiased, and privacy-respecting. Companies that combine intelligent automation with visible accountability, letting identity, consent, and ethics drive personalization, will stand out as the most trustworthy in an AI-powered world.”


Paul Laudanski, Director of Security Research at Onapsis

In 2026, deepfakes and adaptive malware will make it nearly impossible to trust what we see online.

“Executives and employees will face sophisticated impersonation and hyper-personalized attacks in everyday situations like job interviews. AI-powered tools will probe enterprise systems for vulnerabilities in ways even seasoned security teams struggle to get ahead of. At the same time, with so many AI-branded ‘solutions’ flooding the market, organizations will intensify scrutiny to validate which tools deliver real value. As AI blurs the line between real and synthetic, both AI-driven deception and product claims will become harder to trust.”


Dwayne McDaniel, Developer Advocate at GitGuardian

Workload Identity Will Take A Front Seat.

“While moving away from password and API key-based access has been part of many security and scalability conversations for the last few years, 2026 will be the year when we move from asking ‘Does this entity have the right key?’ to asking ‘What is this entity’s identity, do we trust that, and what behavior do we expect from it?’ Federated workloads are nothing new; we have seen CNCF projects like SPIFFE/SPIRE and IETF working groups like WIMSE mature to help enterprises address the challenges of authentication at scale. The technical complexity of these solutions and what is seen as a limited ROI on replacing long-lived secrets has kept these technologies as mainly ‘nice to haves’ for most organizations.”

“But now, we are entering a world of agentic AI. Now that we are finally in a position to start conceptualizing workloads as entities making decisions on our behalf, solutions are emerging rapidly to lower the barrier to entry for an identity-first approach. No matter which algorithm is running on which cluster, we are realizing that every workload, human, and agent in our environments must have a clear, provable identity.”
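
A SPIFFE ID is simply a URI of the form `spiffe://trust-domain/workload-path`, so the identity-first checks described here begin with parsing and comparing those IDs. A minimal sketch (function names are illustrative):

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust domain, workload path)."""
    parts = urlparse(uri)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a SPIFFE ID: {uri!r}")
    return parts.netloc, parts.path

def same_trust_domain(a: str, b: str) -> bool:
    """True when two workloads belong to the same trust domain."""
    return parse_spiffe_id(a)[0] == parse_spiffe_id(b)[0]
```

In a real SPIRE deployment the ID arrives inside an X.509 SVID's URI SAN rather than as a bare string, but the trust-domain and path checks are the same.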

We Will Hit More Roadblocks Around Agentic AI

“While Agentic AI is going to drive identity governance forward in 2026, the reality of how we can use AI, and generative AI in particular, is going to meet the hard reality of token costs and the effects of the worsening hallucination problem.

“While many vendors focus on the values and virtues of LLMs, the reality is that most users’ experience with these magic black boxes leads to different kinds of work and issues. While there are a lot of amazing uses of algorithms and predictive models, the promise of infinitely wide use of generative AI everywhere is going to fade away, as we see research condense into areas of code generation, predictive sorting, and triage (especially in high-volume transaction environments, like the SOC and help desk), as well as summarization and translation use cases.

“Once we accept the limitations of generative AI as a set of very specific use cases, I predict that 2026 will be the year teams embrace small language models. This is how token costs factor in. While enterprises love to add AI bots to almost any interface they can, the logic has been that we can ignore the thousands of dollars a month in token use in the short term, as we will gain enough market share and strategic advantage to outweigh those expenses. But, as MIT reported this year, 95 percent of deployments have no ROI, meaning exec teams are about to start taking LLM costs at least as seriously as they do AWS or other cloud infrastructure billing.”

We Will Find That Post-Quantum Cryptography Is Not Free.

“When was the last time you thought about the network costs of TLS? Unless you are working in a very high-throughput system where you are measuring and reducing microseconds to squeeze out value, the chances are you have taken for granted that SSL/TLS is essentially a free-to-run, lightweight process. That indeed feels true currently, but that is directly because current keys and their signatures are measured in bytes. Not GB, MB, or KB, but bytes. Until now, thanks to the magic of trapdoor functions and the limitations of hardware, lightweight cryptography, in the form of RSA keys, has proven safe and effective, keeping attackers away from our data at rest and in transit. But we are on the cusp of breaking RSA with quantum computing.

“The good news is that many competing standards are already emerging, using different levels of math and orders of magnitude of complexity, which we generally call post-quantum. No matter how many qubits we can ever put on a chip or in an arrangement, it does not seem mathematically possible to defeat lattice-based cryptography. At least we can’t foresee it right now.”
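
The size jump behind this prediction can be made concrete. The byte counts below are published sizes (FIPS 203 for ML-KEM, FIPS 204 for ML-DSA, RFC 8032 for Ed25519; RSA figures are approximate); the comparison helper is just illustrative arithmetic:

```python
# Approximate (public key bytes, signature or ciphertext bytes) on the wire.
SIZES = {
    "RSA-2048 signature": (256, 256),
    "Ed25519 signature": (32, 64),
    "ML-DSA-65 signature": (1952, 3309),       # post-quantum (FIPS 204)
    "ML-KEM-768 encapsulation": (1184, 1088),  # post-quantum (FIPS 203)
}

def size_ratio(baseline: str, candidate: str) -> float:
    """How many times more bytes the candidate puts on the wire."""
    return sum(SIZES[candidate]) / sum(SIZES[baseline])
```

Moving handshake authentication from Ed25519 to ML-DSA-65, for example, multiplies the authentication bytes by over 50×, which is exactly the "not free" network cost the prediction warns about.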


Ram Mohan, Chief Strategy Officer at Identity Digital

In 2026, creative domain extensions will define the future of digital identity.

“Today, every brand wants to be an AI company, and owning a .ai domain has become the ultimate signal of that ambition. Just as legacy domains once defined legitimacy in the early internet era, .ai has become synonymous with innovation, intelligence, and modern credibility. From scrappy startups to global enterprises, organizations are building entire brand identities around .ai, where the domain itself embodies the technology powering their growth. By using a .ai domain, brands not only validate their AI focus but also communicate it, blending technology with identity. The same shift is driving momentum behind domains like .studio, .pro, and .fyi, where form and function meet cultural relevance.

“This evolution marks a turning point in digital behavior: from availability to identity. Legacy domains were often secured based on availability. The new TLDs are secured based on relevance, meaning the domain extension itself becomes part of the marketing message. With platforms now combining registration, AI-generated websites, logos, and marketing assets, the traditional ‘buy a name, then build later’ model has transformed into a seamless, intelligent brand-creation workflow. In this new landscape, creative domain extensions are not just labels; they are the architecture of digital identity.”


Matt Overman, Chief Revenue Officer at Identity Digital

Domains will become the center of an AI-built world.

“AI has prompted a major shift in how we interact online, and its ability to streamline technical workflows will bring a new era for web development in 2026. Non-technical entrepreneurs and creators will be able to test ideas and launch businesses at an unprecedented rate. As entrepreneurs and creators harness AI tools to build more quickly, there will be an increased prioritization of brand identity.

“As a result, personalized domains will play a larger role in business strategy than ever before. Domains will be used as proof of authenticity and ownership in a rapidly evolving digital landscape. A unique domain extension will serve as a badge of credibility and a cornerstone of verified personhood, particularly as AI-generated content blurs authenticity. Domains will be the critical element that connects real people, real brands, and verifiable sources.”


Kirsty Paine, Field CTO & Strategic Advisor at Splunk

In 2026, deepfakes will further infiltrate the workplace due to “parasocial engineering.”

“In 2026, deepfakes will go deeper than most people expect, from impersonating celebrities and politicians to entering everyday workplace interactions. In a world built on remote meetings, Slack messages, and broadcast-style leadership communication, deepfakes will be used to manipulate the one-sided emotional bonds we form with people we ‘know’ from screens – an effort I describe as ‘parasocial engineering.’ In the next year, AI-powered social engineering campaigns will be used to spoof executives, trusted vendors, and partners with alarming precision.”


George Prichici, Vice President Of Products at OPSWAT

Trust Is Emerging as a Primary Vulnerability

“Third-party vendors and ‘trusted’ integrations remain soft targets. CISOs are realizing that focusing budgets solely on endpoints, identity, or edge security creates an imbalance: a fortified front door, but an open side entrance. The path forward is not zero trust for everything, but smarter, consistent processes that elevate defenses across all channels, including partners, APIs, and supply chains.”


Kevin Quigley, Director of Process Improvement at Wiley

“Over the last 2 years, we have seen a shift toward increasingly agentic AI solutions, with AI agents evolving from research experiments to tools that drive material value. As AI agents become reliably capable of taking on more complex tasks, multi-agent systems will become a common part of how work is done. We will also see AI agents being used in more creative ways and in new domains, facilitating scalable automation that would not have been feasible otherwise. In contrast, we will see a decrease in redundant “information search” agents as standard solutions emerge for the most common use cases and specialized solutions focus on industry-specific needs.”


Alex Quilici, CEO of YouMail

AI Supercharges Voice Scams (Including the Ones That Sound Like You).

“Scammers used to need big call centers to run large-scale fraud, but not anymore. AI will handle it all. Generative tools will write customized texts, voice scripts, and emails, and even respond to victims in real-time. That will make scams faster, cheaper, and harder to trace. We’ll move from most robocalls connecting someone to a person to most robocalls connecting someone to an AI bot, at least at first. The good news is that the same AI techniques used by bad actors can also be used to detect patterns, flag impersonation, and shut down fraud at scale (if companies are proactive).”

Scammers Will Master the Art of Sounding Legit.

“Every text, call, or voicemail will deserve a second look. In 2026, scam calls will sound more real than ever. Fraudsters are now using cloned voices that match your bank’s virtual assistant or even your favorite delivery service’s tone. You’ll get a call from what sounds like your bank, with the right voice, the right number, and a perfect script. Only it isn’t them. Scammers will use these ‘audio twins’ to trick people into confirming account info or sending money. We’ll start seeing these ‘brand-clone scams’ show up everywhere, from credit cards to delivery updates. The only defense is verification outside the call itself. If your gut says something’s off, hang up and call the official number.”


Ramprakash Ramamoorthy, Director of AI Research at ManageEngine

“In 2026, the age of simple MFA and once-secure facial scans is officially over. We are about to witness a serious security reckoning for organizations, stemming from a dangerous unpreparedness gap: our survey shows 37% of companies still haven’t even defined formal guidelines for AI use at a user level. Attackers will weaponize deepfakes to impersonate CEOs and CXOs for high-value transfers, and deploy synthetic identities to infiltrate networks through weak onboarding. If your identity defense isn’t using real-time AI-driven anomaly detection, liveness, and behavioral biometrics, you are an inevitable target for this new generation of sophisticated identity attacks.

“Furthermore, the biggest, stealthiest threat in 2026 won’t be human—it will be the ‘Ghost Identity’ problem: dormant, abandoned AI and automation bots that still hold sensitive API keys, credentials, and excessive privileges. As IT leaders rush to deploy automation, they are inadvertently creating a massive, unmanaged threat surface. I anticipate the first large-scale breach caused by a rogue, forgotten bot, likely in the financial services sector, as attackers target these exposed API keys to gain discreet, persistent access. Machine identities are rapidly becoming the single most critical threat surface.”


Romanus Prabhu Raymond, Director of Technology at ManageEngine

“While human error and phishing will always be a factor, the strategic danger in 2026 will come from unmanaged AI agents with administrative privileges. They represent the fastest-growing and least-governed attack surface. Our focus on securing the human user has created a massive blind spot: we are missing consistent, automated lifecycle management and privilege governance for these autonomous machine identities. The industry must realize that if you can’t govern the agent’s life, you can’t control its death, and an orphaned AI agent is a high-powered, ticking time bomb.

“Beyond AI, in 2026, the complexity introduced by overlapping global identity mandates will no longer be a mitigating factor—it will be an accelerator of risk. IT leaders are already struggling to balance security and the demands of regulations like NIS2 and DORA. The pressure cooker environment will inevitably lead organizations to selectively disable or bypass key security controls like high-assurance Multi-Factor Authentication (MFA), not because they don’t value security, but due to sheer compliance fatigue and interoperability gaps between systems. Ultimately, compliance sprawl is creating a gap where organizations may be legally covered but technologically vulnerable.”


Ashley Rose, CEO & Co-founder of Living Security

“Human Error” Won’t Be the Villain Anymore.

“We’re finally going to stop blaming employees for every security incident. With AI tools revealing why people make mistakes (confusing workflows, overly complex approvals, poor user experience), companies will realize that most ‘human error’ isn’t a result of people not caring about security. It’s that the system around them is reactive, not predictive. Leaders will start redesigning processes and reducing friction, rather than handing out more training. Teams that make this shift will see fewer missteps, fewer identity-based incidents, and happier employees.”

AI Will Find Identity Weak Spots Faster Than IT Can Patch Them.

“This is the year AI starts discovering identity flaws at a speed no human team can keep up with. Attackers will use AI to map weak authentication, stale permissions, and risky accounts in minutes. This will push companies into continuous identity hardening instead of quarterly cleanups. By the end of the year, identity hygiene will be a core board metric, not an IT chore.”
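The shift from quarterly cleanups to continuous identity hardening can be made concrete with a small scoring loop. This is an illustrative sketch, not any vendor's product: the `Account` fields, point weights, and the 50-point threshold are all assumptions chosen to show the pattern of scanning for weak authentication, stale accounts, and excess privilege.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    mfa_enabled: bool
    days_since_last_use: int
    privilege_count: int

def hygiene_score(acct: Account) -> int:
    """Higher score = riskier identity. Weights are illustrative policy choices."""
    score = 0
    if not acct.mfa_enabled:
        score += 40                      # weak authentication
    if acct.days_since_last_use > 90:
        score += 30                      # stale, possibly abandoned
    if acct.privilege_count > 10:
        score += 30                      # over-privileged
    return score

def accounts_to_harden(accounts, threshold=50):
    """Return risky accounts, worst first, for a continuous review queue."""
    flagged = [a for a in accounts if hygiene_score(a) >= threshold]
    return sorted(flagged, key=hygiene_score, reverse=True)
```

Run continuously against a directory export, a report like this becomes the kind of trackable hygiene metric a board can review, rather than a one-off cleanup project.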

Humans and AI Will Finally Be Managed as One Workforce.

“AI agents won’t feel like just ‘tools’ anymore; they’ll act like coworkers. Companies will have to define expectations, access policies, and performance rules for both humans and AI agents. Organizations that try to manage the two separately will end up with gaps, blind spots, and some awkward surprises. The smart ones will merge them under one unified workforce risk model that makes oversight much clearer and more effective.”


Vivin Sathyan, Sr. Technology Evangelist at ManageEngine

“I anticipate that while many organizations have initially bolted AI capabilities onto existing legacy identity systems, a critical mass will quickly recognize this is a strategy built to fail in 2026. The problem isn’t just budget; it’s architecture. When companies plan to augment rather than modernize, the critical capability they’ll be missing a year from now is holistic AI-native identity governance and automation.

“Simply bolting AI onto a legacy stack lacks the depth for proactive risk management, adaptive access controls, and continuous authentication. Forward-thinking leaders must pursue strategic reinvestment in fully integrated identity security architectures—not just band-aids of AI overlays—to achieve sustainable security resilience and escape the never-ending trap of complexity and integration challenges.

“The identity talent shortage is not just an HR problem; it will intensify into a security breaking point in 2026. Unless IT leaders immediately pivot to treating user experience as a competitive asset, they will be on the brink of losing their best talent. The security team is already drowning in operational friction; overlooking the human factor with clunky, frustrating systems is a short-sighted approach. Without prioritizing UX to reduce operational drag, the talent gap will widen, driving burnout, organizational security incidents, and major operational risks. IT leaders must champion integrated strategies that make worker experience the cornerstone of identity security—it is the non-negotiable prerequisite for building a resilient business.”


Jesse Scott, Chief Security and Trust Officer at Opal Security

Machine identities are the biggest security threat hiding in plain sight.

“Enterprises run on swarms of API keys, service accounts, and background processes that now outnumber humans many times over. They accumulate privilege quietly and rarely expire—exactly how attackers have breached environments through long-lived cloud keys, overbroad defaults, or forgotten service creds. AI code assistants occasionally mint new tokens or scaffolds with permissive access, adding more drift to an already swollen layer. The next breach won’t crack a login; it’ll walk through a machine identity no one remembered creating.”

AI agents are the new superusers, one prompt away from going sideways.

“Once an AI system can approve access, rotate secrets, or trigger changes, its mistakes scale instantly. We’ve already seen support bots coaxed into leaking sensitive data and autonomous agents take unintended actions from subtle prompt manipulation. These systems aren’t edge cases; they’re privileged identities. Tight scopes, full logging, instant revocation: treat them like admins, because the exploit surface is now persuasion, not malware.”
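The "treat them like admins" prescription — tight scopes, full logging, instant revocation — can be sketched in a few lines. This is a minimal illustration under assumed names (`AgentCredential`, the scope strings, the 300-second TTL), not a real vendor API; in practice this role is played by a secrets manager or identity provider.

```python
import logging
import secrets
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

class AgentCredential:
    """A short-lived, narrowly scoped credential for an AI agent (sketch)."""

    def __init__(self, agent_id, scopes, ttl_seconds=300):
        self.agent_id = agent_id
        self.scopes = frozenset(scopes)       # tight scopes, fixed at issue time
        self.token = secrets.token_hex(16)
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False

    def authorize(self, action):
        """Deny by default; log every decision for the audit trail."""
        allowed = (not self.revoked
                   and time.time() < self.expires_at
                   and action in self.scopes)
        log.info("agent=%s action=%s allowed=%s", self.agent_id, action, allowed)
        return allowed

    def revoke(self):
        """Instant kill switch, independent of expiry."""
        self.revoked = True
        log.info("agent=%s revoked", self.agent_id)
```

A support bot issued `{"tickets:read"}` can read tickets but is denied `secrets:rotate` even if a crafted prompt asks for it, and a single `revoke()` call cuts it off mid-session — the containment properties the quote argues for.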

Regulators aren’t ready for self-driving identities, but auditors are already circling.

“AI is generating infrastructure and granting access where humans once signed off, yet most of these decisions leave no explainable trail. Early audits have already flagged AI-produced configurations as unauditable, and regulators are beginning to ask who is accountable when an algorithm misassigns privilege. In 2026, companies will need full decision logs, human review for high-impact actions, and a clear record of why automated systems acted at all. The compliance gap is where risk hides, unless you close it first.”


Arun Shrestha, Managing Director at KeyData Cyber, and CEO and Co-Founder at BeyondID

The Rise of AI-Native Identity Defense

“AI introduces autonomy. AI agents can now create, deploy, and modify other agents without human involvement. Each agent becomes a non-human identity with credentials, privileges, and access to sensitive systems.

“By 2026, enterprises will move from managing thousands of human users to governing hundreds of thousands, or even millions, of machine identities. Each one expands the attack surface. A single over-privileged AI identity can enable autonomous compromise at machine speed. This forces a shift to Defense by Default, where security is built into every AI system from inception. Organizations will enforce first-class identity for every agent, least-privilege access by default, behavioral constraints, and mandatory human oversight for high-risk actions. Security will no longer be wrapped around AI; it will be embedded within it.”

AI Agents in Action: Streamlining the Identity Lifecycle

“Legacy identity governance was built for static environments and heavy human intervention. That model no longer scales. As identity volumes explode, organizations will increasingly deploy AI agents to manage the identity lifecycle itself.

“By 2026, AI agents will automate onboarding, access provisioning, role changes, and deprovisioning based on HR data, business logic, and real-time context, executing in minutes what once took days. These agents will document every action for audit readiness while continuously cleaning up dormant accounts and mismatched entitlements. The benefits are immediate: faster onboarding, reduced help desk strain, improved security posture, and measurable ROI. The constraint will be operational readiness. Organizations must maintain clean processes, transparent rules of engagement, and continuous monitoring to keep agent-driven IAM trustworthy.”

Just-in-Time Privilege Becomes the New Norm

“Standing privilege remains one of the most significant and avoidable security vulnerabilities in the enterprise. By 2026, that model will be phased out. Just-in-Time (JIT) access will become the standard. Access is granted only for a specific task and session, and only after a risk assessment has been completed. Once the work is done, access is revoked. AI continuously monitors whether the request remains aligned with the stated intent and the user’s typical behavior.

“The true breakthrough lies in the authorization model. Static, role-based entitlement systems will no longer be viable. Organizations will transition to dynamic, policy-driven authorization that considers context, identity signals, and real-time risk. This approach makes JIT not just time-limited, but purpose-limited. Access is narrowly tailored to what the user needs to do, why, and under what conditions.

“As attackers persist in exploiting over-permissioned accounts, JIT combined with adaptive authorization will shrink the enterprise attack surface more effectively than any other security control available.”
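The JIT lifecycle described above — risk check, time- and purpose-limited grant, revocation on completion — can be outlined in a small broker. This is a hedged sketch: the `JitBroker` name, the pluggable `risk_check` policy, and the 15-minute default duration are illustrative assumptions, standing in for a real PAM platform's policy engine.

```python
import time
from dataclasses import dataclass

@dataclass
class JitGrant:
    user: str
    resource: str
    purpose: str            # access is purpose-limited, not just time-limited
    expires_at: float
    active: bool = True

class JitBroker:
    """Just-in-Time access sketch: grant only after a risk check,
    scope to one task, and revoke on completion or expiry."""

    def __init__(self, risk_check):
        self.risk_check = risk_check   # pluggable, context-aware policy
        self.grants = []

    def request(self, user, resource, purpose, duration_s=900):
        if not self.risk_check(user, resource, purpose):
            return None                # deny: risk assessment failed
        grant = JitGrant(user, resource, purpose, time.time() + duration_s)
        self.grants.append(grant)
        return grant

    def is_allowed(self, grant):
        return grant.active and time.time() < grant.expires_at

    def complete(self, grant):
        grant.active = False           # work done: no standing privilege accrues
```

Because every grant carries a purpose and an expiry, there is never a standing entitlement for an attacker to find: a compromised account holds nothing between tasks.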

AI becomes its own identity

“In 2026, security teams will shift from treating AI as a tool to treating it as a first-class identity. The explosion of AI agents and non-human service accounts is creating an attack surface too large for rules-based security. Organizations will need autonomous, AI-native identity defenses that can detect and adapt at machine speed.”


Bojan Simic, CEO of HYPR

“By 2026, multi-factor identity verification will not only influence security strategy—it will define the fundamental architecture of enterprise protection. The legacy era of defending passwords and network perimeters is over. The real vulnerability lies at the point of human interaction, and the entire security stack is irrelevant if you cannot verify the user behind the keyboard. Social engineering has already shifted to AI-driven impersonation at scale, blending seamlessly into business workflows. Legacy authentication can’t stand up to that. Identity-first security isn’t a future plan—it’s the immediate, non-negotiable foundation of modern defense.

“Forward-thinking organizations will treat identity verification as continuous and multi-layered—anchoring security to the human and their trusted device. The helpdesk, once the easiest way in, will be transformed into a high-assurance identity checkpoint powered by proof, not intuition, and strengthened with biometric liveness, device-bound credentials, and behavioral intelligence working together behind the scenes. Automation is the new battleground: attackers are using AI to generate trust at machine speed; defenders must prevail by confirming authenticity with the same speed and precision. The companies that succeed in 2026 will build the capacity to verify what is real instantly.”


Brian Soby, CTO and Co-Founder at AppOmni

“We’re likely to see the perimeter continue to erode in 2026, through concepts such as zero trust network access (ZTNA), which begin at the user’s device and extend to their target destination, whether that’s within a virtual private cloud or a SaaS application. This transport layer of a Zero Trust architecture will likely become a common, if not dominant, method of securely connecting devices to destinations, making the traditional perimeter increasingly irrelevant.

“Most current zero trust and identity solutions are not keeping pace with real-world attacker tactics, techniques, and procedures (TTPs). As a result, in 2026, we’ll see more of what we’ve been seeing: Attacks adapting to target the weakest links. There’s no question that the success of ShinyHunters/UNC6040 and Drift/UNC6395 has caught the attention of other threat groups. They will view those incidents as clear examples of the weaknesses in today’s zero trust technologies and will double down on similar attack methods.”


Anand Srinivas, VP of Product and AI at 1Password

AI Agents Will Break Identity Silos and Force a Security Revolution

“Today, few organizations have deployed agentic AI in production. But, as more companies begin to operationalize agentic AI at scale, its unpredictable interactions will expose a new class of identity and access management challenges. Up until now, identity, secrets, and access management solutions have been siloed across different organizations responsible for application or workforce identity security. That worked when applications were deterministic, well-bounded entities all operating within centralized policy frameworks. However, agentic AI behaves as both traditional software and as a user that operates outside existing identity systems, thereby introducing new identity threat vectors.

“Securing this new paradigm will require breaking down the identity silos and creating a unified, policy-driven identity fabric that governs access deterministically, not probabilistically. In doing so, a new generation of cohesive, secure-by-default identity management solutions will emerge that protect all access for human, machines, and AI identities.”


Patrick Sullivan, CTO of Security Strategy at Akamai Technologies

Reputation manipulation will become the fifth layer of extortion in 2026.

“As ransomware evolves beyond encryption, theft, and DDoS, attackers will weaponize misinformation to erode trust and amplify pressure. Instead of just leaking stolen data, threat groups may turn to fabricating or altering content, such as falsified emails, AI-generated screenshots, or deepfaked statements, to damage reputations and force payments. In this new world, organizations must blend cybersecurity with crisis communications, digital forensics, and rapid-response verification to counter false narratives. In 2026, the ability to prove authenticity faster than attackers can spread lies will define resilience.”


Karthik Swarnam, Chief Security and Trust Officer at ArmorCode

Quantum Risk Gets Real.

“Quantum computing will soon be harnessed by both security teams and adversaries, pushing the conversation from theory to action. Attackers are already harvesting encrypted data for future decryption, while defenders explore quantum power for stronger modeling and detection. As this new risk layer emerges, organizations will invest heavily in data protection programs, mapping encryption, and preparing for migration to quantum-safe algorithms. Those that integrate quantum readiness into overall risk management in 2026 will be best positioned to adapt as breakthroughs accelerate.”

AI Agents Force us to Rethink Privileged Access Controls.

“As organizations adopt AI agents to perform tasks across infrastructure and security operations, they must be treated like privileged identities, with clear access boundaries, attribution, logging, and human oversight. Next year, the focus will shift to building the guardrails required to prevent a single unintended agent action from cascading across an entire environment.”


Valentin Vasilyev, CTO and Co-Founder at Fingerprint

AI-Powered Fraud Will Overwhelm Teams Still Relying on Legacy Defenses

“AI-driven fraud already makes up 41 percent of all fraud. It will only continue to increase as it gets more difficult to discern whether a message is genuine or phishing, a video is real or a deepfake, or whether a website visitor is a human, a bot, or an AI agent acting on behalf of a human. In the fraud space, ‘fraud-as-a-service’ has evolved into a dark web marketplace, where bad actors sell or rent hacking tools, stolen data, and ready-made attack kits.

“Suddenly, deploying sophisticated fraud schemes is possible for anyone with access to generative AI, even those with limited technical skills. Legacy defenses like CAPTCHA and multi-factor authentication (MFA) aren’t enough anymore. Companies, particularly in the banking and fintech industries, will need to adopt more adaptable solutions that can analyze behavioral, device, network, and other signals in real-time to stop AI-driven attacks before they can do real damage.”


Paul Walker, Field Strategist at Omada

By 2026, if you’re not treating non-human identities (NHIs) as first-class citizens in your identity program, you’re fundamentally exposed.

“Traditional identity governance and administration (IGA) was built for humans. We’re discovering huge numbers of machine identities that have never been governed. OWASP has released its Top 10 Non-Human Identity Risks for 2025, and the fact that ‘improper offboarding’ ranks as the number one risk reveals a fundamental gap: organizations have no systematic process for deprovisioning machine identities when services are deprecated, applications are sunset, or integrations are discontinued.

“Consider what happens when a development team spins up a service account for a proof-of-concept project. That credential often persists long after the project ends, maintaining broad access to production databases or cloud resources. Multiply this scenario across hundreds of development initiatives, and you have thousands of orphaned credentials each representing a potential attack vector.

“Attackers increasingly target these ‘ghost’ identities precisely because they’re unmonitored and frequently over-privileged. If your IGA can’t see it, you can’t govern it. The proliferation has been exponential. Cloud-native architectures, microservices, DevOps automation, and AI agents have each contributed to an explosion of machine-to-machine authentication. Every CI/CD pipeline, every containerized application, every automated integration creates new credentials that often live indefinitely, accumulate privileges over time, and remain invisible to traditional IGA platforms that were architected in an era when ‘identity’ meant a person with an employee ID.”
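A first step toward governing these ghost identities is a recurring sweep that cross-references the credential inventory against living projects and rotation policy. The sketch below is illustrative only: the inventory tuples, the `ACTIVE_PROJECTS` set, and the 90-day rotation window are assumed examples, not a reference to any particular IGA product.

```python
from datetime import date, timedelta

# Hypothetical inventory rows: (credential_id, owning_project, last_rotated)
INVENTORY = [
    ("svc-poc-payments", "poc-payments", date(2023, 1, 10)),
    ("svc-ci-builder",   "platform",     date(2025, 10, 1)),
    ("svc-old-import",   "data-import",  date(2022, 6, 5)),
]
ACTIVE_PROJECTS = {"platform"}   # poc-payments and data-import were sunset

def orphaned_or_stale(inventory, active_projects, max_age_days=90, today=None):
    """Flag machine credentials whose owning project is gone ('orphaned')
    or whose key hasn't been rotated within the policy window ('stale')."""
    today = today or date.today()
    flagged = []
    for cred_id, project, last_rotated in inventory:
        orphaned = project not in active_projects
        stale = (today - last_rotated) > timedelta(days=max_age_days)
        if orphaned or stale:
            flagged.append((cred_id, "orphaned" if orphaned else "stale"))
    return flagged
```

Scheduled daily, a sweep like this surfaces exactly the proof-of-concept service account described above: the project ended, the credential lived on, and no review ever caught it.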

The gap between “digital transformation” and “basic identity hygiene” will remain catastrophically wide. 

“2025 marked an inflection point where non-human identity security transitioned from a niche concern to a mainstream crisis. It is surprising that in late 2025, mature organizations with significant security investments could still be completely paralyzed by compromised machine credentials that hadn’t been rotated in years and social engineering attacks on third-party helpdesks.

“Take Jaguar Land Rover’s catastrophic breach that forced a complete global production shutdown that lasted over four weeks and cost an estimated £50 million per week. Another example is Marks & Spencer’s devastating Easter weekend attack via a third-party vendor compromise that shut down online operations for six weeks, resulting in £270-440 million in combined losses. What makes these incidents particularly alarming is the attack vector: both breaches originated through compromised non-human identities in partner systems—service accounts, API keys, and third-party access tokens that had never been properly governed, rotated, or monitored. These weren’t theoretical risks. They were billion-dollar disasters caused by the exact NHI governance failures that security experts had been warning about. The UK government provided the first-ever government-backed loan ($2 billion) to a company for a cyber incident, signaling this was considered a national economic crisis, not just a corporate problem.”


Want more insights like these? Register for Insight Jam, Solutions Review’s enterprise tech community, which enables human conversation on AI. You can gain access for free here!
