Navigating the AI Revolution: Fostering Team Resilience in a New Era of Intelligent Threats

Laura Ellis, VP of Data and AI for Rapid7, explains why companies must foster team resilience in their security efforts if they want to defend against increasingly intelligent threats. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.
The cybersecurity industry is at an inflection point as we barrel into the AI era, driven not only by increasingly sophisticated attackers but also by the rise of smarter, more advanced tools. Much like the Industrial Revolution mechanized the physical world, artificial intelligence is rapidly transforming the cognitive landscape. For security teams, this means redefining the rules of defense and engagement.
Defenders are firsthand witnesses to how generative AI and autonomous systems dramatically increase the speed, scale, and complexity of attacks. However, our security postures are also evolving with access to a new generation of tools that enhance automation and strengthen defenders’ efforts. We find ourselves at a pivotal moment that demands new mindsets, guardrails, and definitions of what it means to lead during this era of intelligent threats.
The Rise of AI-Powered Threats: Faster, Not Always Smarter
From sophisticated social engineering attacks to deepfakes and voice cloning that make impersonation more realistic than ever, AI has undeniably transformed the threat landscape. Malicious actors no longer need to be expert coders to execute successful phishing campaigns, lowering the barrier to entry for cyberattacks. While these threats may not be more intelligent at their core, they are significantly more scalable and harder to detect using traditional methods. They’re faster, cheaper, and everywhere.
This shift undermines longstanding assumptions in security operations. Traditional threat detection methods are becoming less reliable, as many current systems are built on the assumption that threats can be detected through static signatures or heuristic patterns. But AI-generated attacks can morph quickly, often bypassing those conventional defenses. To keep up, security teams must prioritize verification over mere identification. Multi-factor authentication, for example, must now be the norm: validating identities through multiple channels, enforcing second-factor callbacks, and even using designated safe words in verbal communications. These strategies treat trust as something to be actively earned, not passively assumed.
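As a concrete illustration (a minimal sketch, not a prescribed implementation), the Python snippet below shows what treating trust as earned rather than assumed can look like for a sensitive request: the action is honored only after an out-of-band callback confirms a pre-shared safe word. All names, channels, and values here are hypothetical.

```python
# A minimal sketch of "verification over identification": a sensitive request
# (e.g., a payment-detail change asked for over email or voice) is only honored
# after it is confirmed through a second, independent channel. The names below
# (verify_via_callback, KNOWN_SAFE_WORDS) are illustrative, not a product API.

KNOWN_SAFE_WORDS = {"finance-team": "bluebird"}  # pre-shared verbal safe words


def verify_via_callback(requester_id: str, spoken_safe_word: str) -> bool:
    """Simulate a second-factor callback: call the requester back on a number
    already on file and confirm the pre-shared safe word."""
    expected = KNOWN_SAFE_WORDS.get(requester_id)
    return expected is not None and spoken_safe_word == expected


def handle_sensitive_request(requester_id: str, request: str, spoken_safe_word: str) -> str:
    # Identification alone (the email "looks right", the voice "sounds right")
    # is no longer enough; trust is earned via the second channel.
    if verify_via_callback(requester_id, spoken_safe_word):
        return f"APPROVED: {request}"
    return f"REJECTED: {request} (out-of-band verification failed)"


if __name__ == "__main__":
    print(handle_sensitive_request("finance-team", "update payroll account", "bluebird"))
    print(handle_sensitive_request("finance-team", "update payroll account", "robin"))
```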
Agentic AI and the Changing Shape of Security Teams
AI in cybersecurity is no longer confined to a passive role. We’ve entered the era of agentic AI, in which tools go beyond decision-making to take autonomous action. AI agents are evolving from narrow-task assistants to active collaborators. Agents can triage alerts, enrich data with threat intelligence, and even initiate semi-automated protocols to isolate compromised systems or block malicious traffic. With AI handling these tasks, human talent can be redirected toward high-impact strategy and forensics.
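To make this concrete, here is a minimal sketch, in Python, of a semi-automated, agent-style triage step: an alert is enriched with threat intelligence, scored, and either queued for containment or escalated to a human analyst. The intel feed, thresholds, and function names are illustrative assumptions, not any specific product’s workflow.

```python
# A minimal sketch of semi-automated, agent-style alert triage: enrich an alert
# with threat intelligence, score it, and either recommend isolation or hand it
# to a human analyst. The mock intel feed and the 0.9 threshold are assumptions.

MOCK_THREAT_INTEL = {"203.0.113.7": {"reputation": "malicious", "confidence": 0.92}}


def enrich(alert: dict) -> dict:
    """Attach threat-intel context to the raw alert."""
    intel = MOCK_THREAT_INTEL.get(alert["source_ip"], {"reputation": "unknown", "confidence": 0.0})
    return {**alert, "intel": intel}


def triage(alert: dict) -> str:
    enriched = enrich(alert)
    score = enriched["intel"]["confidence"]
    if enriched["intel"]["reputation"] == "malicious" and score >= 0.9:
        # High-confidence match: initiate containment, but log it for human review.
        return f"ISOLATE host {enriched['host']} (confidence {score:.2f}); notify analyst"
    # Everything else stays with a human, keeping the agent a tool, not an authority.
    return f"ESCALATE alert {enriched['id']} to analyst queue"


if __name__ == "__main__":
    print(triage({"id": "A-1042", "host": "web-01", "source_ip": "203.0.113.7"}))
    print(triage({"id": "A-1043", "host": "db-02", "source_ip": "198.51.100.4"}))
```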
But this leap in autonomy brings new challenges. If security leaders can’t explain where and how AI agents are making decisions, they’re neither collaborating with AI nor leveraging it—they’re surrendering responsibility to the technology. Governance is what keeps them accountable and in control. That starts with understanding where generative AI services live across the environment, enforcing security best practices specific to AI and ML workflows, and maintaining unified visibility that helps teams quickly distinguish what’s risky from what’s routine.
Guardrails must be built into every stage of deployment, from structured review cycles to escalation and continuous oversight. These systems must be transparent and held to the same accountability standards as their human counterparts. It’s not just about adding AI to the team but also ensuring it remains a tool, not a threat.
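One lightweight way to picture these guardrails (again, a hedged sketch rather than a standard) is a registry of every generative AI service in the environment, each with an owner, a risk tier, and a structured review cycle; anything overdue for review is flagged for escalation. The field names and review windows below are illustrative assumptions.

```python
# A minimal sketch of an AI-usage registry with guardrail checks: every generative
# AI service is recorded with an owner, a risk tier, and a last review date, and
# anything past its review window is escalated. Tiers and windows are assumptions.

from datetime import date, timedelta

REVIEW_WINDOWS = {"high": timedelta(days=30), "medium": timedelta(days=90), "low": timedelta(days=180)}

AI_SERVICE_REGISTRY = [
    {"name": "code-assistant", "owner": "eng-platform", "risk": "medium", "last_review": date(2025, 1, 10)},
    {"name": "triage-agent", "owner": "sec-ops", "risk": "high", "last_review": date(2024, 11, 2)},
]


def overdue_reviews(registry: list, today: date) -> list:
    """Return the services whose structured review cycle has lapsed."""
    flagged = []
    for svc in registry:
        window = REVIEW_WINDOWS[svc["risk"]]
        if today - svc["last_review"] > window:
            flagged.append(f"{svc['name']} (owner: {svc['owner']}, risk: {svc['risk']})")
    return flagged


if __name__ == "__main__":
    for item in overdue_reviews(AI_SERVICE_REGISTRY, date(2025, 6, 1)):
        print("ESCALATE for review:", item)
```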
Talent and Expertise in an AI-Driven World
With AI accelerating every aspect of security operations, the definition of “expertise” is shifting under our feet. On the one hand, AI-assisted coding tools enable developers with minimal training to produce functional code, helping smaller engineering teams achieve significant productivity gains through “vibe coding” while making experienced engineers more efficient.
On the other hand, there’s a growing risk that overreliance on automation will erode the deep, contextual knowledge that’s critical in a crisis. Consider aviation: a junior pilot trained on autopilot systems may fly a commercial route with ease until a system fails midair and manual expertise is required to land the plane safely. In cybersecurity, the same principle applies. AI can support but not replace deep domain understanding.
Security teams need to remodel and realign their workflows in the era of AI. Developers, once expected to maintain broad knowledge of the attack surface, now need to become increasingly specialized in training and architecting models. Analysts must evolve into skilled operators of AI who can direct, question, and refine systems to deliver trusted, actionable outcomes. Security leaders need to invest not just in advanced tools, but also in training. That means upskilling teams on AI fundamentals, fostering fluency in emerging technologies, and building a culture that prioritizes continuous learning over static job descriptions.
Accountability Without Ownership: Governing AI Responsibly
It’s easy to assume that AI governance starts and ends with the teams building the models, but that’s only part of the picture. In reality, some of the most significant responsibilities lie with those applying AI in day-to-day operations. Even if an organization only uses pre-built AI tools, those usage choices have real-world consequences, from escalating energy consumption to reinforcing biased datasets. AI risks are business risks, and governance isn’t about authorship but impact.
Think of it this way: you may not run an oil company, but if you drive a gas-powered car, your choices still contribute to emissions. Similarly, security teams using AI have a responsibility to assess and mitigate the downstream effects of the tools they adopt. This is an opportunity for forward-thinking security leaders to model what ethical AI adoption can look like. Establishing internal AI use guidelines, conducting bias audits, and demanding transparency from vendors are just a few ways to lead with integrity, even in a landscape where regulation is still catching up.
Looking Ahead: Navigating Leadership in an Era of Change
Like any major technological shift, the rise of AI has sparked both enthusiasm and skepticism. Regardless of where you stand, whether as a critic, an advocate, a defender, or even a threat actor, AI is embedded in the fabric of cybersecurity. This transformation has no finish line, only acceleration. For security leaders, success will hinge less on fixed protocols and more on the ability to evolve.
It’s not enough to secure today’s systems; you have to be ready to change your approach tomorrow. That means embracing modular architectures, automating routine tasks while maintaining human oversight, and continuously reassessing how emerging technologies shape the threat surface. Above all, it means rejecting complacency. The age of intelligent threats demands intelligent leadership grounded in ethics, driven by curiosity, and unafraid to rewrite the playbook. We’re witnessing the birth of a new security paradigm. The question is: will your team be ready to lead it?