Shadow AI Joins Shadow IT, Creating New Challenges for Risk & Security Teams

LogicGate’s Nicholas Kathmann offers commentary on shadow AI and how it creates new challenges for risk and security teams. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

People are always searching for a simpler way to do things. “There’s an app for that” has become a cliché as developers look for new ways to make the lives of their users easier. But while a scheduling app or a notetaking feature might be helpful in your personal life, the dynamic changes when these applications are used in a work environment. If a shady calendar app steals your personal information, that’s one thing. But if it steals your employer’s data—or worse, customer or client data—that’s an unacceptable risk. The use of unapproved applications on company devices has become known as “shadow IT,” and it poses a significant problem for security and IT departments.

The advent of AI has exacerbated the issue further. After all, AI is the ultimate Swiss army knife—employees are using generative AI tools to do everything from summarizing articles to analyzing complex data sets—and this is creating significant challenges for security and risk management teams. Much of the risk surrounding today’s generative AI tools centers on the improper sharing of sensitive or confidential data, and if risk management teams cannot control (or even see) what information is being shared with those tools, that’s a real problem. Today’s organizations don’t just need to worry about shadow IT—they need to recognize and mitigate the threat of shadow AI as well.

The Rising Threat of Shadow AI

When it comes to shadow AI, visibility is one of the biggest challenges. It’s not just that organizations lack the ability to detect unauthorized AI usage—it’s that many aren’t even looking. The 2025 IBM Cost of a Data Breach report focused heavily on the issue of shadow AI, and researchers found that just 34% of organizations perform regular checks for unsanctioned AI use, leaving employees free to experiment with a wide range of AI tools with little to no oversight. That’s less than ideal, especially since 20% of the organizations studied in the report indicated that they have suffered a breach that involved shadow AI. Worse still, breaches involving shadow AI cost organizations an average of $670,000 more than those without the involvement of shadow AI.

Unfortunately, the fallout from shadow AI doesn’t end there. The involvement of shadow AI in a data breach was found to result in a greater volume of personally identifiable information (PII) and intellectual property being compromised. Additionally, that data was frequently stolen from multiple locations, highlighting the fact that unauthorized AI usage can create risks and vulnerabilities that span a wide range of environments. The problem is so severe that IBM now considers shadow AI one of the three most costly breach factors tracked in its annual report, even overtaking the ongoing skills shortage. Shadow AI isn’t just a theoretical problem anymore—it is having a real, quantifiable impact on businesses across a wide range of industries.

Establishing Guardrails Around Shadow AI

While it’s true that identifying and preventing the use of shadow AI is difficult, there are simple steps that organizations can take to mitigate its impact. Generative AI tools like ChatGPT, Gemini, Perplexity, and others are freely available to the public—but more importantly, they are largely browser-based. IT and security teams can (and should) track when employees visit those websites, and if it is determined that employees are regularly turning to ChatGPT during the course of their work, it’s a good idea to examine what information they may be sharing.
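For teams that want to operationalize that visibility, a minimal sketch along these lines can surface how often generative AI sites are being visited and by whom. The log format, file name, and domain list below are assumptions for illustration; in practice, the data would come from whatever secure web gateway or proxy the organization already runs.

```python
import csv
from collections import Counter

# Hypothetical domains associated with public generative AI tools.
# A real list would be maintained alongside the organization's AI policy.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "www.perplexity.ai",
}

def summarize_ai_visits(log_path: str) -> Counter:
    """Count visits to known generative AI domains per user.

    Assumes a CSV proxy log with 'user' and 'domain' columns;
    adjust the field names to match the gateway actually in use.
    """
    visits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("domain", "").lower() in AI_DOMAINS:
                visits[row.get("user", "unknown")] += 1
    return visits

if __name__ == "__main__":
    for user, count in summarize_ai_visits("proxy_log.csv").most_common(10):
        print(f"{user}: {count} visits to generative AI sites")
```

A report like this doesn’t reveal what was shared with those tools, but it tells risk teams where to look first.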

If necessary, organizations can limit (or even block) access to those websites, although this is a double-edged sword—blocking generative AI tools entirely may simply lead employees to use them on their personal devices where IT has less control. Cutting off access to AI tools entirely also risks stifling innovation, especially for businesses in rapidly modernizing industries. AI itself isn’t the problem here—it’s determining how to use it safely and securely. A strong risk management platform can help organizations better understand the risks and benefits associated with these actions.

It’s also important for security training sessions to include AI awareness training. Even those who use AI on a regular basis don’t always understand what it is, how it works, or what constitutes risky activity. It’s important for every business to have a clearly laid out AI governance plan that details which use cases and solutions are permitted and why, and additional training sessions should be held whenever new tools or use cases are adopted. The training element is essential here: the truth is, most employees won’t read written guidelines, which means organizations need to find other ways to walk workers through the policy, answer their questions, and get them to engage with the training. Regrettably, these policies do need to have teeth. An AI governance policy that exists only on paper isn’t governance at all. Shadow AI creates real risk for the business, and repeated violations need to carry consequences.

Mitigating Trickier Shadow AI Risks

Unfortunately, some shadow AI activity is more difficult to pinpoint. As businesses increasingly embrace AI capabilities, SaaS providers are often enabling AI features by default, forcing users to opt out rather than opt in. That means the scheduling tool, notetaking app, or audio recorder your business uses may include AI functionality you never asked for—and may not even be aware of. The average company now uses more than 100 different SaaS applications, and it can be difficult for IT and security teams to stay on top of which ones provide AI capabilities—especially since new features are added on a regular basis. Employees may use a company messaging service to collaborate, but if that service has AI functions switched on, it may not be a safe place to discuss sensitive or confidential matters.

Addressing that challenge is a bit more complex. Establishing a more stringent vetting process for vendors is an important first step. Different companies have different approaches to privacy and security—some AI providers lay claim to any data shared with them, while others pledge to delete information as soon as it is entered. Businesses should already be thoroughly vetting end-user license agreements (EULAs) before signing on with any partner, but it is increasingly important to identify any language pertaining to data ownership, AI model training, and other potentially risky elements. One way to limit risk is to avoid working with vendors that cannot promise the necessary level of data privacy, as well as vendors with a history of data breaches or careless behavior. It won’t solve the whole problem, but it will help limit the potential impact of shadow AI usage.
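As a rough illustration of that review step, a simple pre-screen can flag EULA passages that mention data ownership or model training so they get a closer look from legal and security reviewers. The phrase list and plain-text input below are assumptions; this is a triage aid, not a substitute for actually reading the agreement.

```python
import re

# Hypothetical phrases worth flagging in vendor agreements; a real list
# would be developed with legal counsel and updated as vendor terms change.
RISK_PHRASES = [
    "model training",
    "train our models",
    "data ownership",
    "license to use your content",
    "retain your data",
    "share with third parties",
]

def flag_risky_clauses(eula_text: str) -> list[str]:
    """Return sentences containing phrases that warrant human review."""
    sentences = re.split(r"(?<=[.!?])\s+", eula_text)
    flagged = []
    for sentence in sentences:
        lowered = sentence.lower()
        if any(phrase in lowered for phrase in RISK_PHRASES):
            flagged.append(sentence.strip())
    return flagged

if __name__ == "__main__":
    with open("vendor_eula.txt") as f:
        for clause in flag_risky_clauses(f.read()):
            print("REVIEW:", clause)
```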

Reducing Exposure to Unnecessary Risks

AI governance remains a work in progress for today’s businesses, but addressing the issue of shadow AI needs to be a priority. Risk and security professionals must work in lockstep to establish visibility into when, where, and how AI tools are being used within the organization and understand the associated risks. By limiting access to the riskiest tools, establishing strong training programs, and vetting partners and vendors based on their approach to AI and data security, today’s businesses can significantly mitigate the dangers posed by shadow AI usage. Shadow AI isn’t going away—it’s a problem that will be with us for quite some time. But the right approach can help organizations avoid leaving themselves exposed to unnecessary risks.
