Navigating the Minefield: A Critical Look at 12 Hidden Threats in Generative AI

- by David Sweenor, Expert in Data Management

Most people are familiar with some generative AI risks, most notably their propensity to hallucinate. But did you know there are at least 11 other risks associated with large language models (LLMs)? In my article The 12 Hidden Risks of ChatGPT and Generative AI, I outlined a series of hazards that apply to generative AI models like OpenAI’s ChatGPT. These risks range from the creation of convincing deepfakes and the spread of disinformation to the perpetuation of biases ingrained in their training data. In addition to the twelve risks, I’ve outlined a series of countermeasures organizations can deploy.

The 12 risks include:

1. Wrong Answers and Confabulations:

Also known as hallucinations, confabulations arise because generative AI models are not 100% accurate, making them unreliable. Remember, foundation models (FMs) and large language models (LLMs) are just big calculators. The only context they have across words, sounds, pixels, and code is that the next word or pixel generated is statistically likely to be the correct one, much like your Netflix or Spotify recommendations. Developers of AI systems can tune these probabilities to strike a balance between not being too repetitive and still being accurate.
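
To make that concrete, here is a minimal sketch of how those probabilities are typically exposed to developers, assuming the OpenAI Python SDK; the model name and prompt are placeholders, and other providers expose similar sampling parameters.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Lower temperature pushes generation toward the statistically most likely
# next tokens (safer but repetitive); higher values add variety at the cost
# of more frequent confabulations.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": "Summarize the key risks of LLM hallucinations."}],
    temperature=0.2,  # 0.0-2.0: how sharply to favor likely tokens
    top_p=1.0,        # nucleus sampling cutoff; 1.0 effectively disables it
)
print(response.choices[0].message.content)
```

No setting eliminates wrong answers; these knobs only trade variety against repetition, which is why outputs still need human review.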

2. Harmful Content:

Since these models were trained on data indiscriminately harvested from the internet, they absorbed the hate speech, pornographic images, detailed instructions for planning attacks, and exploitative content found there. For example, the abhorrent stories of deepfake nude pictures being used to abuse and victimize teenage girls are simply unconscionable.[1] Thankfully, most LLM providers have implemented AI guardrails that help prevent the generation of these vile materials. However, recent research suggests that fine-tuning these models can inadvertently override these safety guardrails, as mentioned in my article Mission AI Possible: Safeguarding Your Business with Self-destructing AI. These risks will always exist unless you’ve built your models from scratch with carefully curated, pristine data.
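
As a simple illustration of what an application-side guardrail can look like, here is a minimal sketch that screens user prompts with OpenAI’s moderation endpoint before forwarding them to a generation model; the function name and accept/reject behavior are my own illustrative choices, not a provider reference implementation.

```python
from openai import OpenAI

client = OpenAI()

def passes_guardrail(text: str) -> bool:
    """Return False when the hosted moderation endpoint flags the text as harmful."""
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

user_prompt = "How do I reset my account password?"  # illustrative input
if passes_guardrail(user_prompt):
    print("Prompt accepted; forwarding to the generation model.")
else:
    print("Prompt rejected by the content guardrail.")
```

A filter like this sits in front of the model; it does nothing about harmful associations baked into the weights themselves, which is why fine-tuning can undo provider-side guardrails.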

3. Perpetuation of Biases and Stereotypes:

Similar to harmful content, LLMs can perpetuate biases and stereotypes, reinforcing societal prejudices and reproducing particular worldviews. The Washington Post’s story This is how AI image generators see the world provides examples of this with image models.[2] However, this is not limited to image generators; it’s embedded in all the different model types. Stanford’s Human-Centered Artificial Intelligence (HAI) institute has also published Demographic Stereotypes in Text-to-Image Generation on these stereotypes.[3]

4. Disinformation and Influence Operations:

Well, I must admit, I’m not looking forward to the next U.S. election cycle. The amount of disinformation on social media and the internet is disheartening, and it poses real risks to information integrity and organizational reputation. Misinformation can erode trust in your brand, sway consumer opinions, and impact financial markets. For instance, a well-coordinated disinformation campaign could falsely implicate your company in unethical practices and spread rapidly through social media and news outlets, with detrimental consequences for the organization. Sadly, there’s no reliable way to detect and mitigate these risks. As mentioned in my article Generative AI’s Powers and Perils: How Biden’s Executive Order is Reshaping the Tech Landscape, there are directives aimed at counteracting deepfakes and ensuring that what you read is authentic. However, techniques such as digital watermarking are still in their infancy and easily circumvented.

5. Proliferation of Conventional and Unconventional Weapons:

Also mentioned in the Biden Administration’s Executive Order (EO) on AI, this risk is being watched closely by civilian organizations and the military alike. Businesses need to worry about how their products or services could be used for nefarious purposes through technology transfers, supply chain complexities, and data sharing. We’ve already seen Nvidia’s GPU chips blocked by export controls; which products and services will follow?[4] In terms of effectiveness, given the proliferation of open-source models and leaks, I’m not sure what impact this will have, but I’ll leave that to the policymakers.

6. Privacy:

Besides hallucinations, this is probably the second most discussed topic related to LLMs. Many of these models were trained on data that includes PII or other sensitive information, and there are two aspects to consider. First, for end users who are not on an enterprise version of services like ChatGPT, the information they enter is not protected and is used to train the models for others, so putting sensitive or private information into the system is not advisable. Second, consider the data used to train the model in the first place. A research team extracted a sizable chunk of ChatGPT’s training data at scale for a couple of hundred dollars.[5] For the latest version of GPT, the technical report states, “GPT-4 is a Transformer-style model pre-trained to predict the next token in a document, using publicly available data (such as internet data) and data licensed from third-party providers… Given the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”[6] Thus, we may never truly understand the actual risks of data privacy when relying on these models.
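
One practical countermeasure for the first concern is to scrub obvious PII before a prompt ever leaves your network. Below is a minimal, hand-rolled sketch; the regex patterns are illustrative only, and production systems typically rely on dedicated PII-detection libraries or services.

```python
import re

# Illustrative patterns only; real deployments typically use dedicated
# PII-detection tooling rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious PII with placeholder tokens before the prompt leaves your network."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com and call 555-867-5309 about the claim."))
# -> Email [EMAIL] and call [PHONE] about the claim.
```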

7. Cybersecurity:

Cybersecurity is an ever-growing threat that demands focused attention and ongoing action. For LLMs like GPT-4, there’s an added dimension to these risks since they effectively lower the cost and skill barriers to creating cyberattacks. For example, bad actors can use generative AI to craft social engineering campaigns or find flaws in existing security tools. Unfortunately, malicious actors are now orchestrating sophisticated cyberattacks at an increased pace and with greater efficiency.

8. Potential for Risky Emergent Behaviors:

Some would argue that as LLMs become more powerful, they’ll crave more power and turn on humans Terminator-style. I’m not convinced; they’re not sentient, just big calculators. However, since these are very large, complex neural networks, I would argue that, similar to what we see with traditional AI (a.k.a. predictive AI), despite our best intentions they could start generating biased decision-making recommendations. This can lead to skewed business insights or unfair customer experiences, affecting the company’s reputation and compliance with regulations. As I argued in my article GenAIOps: Evolving the MLOps Framework, monitoring for this kind of drift is relatively well understood and straightforward when the outputs are numeric. However, when the outputs are words, code, images, audio, and video, to my knowledge there’s no reliable programmatic way of monitoring for drift.
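
To illustrate why the numeric case is comparatively straightforward, here is a minimal sketch of drift monitoring on numeric model outputs using a two-sample Kolmogorov-Smirnov test; the score distributions are synthetic stand-ins for a baseline window and a current window, and the threshold is an arbitrary illustrative choice. Nothing comparably standard exists yet for free-form text, images, or audio.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: scores logged during validation vs. scores from this week.
baseline_scores = np.random.default_rng(0).normal(loc=0.60, scale=0.10, size=5_000)
current_scores = np.random.default_rng(1).normal(loc=0.52, scale=0.12, size=5_000)

# Two-sample Kolmogorov-Smirnov test: has the output distribution shifted?
result = ks_2samp(baseline_scores, current_scores)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic={result.statistic:.3f}, p={result.pvalue:.3g})")
else:
    print("No significant drift detected.")
```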

9. Interactions with Other Systems:

Remember when COVID-19 hit and the world as we knew it changed forever? For many businesses, COVID broke all of their predictive models. As systems become more complex and entangled with one another, minor deviations in expected inputs and outputs can compromise data quality and create wildly unpredictable outcomes, like the proverbial butterfly flapping its wings and causing a hurricane on the other side of the world.

10. Economic Impacts:

McKinsey estimates that generative AI will add up to four trillion dollars to the global economy.[7] Companies should look at this through two lenses. The first is how generative AI will change their market dynamics, essentially how much it will disrupt current business models. The second is the impact on an organization’s employees. Many knowledge workers have a certain amount of trepidation about how generative AI will impact their livelihoods. Goldman Sachs estimates that generative AI will create more jobs than it destroys.[8] This may be true, but what if your job is the one eliminated? Organizations must walk a fine line between remaining competitive in the market and driving operational efficiencies with generative AI. For example, automating key business processes with AI might enhance efficiency but also require a shift in employee roles, demanding new skill sets and potentially impacting job security.

11. Acceleration:

Generative AI is advancing so rapidly that today’s leading providers can quickly be eclipsed and breakthroughs can render tech stack choices outdated almost overnight. The widespread adoption of this technology also presents challenges for governance and legal frameworks. Careful consideration is necessary to avoid operational inefficiencies, legal problems, and public backlash that may undermine the intended benefits.

12. Overreliance:

Generative AI can be both powerful and deceptive. While algorithms provide recommendations, it’s crucial to remember that they don’t possess intent, meaning, or human values. Overreliance on generative AI can dull our own judgment, leaving us vulnerable to errors and missed opportunities. Automated decision-making systems may overlook unique situations that require human insight, leading to flawed strategies. Balancing the benefits and limitations of generative AI is vital to maintaining sound decision-making in business.

In addition to the risks associated with generative AI, I have outlined several countermeasures organizations can deploy.

Please read the full article at The 12 Hidden Risks of ChatGPT and Generative AI.

If you enjoyed this article, please like it, highlight interesting sections, and share comments. Consider following me on Medium and LinkedIn.

If you’re interested in this topic, consider TinyTechGuides’ latest books, including The CIO’s Guide to Adopting Generative AI: Five Keys to Success, Mastering the Modern Data Stack, or Artificial Intelligence: An Executive Guide to Make AI Work for Your Business.