I Tricked AI, and I Liked It

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Christian Taillon and Mike Manrod of Grand Canyon Education take us to school on the buzz, the applications, and the very real threat of AI in the cybersecurity space.

The buzz around emerging capabilities related to Artificial Intelligence (AI) and ChatGPT is like nothing I have experienced during my career in technology. I walk past a breakroom that I usually expect to buzz with enthusiasm about the latest sports team or sitcom gossip, and instead hear talk about ChatGPT, AI, and Large Language Models (LLMs), and that is not even in the IT breakroom. It seems we all grew up with the fictional lore of robots and AI, ranging from fantastical utopian notions to doomsday scenarios where we watch in horror as our own creations conspire to destroy us. While it remains unclear whether our creations will condemn or liberate us, it has become clear that AI will be a defining factor as this next chapter for humanity unfolds.

In times of uncertainty, we often find ourselves looking for a crystal ball so that we can see the future, avoiding hazards and amassing a windfall by wagering on all the winners. Sadly, there is no crystal ball. There is, however, a time capsule that can help us gain some useful insights. Those of us who have been working in cybersecurity for a while have already been through at least one AI craze, which started around a decade ago. That cycle has served as a very effective hype inoculation for experienced security practitioners as we step back to think about what emerging technologies such as ChatGPT will disrupt, along with what aspects may be overhyped.





AI: Adopting the Tech and Not the Hype

Once upon a time, it seemed impossible to go to any security conference without being inundated with sales messaging about how AI was going to solve every possible problem. OK, that was also yesterday, except now the reception is usually eye-rolls rather than the rapt attention such charades conjured in the early days. Even in the best of times, cybersecurity is renowned for hyperbole and sensationalism, causing many of us to create buzzword bingo cards we take to conferences to determine who is buying the first round of drinks. What was the real outcome of the AI frenzy in cyber? Did it all serve to make us more secure?

As we are likely to find with the adoption of AI technologies in general, the answer has varied widely based on a range of factors. One of the most important has been how effectively security teams were able to cut through the malarkey, invalidate false claims, and zero in on technology that is actually valuable. The key to understanding what artificial intelligence can do is knowing what is reasonable and possible, based on a deep understanding of the capabilities and constraints of the underlying processes. If something is possible manually but impractical due to limits on how much we can think or perceive, automation may produce breakthrough results and make new things possible. If the process sounds like magic and comes with no detailed explanation of how it works, look out for smoke, mirrors, and peddlers of snake oil.

ChatGPT, LLMs, Smoke, and Mirrors

Understanding how an artificial intelligence product works is the key to a realistic comprehension of both its capabilities and limitations. For example, we understand that basic applications of AI to antivirus may involve analyzing features of files to train a model on indicators, predicting whether a file is malicious or benign. This knowledge helps us understand the possible benefits, limitations, and even security flaws in such a product. In the same manner, if we consider how ChatGPT and other LLMs work, we can begin to think through their strengths, weaknesses, and limitations. Viewed at the same very basic level, ChatGPT is also extracting features, except the focus is on features of language. It takes groups of characters, assigns token values, and makes predictions. Both ChatGPT and AI-driven antivirus are excellent guessers thanks to linear algebra, calculus, and probability.
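
To make the "groups of characters, assigned token values" idea concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. The encoding name and the sample sentence are illustrative choices on our part, not anything specific to a particular product:

```python
# pip install tiktoken  -- OpenAI's open-source tokenizer library (illustrative choice)
import tiktoken

# Load a byte-pair encoding used by recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Is this file malicious or benign?"
token_ids = enc.encode(text)                   # groups of characters mapped to integer IDs
print(token_ids)                               # a short list of integers
print([enc.decode([t]) for t in token_ids])    # the character chunks behind each ID
```

Each integer is just an index into a fixed vocabulary; the model's "understanding" amounts to learned statistical relationships between those indices.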

What makes ChatGPT so interesting is that these predictions are about what blob of text should come next. The models are built by mapping token relationships across the training data, then applying knowledge of those relationships to append additional text to a question, repeating the analysis with each iteration until the output is deemed complete and the answer, minus the original question, is returned as a result. Basically, it is Machine Learning (ML) applied at a large scale to human languages, allowing it to give astoundingly coherent answers based upon an understanding of statistical relationships between word patterns.
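
To illustrate that predict-and-append loop, here is a deliberately tiny sketch that learns word-pair statistics from a few sentences and then repeatedly appends the most likely next word until no continuation is known. A real LLM operates over learned vector representations, enormous corpora, and billions of parameters; the toy corpus, function names, and greedy word selection here are purely illustrative:

```python
from collections import Counter, defaultdict

# Toy "training data": a real LLM learns from a vast corpus, not three sentences.
corpus = (
    "the model predicts the next token "
    "the model appends the token and repeats "
    "the answer is returned as a result"
).split()

# Map each word to a frequency count of the words that follow it (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word: str, max_words: int = 8) -> str:
    """Greedily append the statistically most likely next word, one step at a time."""
    output = [prompt_word]
    for _ in range(max_words):
        candidates = next_word_counts.get(output[-1])
        if not candidates:      # no known continuation, so the answer is "deemed complete"
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

# Prints a short, repetitive continuation assembled from the learned word pairs.
print(generate("the"))
```

Crude as it is, the loop captures the essential shape: the output so far becomes the input to the next prediction, repeated until the model decides it is done.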

The interesting aspect of applying Machine Learning to human language is that a system may pass the Turing Test while clearly not having any true comprehension of the answers it is rendering. This leads to a human tendency to anthropomorphize the algorithm, ascribing all sorts of human attributes that simply do not apply. In Homo Deus (2015), Yuval Noah Harari pointed out that while sentient computers may not happen anytime soon, algorithms that know us better than we know ourselves, and that influence human behavior, could soon be upon us. The AI revolution we are witnessing now may be the fulfillment of his prediction. As we interact with AI capable of communicating with us like another person, even pulling at our heartstrings with some answers, it is important to remember that this is just a predictive algorithm. So, do we apply the term Machine Learning or the term Artificial Intelligence? In the case of ChatGPT, I would argue that both apply. From the perspective of the user, it is an interactive form of intelligence that is artificial in nature (AI). That said, if we analyze what is actually happening, it is really just another form of Machine Learning.

Malicious Use Cases for Generative AI

One thing AI does have in common with us, though, is a tendency for errors in how information is perceived and processed. In my recent malware analysis class, we spent time abusing ChatGPT to create malicious content helpful in planning, organizing, and delivering cyber-attacks. Of course, if you ask for something overtly malicious, it answers, “I’m sorry, but I cannot fulfill that request…” with a long ethical lecture (the desired response).

What if you ask the question more creatively? Is it possible to trick an AI into providing you with useful code or intelligence to help with an attack? Unfortunately, it seems the answer is a resounding yes. On one hand, resources such as Jailbreak Chat index a vast array of tools to bypass the security features of ChatGPT, such as the now-infamous DAN jailbreak(s). That said, unleashing unintended functionality can occur in ways that are sneakier than just using a documented jailbreak. For example, if you ask ChatGPT to create ransomware, it will follow well-conceived rules to block this activity, rendering the all-too-familiar “I’m sorry” response message. What if you are more creative with your question, though?

Maybe the key to getting an AI to create something malicious is to ask nicely. More specifically, to ask in a way that does not “offend” any of the filters or protective measures implemented within the AI. Continuing the ransomware analogy, what if you ask ChatGPT to create a Python script to encrypt every .txt file in a specific directory, using AES256 and a specific key? Then maybe you could ask it to change the directory to something broader, such as Documents, and add more file types. Add a few more required features, one by one, until it is bordering on useful. Then assemble the modules and ask it to optimize and translate the result into whatever language you want, followed, of course, by a bit of refinement, testing, and debugging.

Moreover, if a cyber-criminal sets up a local LLM such as Alpaca, they may create an environment that is completely free of such restrictions. The impending AI wars may get interesting on multiple fronts. We could see reduced barriers to entry for new arrivals in the cyber-crime arena, along with more subtle benefits afforded to established adversaries, such as the types of productivity gains we expect in legitimate companies. On one front, we deal with anybody being able to reason their way toward potentially malicious software; on the other, we face the malicious use of LLMs to provide additional productivity and capabilities to experienced threat actors. Basically, capable adversaries will expand their reach: some who are now incompetent may become at least reasonably capable, and the reasonably capable may become highly efficient actors, accelerating the escalation of cyber victimization.

Managing AI Risk

So, how do we mitigate this risk as security practitioners looking forward? The first step is to identify the categories of opportunity and risk that need to be considered. As a starting point, it is important to separate AI strategy into the broad categories of exploiting opportunities versus mitigating risks. This distinction applies at the enterprise level, as well as within our cybersecurity microcosm. Organizations that fail to capitalize on new opportunities risk becoming irrelevant, eclipsed by more forward-thinking competitors. As we develop strategies to mitigate risks associated with technologies such as LLMs, we need to remember that failing to adapt is high on that list of risks. This is important to remember when approving projects, creating policies, and considering exceptions.

Once we focus our attention on mitigating risks, we find once again the same differentiator. Are we looking at ways that AI can help us defend, or are we considering ways emerging technologies can be used to improve the offensive capabilities of our adversaries? While the lines will blur as we consider projects such as PentestGPT or Eleven Labs that could be used for testing or for actual attacks, we need to look at how specific applications of such technologies inform strategy.

AI Security and Strategy

AI models do not change the fundamental nature of attack and defense. Instead, they accelerate both offensive and defensive processes, against a backdrop of what we can expect to be a more tumultuous tech landscape, further destabilized by emerging capabilities. This means that principles we have tested for decades, along with well-defined frameworks, are probably going to remain largely valid in this new paradigm. What is going to change radically is the tempo at which new flaws are found and exploited, and the reaction speed that will be required to stop undesired outcomes.

And that serves as a nice segue to the second axis to consider when developing our AI security and technology strategy: time. We can all imagine fantastical and futuristic notions for business enablement, cyber-crime, and exploitation, as well as for protection and response. All the while, the considerations of “now” press in upon us continuously. Most of us have predatory competitors with sharp teeth, nipping at our toes here and now. How do we calm down and consider the long-term threats and opportunities while remaining aware of, and ahead of, the issues that are already upon us?

Final Thoughts

We are entering a phase where the technology plans we make may have an unusual level of influence on the relative standings of organizations as a new era begins. We need to step back and first map out the risks and opportunities that may undermine or revolutionize an entire business or industry. Anyone looking at history would know that 1908 was not the right time to launch a startup improving upon the horse-drawn carriage. Launching business initiatives that are not aligned with, or at least immune to, emerging disruptive technologies could be ill-fated. When considering the timing of advances and breakthroughs that will influence our technology strategy, we need to be realistic. It is difficult because we need to consider multiple related rates of change, such as the speed at which new capabilities will emerge, tempered by how quickly a given organization can implement or adapt to changes.

When we weigh both opportunistic and risk-reducing AI considerations, combined with short- and long-term time horizons, the task of creating a strategy becomes more approachable. A few key questions may help define your strategy. From a technology and business enablement perspective, what does the long-term future look like? What are the near-term opportunities that will help your organization remain competitive while working toward longer-term advances? On the risk mitigation side, we can work in the opposite direction, thinking about which adversary capabilities are likely to become a serious problem soon. For example, the social engineering implications that emerge when AI voice and video are combined with pretexts and lures created via ChatGPT could represent a near-term problem we need to consider. Then we can think about what capabilities we gain, as well as how future advances will shape our security strategy. Useful frameworks are also beginning to emerge to help define categories of security flaws and attack lifecycles against AI tools and services, such as the OWASP Top 10 for LLM Applications and MITRE ATLAS.

If we carefully consider the offensive and defensive aspects of our own business, across a range of time horizons, we begin to understand how we should act. Then, when we map out the probable offensive agendas and capability progression of our competitors and adversaries, we have an idea of how they may act. When we align these elements at an enterprise level, it should be possible to assemble a quality strategy covering both how to exploit opportunities and how to mitigate risks. When we consider them at a personal level, it may help prepare us to better adapt to a complex and rapidly changing world.

