Is Your Security Stack Ready for Generative AI?

Generative AI

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Ashley Leonard of Syxsense examines the current and future state of Generative AI, while posing the question, “Is your security stack ready?”

According to analysts, ChatGPT has the fastest-growing user base in history. Just two months after launch, it reached more than 100 million active users. Not surprisingly, businesses everywhere have joined the AI revolution, with many integrating AI into their products or deploying new AI applications across the organization. In fact, according to a recent survey, 59 percent of companies have purchased or plan to purchase at least one generative AI tool in 2023.

Businesses are eager to unlock AI’s full potential to easily create new content (through text, audio, images, synthetic data, and more), and to some extent, quicken the technological evolution of their existing products and services. While the craze is understandable, there are also concerns about this transformative technology – especially when it comes to cybersecurity. For example, there have already been reports of threat actors abusing generative AI through indirect prompt injections that compromise LLM-integrated applications. And Meta’s 65-billion-parameter language model was also recently leaked, enabling threat actors to carry out more personalized spam and phishing attacks, along with a host of other fraudulent cyber activities.

But misuse of the technology is just one item on a long list of concerns associated with generative AI and cybersecurity. Integrating the technology could also leave your business susceptible to copyright infractions, efficacy issues, employee displacement, and ethical missteps. However, the upside of AI in security is too big to ignore, and when applied responsibly, it can accelerate and enhance your security posture (or offering).

Generative AI: Is Your Security Stack Ready for It?

As you consider integrating generative AI into your security stack, keep these three key areas in mind.

Use generative AI as a starting point rather than an ending point.

The United States Copyright Office recently ruled that only work created by human authorship is eligible for copyright protection, excluding all AI-generated output. The reality is that generative AI models like LLMs are trained on millions of public-domain texts from across the internet. Technically, the model computes over all that text to extrapolate ‘new’ or ‘unique’ content based on an input request.

For IT and security professionals who want to leverage generative AI, it’s likely best to use AI as a starting point rather than an endpoint. For example, you might use ChatGPT to generate sample code. Treat that output as inspiration for how to approach a problem rather than as completed code that you can claim as intellectual property. This sidesteps the ownership issue and prevents you from exposing sensitive information about your business that might lead to further compromises.
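One practical safeguard when prompting an external model is to scrub anything sensitive before it leaves your environment. The helper below is a minimal sketch (the function name and patterns are hypothetical and far from exhaustive); it masks obvious secrets like credentials and internal IP addresses in a prompt:

```python
import re

# Illustrative patterns only -- extend these for your own environment.
SECRET_PATTERNS = [
    # key=value style credentials (api_key, token, password)
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    # bare IPv4 addresses, e.g. internal hosts
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<REDACTED_IP>"),
]

def scrub_prompt(prompt: str) -> str:
    """Return a copy of the prompt with likely secrets masked."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

Pairing a scrubber like this with human review of any returned code keeps the AI output firmly in the "starting point" role.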

The technology is fallible, so have a quality assurance plan in place.

By now, we’ve all heard of ChatGPT’s hallucinations, and there are hundreds of other stories about chatbots going rogue, turning racist, or spreading misinformation. In fact, AI has a long history of inspiring dystopian visions of its application in society (facial recognition, automated decision-making, self-driving cars).

There is limited room for inaccuracies when it comes to the security of your business. AI lapses, such as false-positive alerts, the blocking of otherwise valid and important traffic, or a mistaken AI-generated configuration, can mean billions of dollars in lost revenue or expenses months after the snafu. Put simply, the technology isn’t perfect, so it is crucial to have a backup plan in the case of AI failure and a recovery plan to withstand potentially damaging fallout from your brand-new chatbot. Be sure to ask vendors with AI-enabled solutions for details on their AI quality assurance plan. Be aware that some human-AI collaboration will also be necessary to monitor for any AI-generated risks or shortcomings in your security stack.
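The backup plan can be as simple as a deterministic fallback wrapped around the AI component. The sketch below (all names and the confidence threshold are hypothetical assumptions) returns a rule-based verdict whenever the model errors out or reports low confidence, so a single AI lapse never blocks valid traffic on its own:

```python
from typing import Callable

CONFIDENCE_FLOOR = 0.9  # assumed threshold; tune for your stack

def classify_with_fallback(event: dict,
                           ai_model: Callable[[dict], tuple[str, float]],
                           rule_based: Callable[[dict], str]) -> str:
    """Prefer the AI verdict, but defer to deterministic rules on failure."""
    try:
        verdict, confidence = ai_model(event)
    except Exception:
        return rule_based(event)      # AI failure: fall back to the backup plan
    if confidence < CONFIDENCE_FLOOR:
        return rule_based(event)      # low confidence: defer to rules
    return verdict
```

The design choice here is conservative: the AI only gets the final say when it is both available and confident, which is exactly the kind of guardrail worth asking vendors about.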

Promote more human-AI collaboration instead of replacement.

The narrative around AI replacing jobs isn’t unwarranted. As the technology takes over more routine security tasks, such as reviewing security logs for anomalies, monitoring operations, or mitigating threats, there is understandable fear about it replacing human expertise. However, consider that these tasks are probably better suited to a machine that can process millions of inputs for hours on end with consistent accuracy. The reality is that your security expert just got back hours in their day to prioritize high-risk remediation. Ideally, this shift wouldn’t remove the need for experts but would promote and require greater human-AI partnership to guarantee quality assurance and extend the capabilities of experts.
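For a sense of scale, the kind of routine log review being handed to machines can start with simple baseline statistics. This sketch (the function name and threshold are illustrative assumptions, not any vendor's feature) flags hours whose failed-login counts sit far above the recent average:

```python
import statistics

def anomalous_hours(hourly_failures: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose counts exceed the mean by z_threshold std-devs."""
    mean = statistics.mean(hourly_failures)
    stdev = statistics.pstdev(hourly_failures)
    if stdev == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, count in enumerate(hourly_failures)
            if (count - mean) / stdev > z_threshold]
```

In practice, the hours flagged this way would be queued for a human analyst, which is exactly the human-AI partnership described above: the machine does the tireless scanning, the expert does the judgment.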

The rise of generative AI also means that organizations should brace for an onslaught of AI-enabled attacks. In a matter of months, IT and security teams will be overwhelmed with synthetic identity fraud courtesy of deepfakes, along with more convincing and personalized phishing emails, text messages, and even voicemail messages. Soon we can expect polymorphic malware and crafted spam messages that are difficult for antivirus software or spam filters to detect; enhanced password hacks; and the poisoning of data used to train models. The next cybersecurity milestone will be our ability to quickly identify and successfully counteract AI-enabled attacks, and having the right tools and expertise will be the game-changer.

Final Thoughts on Generative AI

Amidst the craze to integrate generative AI across your business, try not to move fast and break things. It’s important to come to terms with the shortcomings of AI and assess how it could compromise your business if used without proper oversight and planning. But there is no need to start from scratch. Leveraging these recommendations to create a solid security strategy that fits your business’s security priorities will give you the head start needed when integrating generative AI into your security stack.
