
AI in the SOC: Should You Hire a Bot?


Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Steve Benton of Anomali takes a closer look at AI in the SOC and asks the burning question: “Should you hire a bot?”

The promise of AI has inspired everyone, and as a result we’ve seen consumers and enterprises alike rush to adopt AI-powered tools and gadgets. CISOs have had little time to think about how best to use AI, educate their employees about its benefits and risks, or create and implement the proper security guardrails and policies.

As a former CSO for a large global organization, I understand the scale of the challenge. Yet implementing a blanket company ban on the technology is not the answer. Instead of becoming the “Ministry of No”, CISOs need to be the “Ministry of How”, which begins by treating AI as a potential new hire and making sure it is the right fit for your organization.

Create a Job Description for Your AI-powered New Hire

Today we are seeing the rise of hybrid SOCs, where AI-powered analysts work alongside human analysts. Unfortunately, many organizations create their specifications for AI on the fly, which increases risk and reduces the value of the investment. Instead, think of that AI-powered tool as a new person joining the team, and first put together a job description that answers the following questions:

  • What do you expect AI to do?
  • How will it operate?
  • How will AI work with other human analysts and/or other technology?
  • What will it allow my analysts to do better?
  • What skills and experience does it need to have to be effective in the organization?
  • How do you plan to handle privacy?
  • How are you going to train it?
  • How are you going to look after it?
  • If something goes wrong, how will you rebuild it?

Now that you understand the role the technology will play in your SOC, test it. When CISOs recruit human analysts into their SOCs, they often give them actual technical exercises to prove they will be an asset to the security operation of that business. Why wouldn’t you do the same before adopting AI into your SOC? Once the AI technology is adopted, treat it like an employee: gather feedback on its performance and identify any further training it might need. Be sure to put an effective feedback loop in place.
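One way to run such a “technical exercise” is to replay alerts your analysts have already triaged and compare the candidate tool’s verdicts against theirs. The sketch below is purely illustrative: the function names, alert fields, and the naive severity rule standing in for the AI are all assumptions, not any vendor’s API.

```python
# Hypothetical pre-adoption exercise: score a candidate triage function
# against alerts human analysts have already labeled.

def evaluate_candidate(triage_fn, labeled_alerts):
    """Return (precision, recall) for the "malicious" class.

    labeled_alerts: list of (alert_dict, true_verdict) pairs, where
    verdicts are "malicious" or "benign".
    """
    tp = fp = fn = 0
    for alert, truth in labeled_alerts:
        verdict = triage_fn(alert)
        if verdict == "malicious" and truth == "malicious":
            tp += 1
        elif verdict == "malicious" and truth == "benign":
            fp += 1
        elif verdict == "benign" and truth == "malicious":
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Trivial stand-in for the AI under evaluation (an assumption, not a
# real product): flag anything with severity >= 7.
def naive_triage(alert):
    return "malicious" if alert.get("severity", 0) >= 7 else "benign"

# Historical alerts with the analysts' ground-truth verdicts.
history = [
    ({"severity": 9}, "malicious"),
    ({"severity": 8}, "malicious"),
    ({"severity": 3}, "benign"),
    ({"severity": 7}, "benign"),     # false positive for the naive rule
    ({"severity": 2}, "malicious"),  # low-severity threat the rule misses
]

precision, recall = evaluate_candidate(naive_triage, history)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Running the same harness periodically after adoption gives you the feedback loop the article describes: declining precision or recall on fresh labeled alerts signals a retraining need, just as a performance review would for a human analyst.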

AI Will Not Replace Humans

While AI can help analysts manage and prioritize the alert “merry-go-round” and other tedious tasks, that doesn’t mean you should replace human analysts with racks of machines. AI frees human analysts to tackle bigger problems, get out ahead of security threats, and make the security posture more dynamic so it can flex with the threats coming toward the organization. What AI adds is the ability to process vast amounts of information beyond human capacity.

There is no doubt that every SOC is constrained by the limits of the data it can humanly process. The rest, the ‘Dark Data’, holds the full picture of threats, which means SOCs are only solving what they can see. But with advances in big data and AI, actionable visibility into this Dark Data, combined with the latest threat intelligence, is possible at machine speed. The insight from this is a game-changer.

Yet no matter how good we may think AI is, it is not a replacement for a human being. A human can think outside the box that has been defined, intuit, and make leaps that an AI-powered analyst might not. Keep in mind that the bad guys will continue to be bad guys, i.e. bad humans, and they will of course use AI to help them flex, morph, and modify their attacks. But our adversaries will always have humans involved in their offense, so we should not disadvantage ourselves by taking humans out of the equation on the defensive team.

AI should never be in a position to unilaterally affect operations, especially those that involve other human beings and potentially their safety. It needs to be used alongside humans, and humans need to stay involved in what is happening, including any key decisions it proposes, unless you have completely satisfied yourselves that there is no threat to the organization or any of its employees or customers.

The Future of AI in the SOC 

It is still early days for fully understanding all the possible use cases of AI in the SOC. I expect we will learn more as we see greater collaboration among AI technology providers, security practitioners, and customers who are using the tools to defend against persistent, fast-changing adversaries. I’m optimistic; I already see a lot of positivity in how AI is earning its place in the SOC and becoming more applicable and trustworthy.
