AI and ML Tools: Alleviating Workforce Burnout Across Cybersecurity
Solutions Review’s Expert Insights Series is a collection of contributed articles written by industry experts in enterprise software categories. José López of Mimecast examines how the adoption of AI and ML tools can help alleviate the strains of workforce burnout in cybersecurity.
It stands to reason that as cyber-attacks rapidly evolve in volume and complexity, so should the human workforce tasked with mitigating risk and combatting email-borne attacks. Unfortunately, that positive correlation between threats and defenders hasn’t held for several years. The talent shortage in cybersecurity remains a key point of vulnerability: the (ISC)² 2022 Cybersecurity Workforce Study found that the global skills gap increased by 26 percent from 2021 to 2022, with 3.4 million additional employees needed to secure business-critical assets effectively. As a result, only one in eight IT leaders believe they have fully resourced teams with enough workers to execute on C-Suite cybersecurity priorities.
To compound the problem, the skills gap’s contributors and consequences are somewhat cyclical in nature. Vacant positions, heavier workloads, and burnout take a toll on current employees while also discouraging prospective security and IT professionals from joining the industry. The possibility of a widening gap looms as many cyber professionals are reaching a breaking point. Mimecast’s 2022 State of Ransomware Readiness Report found that one-third of cyber employees are considering leaving their role in the next two years due to stress or burnout.
While there isn’t a one-size-fits-all approach to alleviating cybersecurity’s multi-faceted skills shortage, the integrated adoption of artificial intelligence (AI) and machine learning (ML) tools can help organizations narrow the gap. Leveraging AI and ML security tools enables organizations to offset critical workforce challenges by automating repetitive tasks, streamlining human workflows, and driving higher levels of operational efficiency, allowing strained security teams to do more with less.
Increased Speed, Accuracy and Threat Detection with Automation
The positive impact of AI and ML technology is clear: Mimecast’s 2022 State of Email Security Report found that more than half of companies leveraging AI and ML experienced increased accuracy of their threat detection. IBM’s 2022 Cost of a Data Breach Report found that organizations that had a fully deployed AI and automation program were able to identify and contain a breach 28 days faster than those that didn’t, saving them an average of $3.05 million in costs.
As a result of the technology’s efficacy, enterprise spending on AI-powered cybersecurity is expected to grow at a compound annual growth rate of 27 percent through 2027, reaching a total market value of $46 billion. The specific value of AI and ML tools for thinly stretched security teams is varied. AI is able to process, analyze, and classify large amounts of data quickly, achieving a depth of actionable threat intelligence that would otherwise be impossible. This enhances response efficiency, productivity, and scale for leaner teams, freeing up time for them to focus on high-level responsibilities that have a more direct impact on risk mitigation.
An AI and automation study conducted by the IBM Institute for Business Value found the following five applications to have the greatest impact on organizations’ cybersecurity operations:
- Triage of Tier 1 threats
- Detection of zero-day attacks and threats
- Prediction of future threats
- Reduction of false positives and noise
- Correlation of user behavior with threat indicators
And that’s just a small sample. AI and ML can also be utilized for threat simulations, data lifecycle management, endpoint discovery and asset management, and more. When coupled with natural language processing tools such as autoencoders and language models, or classical classifiers like random forests, AI and ML can also help detect anomalies in the writing style and communication patterns of inbound emails, blocking messages and alerting employees accordingly.
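To make that concrete, below is a minimal sketch of what such a text-based email filter might look like, using scikit-learn’s TF-IDF vectorizer and a random forest classifier. The toy email corpus, labels, and alert threshold are illustrative assumptions only, not Mimecast’s implementation; a production system would train on a large labeled mail corpus and combine text signals with sender and behavioral features.

```python
# Minimal sketch: flagging suspicious inbound email text with a random forest
# over TF-IDF features. The inline corpus and threshold are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = suspicious, 0 = legitimate (illustrative examples).
emails = [
    "Urgent: verify your account credentials immediately to avoid suspension",
    "Wire transfer required today, reply with banking details",
    "Attached is the agenda for Thursday's project review meeting",
    "Thanks for the update, let's sync on the quarterly numbers next week",
]
labels = [1, 1, 0, 0]

# TF-IDF captures word and phrase usage patterns; the forest learns which
# patterns correlate with known-malicious messages.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(emails, labels)

# Score a new inbound message and alert if the predicted probability of the
# "suspicious" class crosses a tunable threshold.
incoming = "Please confirm your password now, your mailbox will be locked"
suspicion = model.predict_proba([incoming])[0][1]
if suspicion > 0.5:
    print(f"Flag for review (score={suspicion:.2f})")
else:
    print(f"Deliver normally (score={suspicion:.2f})")
```

The same pipeline shape applies if the classifier is swapped for an autoencoder or language-model-based anomaly detector; the value for lean teams is that scoring and alerting happen automatically, with analysts reviewing only the messages that cross the threshold.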
Assessing Both Sides of the Dividing Line
AI as an emerging solution is not without its nuances. Generally speaking, well-designed AI systems are not set-it-and-forget-it models. The human element is fundamental in testing and monitoring AI, which, while a remedy for humanity’s robots-are-taking-our-jobs doom, raises its own challenges. Although AI can greatly augment human labor, the systems still require human oversight. Far less oversight than legacy systems, yes, but still employee participation that requires a certain level of training and upskilling.
Which brings us back to our original problem: SambaNova research found that while more than half (59 percent) of IT leaders had the budget to hire additional staff for AI teams, 82 percent found hiring to be a challenge. This means that overwhelmed CISOs and security teams will need to be smart in seeking out AI-powered security vendors.
When introducing AI, they should consider the following:
- Measurable business impact: How will the technology deliver ROI not just in security initiatives, but larger organizational objectives?
- Consolidation: Will the systems help lessen complexity, consolidate tech stacks, and streamline responsibilities for employees?
- Feasibility: Is it feasible to implement these systems successfully given limited headcount and/or resources?
The goal of AI and ML adoption should be to drive simplicity and ease of use, not introduce further complexity. For AI-enabled tools to help offset labor shortages, security teams will need to keep that goal in mind. Under the right circumstances, these emerging technologies can take a large weight off employees’ shoulders, helping reduce burnout and churn while driving stronger threat monitoring and prevention across the board.