
Two Sides of the AI Coin: Balancing Innovation and Business Continuity


Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Lee Waskevich of ePlus Technology notes that adopting AI requires striking a balance between innovation and business continuity.

There’s no question that artificial intelligence (AI) is radically transforming business and society as we know it, driving unprecedented innovation and unleashing creativity across virtually every sector, from healthcare to retail to manufacturing. Further, the generative AI market, which is expected to grow at an annual rate of 24.4 percent from 2023 to 2030, is just beginning to scratch the surface of what’s possible.

However, while the new wave of AI continues to make the previously impossible possible, it also brings a host of new risks and security challenges. Landing in the right place with AI starts with striking a delicate balance between accelerating innovation and minimizing threats.


Two Sides of the AI Coin: Balancing Innovation and Business Continuity


Three keys to establishing that balance between innovation and business continuity include:

  1. Don’t overlook the most common security challenges posed by AI technology.

With generative AI, hackers and other bad actors can now create far more sophisticated and intricate versions of common cyber-attacks, such as email phishing, malware, ransomware, and social engineering. With the ability to generate perfectly worded, convincing, and realistic emails at massive scale, the old tactic of spotting phishing by peculiarities in language or tone is fast becoming obsolete. In fact, according to a Palo Alto Networks report on malware trends, threat actors are increasingly taking advantage of interest in GenAI programs, driving a 910 percent increase in monthly registrations for domains, both benign and malicious, related to ChatGPT.

For business leaders, it’s essential to go back to the basics. This means putting mechanisms in place to protect against the risk points associated with the use of AI in potential attacks. Creating a culture of awareness and education in the workplace and training end-users on security standards and protocols is an obvious but critical block-and-tackle measure that can help ensure you’re not taking two steps forward only to take one step back.
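
As one concrete illustration of that block-and-tackle layer, the sketch below flags sender domains that closely resemble well-known GenAI brands, a simple heuristic aimed at the surge in lookalike domain registrations noted above. The brand list and the 0.8 similarity threshold are illustrative assumptions, not recommendations from this article:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist; a real control would draw on a mail gateway's threat feed.
KNOWN_GENAI_DOMAINS = ["openai.com", "chatgpt.com", "anthropic.com"]

def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that resemble, but do not exactly match, a known GenAI domain."""
    domain = sender_domain.lower().strip()
    if domain in KNOWN_GENAI_DOMAINS:
        return False  # exact match: the legitimate domain itself
    return any(SequenceMatcher(None, domain, known).ratio() >= threshold
               for known in KNOWN_GENAI_DOMAINS)

print(is_lookalike("chatgtp.com"))  # True: transposed letters, a classic lookalike
print(is_lookalike("example.com"))  # False: not similar to any listed brand
```

A production control would pair a check like this with domain-age lookups and gateway filtering rather than relying on string similarity alone.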

  2. Embrace the pros, but also take the time to understand (and plan for) the potential cons.

Companies are well on their way to understanding the many helpful applications and uses of AI, including maximizing productivity, automating tasks, enhancing customer experiences, developing products more efficiently, and deploying them to market faster.

Likewise, they are also learning how much potential risk is involved and what those risks look like. For example, 11 percent of the data employees input into ChatGPT is currently classified as confidential, increasing the risk of a data breach or even identity theft. Further, GenAI projects are linked to data loss and data misuse, especially in sectors experimenting with GenAI to eliminate repetitive, time-consuming tasks.
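
One lightweight control implied by that statistic is screening prompts before they leave the organization. The sketch below is a minimal example under stated assumptions: the regex patterns (a made-up API-key shape, a US SSN shape, and an explicit "confidential" marking) are illustrative stand-ins for whatever a real data-loss-prevention policy would define:

```python
import re

# Illustrative patterns only; a real DLP policy would be far more complete.
CONFIDENTIAL_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),  # hypothetical key shape
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # US SSN shape
    "marking": re.compile(r"\bconfidential\b", re.IGNORECASE),    # explicit labels
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any confidential patterns found in the prompt."""
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(prompt)]

hits = screen_prompt("Summarize this CONFIDENTIAL memo; customer SSN 123-45-6789")
if hits:
    print(f"Blocked before submission: matched {hits}")  # matched ['ssn', 'marking']
```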

Driving long-term success means first understanding whether and how your company is already utilizing AI, so that cybersecurity can be embedded across every touchpoint, from the creation of algorithms to the data used to train them. You can’t protect what you can’t identify. Building security protocols into your landscape can help ensure that you have the right procedures in place to protect customer data, while also safeguarding the company’s most proprietary information from internal and external threats.
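
Since you can’t protect what you can’t identify, a natural first step is a simple inventory of AI touchpoints. The sketch below shows one possible record shape; the fields and example entries are assumptions for illustration, not a prescribed schema (the owner field also anticipates the ownership question in the next section):

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str         # system or project name
    owner: str        # accountable team: answers "who owns the AI function?"
    sensitivity: str  # e.g. "public", "internal", "confidential"
    touchpoints: list[str] = field(default_factory=list)  # where security is embedded

# Hypothetical entries for illustration.
inventory = [
    AIAsset("support-chatbot", "IT", "confidential",
            ["training data", "prompt logs", "model API"]),
    AIAsset("demand-forecast", "Supply Chain BU", "internal",
            ["training data", "feature pipeline"]),
]

# Review the most sensitive assets first.
for asset in sorted(inventory, key=lambda a: a.sensitivity != "confidential"):
    print(f"{asset.name}: owner={asset.owner}, sensitivity={asset.sensitivity}")
```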

  3. Strive for AI alignment across the entire enterprise.

A common question for leaders across an organization is: Who owns the AI function? Is it the IT team, the C-suite, the cyber team, or some combination of them all? Many times, it’s the IT team. However, it is not uncommon for organizations in various industries, like defense, healthcare, or manufacturing, to have AI-driven activities that are siloed and led by individual business units or specialty teams that are focused on enabling specific research initiatives or projects.

As a result, the AI function can become disjointed and siloed across business functions, creating gaps in the security fabric and increasing risk. But with a cybersecurity attack occurring every 39 seconds, there’s no time for silos. It’s vital for risk managers to understand not only who has oversight from a cybersecurity perspective, but also who has the responsibility to manage and monitor risk.

On the heels of the SEC’s newly released cybersecurity disclosure regulations, the ideal scenario is for companies to already have close alignment among all stakeholders involved in any initiative that uses AI tools, along with a shared understanding of the organization’s overall cybersecurity governance, architecture, and risk management and reporting processes.

Conclusion

Artificial intelligence is an exciting new space that invites innovation and opportunity into nearly every workstream and industry. However, it doesn’t come without risk. The advancements AI ushers in demand that cybersecurity leaders find the right balance, one that offers safety alongside innovation.

