Generative AI and the Workplace
Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Gil Pekelman of Atera shines a light on the shadow IT created by generative AI in the workplace and the importance of safely adopting it.
It is abundantly clear to anyone reading the news or working an office job that 2023 is the year of generative AI. These tools have transformed the way we work, enabling users to get instant answers, draft professional statements in seconds, and access much-needed information at the click of a button.
While access to this new technology has the potential to transform how we work, learn, and live, it also poses significant challenges for companies and their IT departments that must be addressed. The majority (67 percent) of senior IT leaders are prioritizing generative AI for their business within the next 18 months, and one-third (33 percent) name it a top priority. Yet 79 percent of senior IT leaders are concerned that these technologies introduce security risks.
As with any new solution implemented in a corporate setting, rollout demands a heavy time investment from the IT professionals supporting it. Security has always been a delicate dance of balancing user experience with effectiveness: if a tool or process is too clunky, people won't use it, and if it's too basic or "easy to use," it's often not secure enough. IT teams need to strike that balance.
Let’s examine some of the impacts of generative AI in the workplace and how IT teams and professionals can implement it safely.
Gen AI Generates New Opportunities for Dangerous Shadow IT
Often, when new workplace tools offer free tiers and trend in the news the way ChatGPT has, employees bypass IT procurement processes to use their preferred tools, a phenomenon known as shadow IT. Those procurement processes exist for a reason: employees downloading and using unvetted programs can introduce significant security risks.
When we look at generative AI tools, and ChatGPT in particular, some companies are banning use of the tool altogether. Samsung and Apple recently restricted employee usage of ChatGPT completely because of the danger of data leakage. In this case, the companies worry that the large language model (LLM) underlying ChatGPT will use data entered into the system to learn and expand its own capabilities, potentially exposing confidential information.
But shadow IT raises more issues with generative AI than data leakage alone. It can also leave you more vulnerable to ransomware attacks and data breaches: if your IT team doesn't know an app is in use in the corporate environment, it can't properly monitor and secure it. In the current tech ecosystem, IT teams already struggle to handle the mounting number of support tickets, so ensuring that each and every employee uses only approved apps and tools is a near-impossible addition to their workload.
According to 54 percent of IT professionals, companies must enact enhanced security measures to protect against new threats before they can use generative AI successfully. Without the proper safeguards in place, these risks span all of the standard cyber threats (ransomware, phishing, IP theft) and more.
To Stay Safe, Properly Evaluate the Generative AI Tool and Its Parent Company
While the security risks of generative AI tools range from data breaches to identity theft to poor security in the AI app itself, there are some quick, actionable steps companies can follow to reduce these risks.
For example, to safeguard against unauthorized access or malicious use of data, it’s important to evaluate any app’s reputation. That being said, don’t just evaluate the app on its own. Look into its parent company, as well as its track record with other tools and services. Only enable employees to use generative AI tools created by companies you know and trust.
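One lightweight way to operationalize this vetting, sketched below with hypothetical tool names, vendors, and helper functions (none of them from a real product), is to keep an explicit IT-approved allowlist of generative AI tools and check employee requests against it:

```python
# Hypothetical allowlist check for approved generative AI tools.
# Tool names, vendors, and policy structure are illustrative only.

APPROVED_AI_TOOLS = {
    # tool name -> vendor, so IT can track the parent company as well
    "azure-openai": "Microsoft",
    "internal-chatbot": "Acme Corp",  # placeholder in-house tool
}

def is_tool_approved(tool_name: str) -> bool:
    """Return True only if the tool is on the IT-approved allowlist."""
    return tool_name.lower() in APPROVED_AI_TOOLS

def request_tool(tool_name: str) -> str:
    """Route an employee's tool request: approve it, or flag it for IT review."""
    key = tool_name.lower()
    if key in APPROVED_AI_TOOLS:
        return f"approved: {tool_name} (vendor: {APPROVED_AI_TOOLS[key]})"
    return f"blocked: {tool_name} is not vetted; submit it for IT review"
```

A check like this doesn't replace monitoring, but it gives employees a clear, fast answer about what they may use, which is exactly what reduces the temptation to go around IT.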
For example, with Azure OpenAI, as opposed to unauthorized or potentially unsafe generative AI tools, prompts (inputs), completions (outputs), and training data are not made available to other customers or used to improve OpenAI models; they remain accessible only to you and your company.
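As a sketch of what routing employees through an approved service can look like, here is a minimal Azure OpenAI client setup using the official `openai` Python SDK. The endpoint, deployment name, and API version are placeholders you would replace with your own Azure resource's values; this is a configuration sketch, not a drop-in integration.

```python
# Minimal Azure OpenAI setup (openai>=1.0). The endpoint, key variable,
# deployment name, and api_version below are placeholders for your own
# Azure resource's values.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],  # keep keys out of source code
    api_version="2024-02-01",  # use a version your resource supports
)

# Requests go to your company's own Azure resource; prompts and completions
# are not shared with other customers or used to improve OpenAI models.
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # placeholder: your deployed model name
    messages=[{"role": "user", "content": "Summarize our security policy."}],
)
print(response.choices[0].message.content)
```

Reading the key from an environment variable rather than hard-coding it is the same vetting discipline applied to your own code: secrets stay out of the repository and out of the prompt history.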
Additionally, it's critical to regularly assess the data privacy policies, encryption methods, and access controls of the software vendors you choose to work with. Regulatory requirements change frequently, both globally and domestically, and you could find yourself in hot water if your supply chain isn't keeping pace. When assessing potential AI vendors and partners, don't be distracted by all of the exciting new features at your fingertips; ask the company representative about their data privacy and security practices before you get past the initial discovery call.
Crafting the Future: Concluding Thoughts on Generative AI in the Workplace
Generative AI has the power to make all of our lives much easier, but only if it is implemented correctly. If you are thinking of adopting generative AI in any capacity at your workplace, promote the responsible and ethical use of these tools among employees, and make sure they understand that their use is encouraged, so long as the tools are vetted and pre-approved.
Generative AI tools like ChatGPT offer immense potential for transforming the way we work, communicate, and innovate. Leaders in the ever-evolving tech landscape have a responsibility to strike a balance between harnessing the benefits of these tools and mitigating the risks they bring.