Risks and Challenges of AI: Compliance, Copyright, Ethics & Correctness

- by Mark Diamond, Expert in Artificial Intelligence

Emerging AI Compliance

Generative AI’s explosive adoption has been met with a swift response from regulators. Nearly every week, governments around the world propose new restrictions on how and where the technology can be used. Aiming to set the global standard, European regulators have announced restrictions on how AI systems may use personal information, along with broader safeguards. In the U.S., states are limiting how companies can use AI to make financial decisions such as loan approvals. (At least one EU data protection authority has imposed similar limits.) The Biden administration issued an executive order establishing new standards for AI safety and security, including protections for privacy and civil rights. It can be argued that regulators are racing one another to write the rules in hopes of setting the global regulatory standard. These regulations are just the beginning: we expect many more countries and states to adopt rules limiting AI this year, creating a rushed and messy regulatory environment.

Copyright and IP Legal Uncertainty

In addition to new AI regulations, there are significant copyright and intellectual property concerns around AI. Some “closed” large language models from commercial vendors are suspected of having been trained on copyrighted data, and because the systems are closed, the training data cannot be inspected. In December 2023, the New York Times sued both OpenAI and Microsoft, claiming that their generative AI products were built on, and infringe, the Times’ copyrighted content. Other generative AI vendors, such as Adobe, have gone out of their way to ensure that their products are trained exclusively on fully licensed data, even offering users indemnification against copyright infringement claims. Ultimately, companies will have to determine their own level of risk tolerance.

The courts are just beginning to address these challenges, and it may take years before case law provides instructive guidance. In the meantime, companies deploying AI-assisted applications need to ensure that these systems do not ingest or expose proprietary corporate information or trade secrets. In other words, beyond the risk of misusing others’ IP, AI users must also be careful not to compromise their own IP or other sensitive data. The Economist reported last year, for example, that Samsung employees unintentionally leaked proprietary source code via ChatGPT. An unprotected disclosure of a trade secret to a third party, including through an AI-assisted application, vitiates the information’s status as a trade secret.

Ethical Use of AI

AI systems absorb the biases present in the training data used to build them, and those biases can lead to unethical outcomes. For example, if an HR application “teaches” an AI system to screen job candidates against historical hiring profiles that do not reflect the company’s diversity goals, the system inherits an unintended bias: fed predominantly white male employees as the basis of the “ideal” hire, it may recommend only white male candidates, as the sketch below illustrates. Beyond bias, companies rushing to deploy AI also need to ensure they are following their other established ethical practices, including transparency and accountability.
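To make that mechanism concrete, here is a minimal, hypothetical sketch (synthetic data, illustrative feature names, scikit-learn assumed) of how a model trained on skewed historical hiring decisions reproduces the skew:

```python
# Hypothetical illustration: a model trained on biased historical
# hiring labels learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical hires": qualification is independent of
# demographic group, but past decisions favored group 1.
group = rng.integers(0, 2, n)          # protected attribute (0 or 1)
score = rng.normal(0, 1, n)            # job-relevant qualification
hired = (score + 1.5 * group + rng.normal(0, 0.5, n)) > 1  # biased labels

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates, differing only in group membership:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate receives a much higher "hire" probability,
# purely because the training labels encoded past bias.
```

Because the protected attribute predicted the historical outcome, the model learns to rely on it; the remedy lies in governing the training data and features, not just the algorithm.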

AI Correctness, Accuracy and Safety

Finally, “naïve” AI systems want to please and can generate false information: generative AI constructs its answers rather than retrieving them. In several recent cases, AI systems “constructed” fake legal citations that were then submitted to the court. AI systems can also give incorrect or unsafe advice. An eating disorder website recently added a chatbot to answer visitors’ questions, only to discover that it was suggesting to potentially anorexic visitors that they cut their daily intake by 500 to 1,000 calories. It can be argued that these failures reflect poor AI governance rather than any inherent flaw in AI: a legal intern would never be allowed to submit a brief without proper review, and the same review processes need to apply to AI output.

Despite these risks and concerns, IT and legal departments will face tremendous pressure in 2024 to deploy AI applications. Organizations that sit on the sidelines may lose a competitive advantage; waiting until the compliance and risk environment is better understood will not be an option for many.

Read the full white paper, Creating an AI Governance Program, on Insight Jam now.