
AI Risks and Risk Assessments
Organizations that develop or use AI solutions expose themselves to unique technology and human-related risks, challenges, and security and privacy concerns. That makes it essential for companies to complete risk assessments, define risk acceptance measures, and establish an approach for responding to risks. Yet despite the perceived volume and complexity of these risks, most organizations describe their risk management processes as immature.
We need to fix that!
First, we need to define AI risks so we know what we’re up against. According to NIST AI RMF 1.0, potential harms from AI solutions fall into three buckets:
- Harm to people (e.g., an individual’s civil liberties, safety, or economic opportunity)
- Harm to an organization (e.g., business operations, security breaches, reputational damage)
- Harm to an ecosystem (e.g., interconnected supply chains or the global financial system)
Building trustworthy AI systems and using them responsibly can help mitigate these risks, but that doesn’t mean you can skip the risk assessment.
Risk Assessments
Risk assessments should be performed on a regular basis and whenever AI solutions change, which could be often. The good news: if the organization already has a solid risk assessment process in place, AI can be integrated into that overall process. However, assessing AI solutions comes with some unique considerations, including:
- Measuring AI risk and trustworthiness characteristics
- Identifying approaches for tracking AI risks
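One lightweight way to track AI risks is a risk register with likelihood-times-impact scoring. The sketch below is a hypothetical illustration (the risk names, scales, and scoring rule are my assumptions, not part of NIST AI RMF 1.0):

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) to 5 (severe) -- assumed scale

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring
        return self.likelihood * self.impact

register = [
    AIRisk("Training-data bias", likelihood=4, impact=4),
    AIRisk("Model drift after deployment", likelihood=3, impact=3),
]

# Review the highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: {risk.score}")
```

A real register would also capture owners, controls, and review dates; the point is simply to make AI risks measurable and trackable over time.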
Risk Acceptance Measures
Risk Appetite and Tolerance are two common measures that are defined within the risk management process. Risk appetite is the amount of risk an organization is willing to take and risk tolerance is the acceptable deviation from the risk appetite.
What makes these unique for AI solutions is that both AI developers and AI users must define their overall risk appetite and tolerance for AI use, and those definitions should be driven by business objectives and account for legal and regulatory requirements.
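In code terms, appetite sets the ceiling and tolerance sets the acceptable deviation above it. Here is a minimal sketch, assuming the numeric thresholds and the `within_tolerance` helper are hypothetical choices an organization would set for itself:

```python
RISK_APPETITE = 12   # maximum risk score the organization wants to take (assumed)
RISK_TOLERANCE = 3   # acceptable deviation above the appetite (assumed)

def within_tolerance(score: int) -> bool:
    """True if a risk score is acceptable given appetite plus tolerance."""
    return score <= RISK_APPETITE + RISK_TOLERANCE

print(within_tolerance(12))  # within appetite -> True
print(within_tolerance(14))  # above appetite but within tolerance -> True
print(within_tolerance(20))  # exceeds both -> False
```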
Responding to Risks
In the event that risks become reality, you need a defined approach for responding to risks. In most cases, there are four options:
- Avoid: do not develop or implement any AI solutions, sidestepping the risk altogether.
- Mitigate: establish strong AI policies and security controls that lessen the probability and impact of a risk event.
- Share/Transfer: enter a shared responsibility agreement with an AI vendor or third-party solution.
- Accept: fully accept the risk, which typically happens when the risk is deemed unavoidable.
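The four options above can be framed as a simple decision rule. The thresholds and the `vendor_shares_risk` flag below are hypothetical; a real organization would derive them from its own appetite, tolerance, and contracts:

```python
def choose_response(score: int, vendor_shares_risk: bool) -> str:
    """Hypothetical rule mapping a risk score to one of the four responses."""
    if score >= 20:
        return "avoid"           # too severe: do not build or deploy
    if vendor_shares_risk:
        return "share/transfer"  # shared responsibility agreement in place
    if score >= 8:
        return "mitigate"        # controls reduce probability and impact
    return "accept"              # low enough to absorb

print(choose_response(25, False))  # -> avoid
print(choose_response(10, True))   # -> share/transfer
print(choose_response(10, False))  # -> mitigate
print(choose_response(4, False))   # -> accept
```

In practice the choice is rarely this mechanical, but encoding the logic forces the organization to state its thresholds explicitly.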
Conclusion
According to NIST AI RMF 1.0, “AI risk management efforts should consider that humans may assume that AI systems work — and work well — in all settings”. This assumption not only exposes the organization to even more risk but also introduces a level of uncertainty.
Are you ready to complete your AI risk assessment?
To get started on your risk assessment journey, check out NIST, “Artificial Intelligence Risk Management Framework”.
Originally published at www.medium.com.
I love working with Internal Audit teams to help them leverage analytics and AI to make their lives easier. If that’s you, let’s chat!