All You Need to Know about the new EU AI Act from A-Z

BigID’s Verrion Wright offers a primer on all you need to know about the new EU AI Act from A-Z. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.

There’s a new kid on the privacy block. The world’s first Artificial Intelligence Act (AIA) entered into force this summer. For context, the AIA is a comprehensive framework for mitigating the risks posed by artificial intelligence (AI), first proposed by the European Commission in 2021. Many are already predicting that, just as GDPR set the precedent for privacy regulations, this regulation will set the AI standard for years to come.

So, what does this new AI Act mean for us and the broader AI community?

What Does the EU AI Act Mean?

As the first global powerhouse to pass comprehensive AI legislation, the EU recognized the fundamental need to ensure that AI systems are developed safely and securely. The EU AI Act was introduced to mitigate harm in areas where AI poses significant risks to fundamental rights, such as healthcare, education, public services, and border surveillance. The Act also regulates general-purpose AI (GPAI) models, emphasizing transparency, traceability, non-discrimination, and environmental friendliness.

Additionally, the legislation requires tech companies that develop AI technologies to produce a summary of the data used to train their models, report on that training data, and run regular risk assessments to mitigate risk and comply with the EU AI Act’s requirements. These layers keep humans in the loop, rather than relying on fully automated processes, to prevent bias, profiling, and dangerous or harmful outcomes.
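
To make the reporting requirement concrete, below is a minimal sketch of what a machine-readable training-data summary might look like. The Act does not prescribe a schema, so every field name here is an illustrative assumption rather than an official format.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical record structure. The Act requires a detailed summary
# of training content but does not prescribe a schema; these field
# names are illustrative assumptions.
@dataclass
class TrainingDataSummary:
    model_name: str
    data_sources: list[str]       # e.g. licensed corpora, public web crawls
    contains_personal_data: bool
    last_risk_assessment: date    # date of the most recent assessment
    mitigations: list[str] = field(default_factory=list)

summary = TrainingDataSummary(
    model_name="example-gpai-model",
    data_sources=["licensed news corpus", "public-domain books"],
    contains_personal_data=False,
    last_risk_assessment=date(2024, 8, 1),
    mitigations=["PII filtering", "bias evaluation"],
)

# Serialize for publication or for a regulator's request.
print(json.dumps(asdict(summary), default=str, indent=2))
```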

What Does the EU AI Act Restrict?

The EU AI Act places bans and restrictions on several uses of AI. Here are some practices the new legislation prohibits:

  • Indiscriminate, untargeted bulk scraping of biometric data, such as facial images from social media or CCTV footage, to create or expand facial recognition databases is prohibited. The ban also covers facial and emotion recognition systems in public settings like workplaces, border control, law enforcement, and education. Certain safety exceptions exist, such as using AI to detect when a driver is falling asleep, and law enforcement’s use of facial recognition is restricted to purposes like identifying victims of terrorism, kidnapping, and human trafficking.

  • The use of social scoring systems by public authorities to evaluate citizens’ compliance is also prohibited, because it can lead to discriminatory outcomes, injustice, and the exclusion of specific groups. AI systems that manipulate people’s behavior to influence or guide their decisions are banned; for example, targeted manipulation of content on social media and other platforms to pursue political or commercial goals is prohibited. Any operator of a system that creates manipulated media must disclose that fact to users.

  • Additionally, any AI system that profiles natural persons or groups to assess their risk of delinquency is banned. This prohibits tools that predict the occurrence or recurrence of a misdemeanor or crime based on profiling a person using traits and data like location or past criminal behavior.

  • Foundation model providers must also submit detailed summaries of the training data used to build their AI models.

These bans, among others, are categorized into different levels of risk in the EU AI Act depending on severity. The Act follows a risk-based approach to regulation, categorizing AI applications into four levels: the higher the risk, the stricter the governance. Let’s take a look at those risk levels (a short sketch mapping example use cases to the tiers follows the list):

  • Unacceptable Risk: An AI system categorized as an “unacceptable risk” poses a threat to people. Cognitive manipulation of behavior, social scoring, and certain uses of biometric systems fall under this class. The only exception is for law enforcement, and even that is limited to specific uses.

  • High Risk: AI systems that affect human safety or fundamental rights are considered high risk. This includes credit scoring systems and automated insurance claims. All high-risk systems will be strictly vetted through conformity assessments before they’re put on the market and continuously monitored throughout their lifecycle, and companies must register the product in an EU database. Powerful general-purpose models such as GPT-4 face comparable scrutiny under the Act’s separate GPAI obligations.

  • Limited Risk: AI tools like chatbots, deepfakes, and personalization features are considered “limited risk.” Companies providing such services must be transparent with customers about what their AI models are used for and the type of data involved.

  • Minimal Risk: For tools and processes that fall under “minimal risk,” the EU AI Act encourages companies to adopt a code of conduct ensuring AI is used ethically.
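
As a rough illustration of the risk-based approach, the sketch below maps a few example use cases onto the four tiers. The mapping is illustrative only; classifying a real system requires legal analysis of the Act’s prohibited practices and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment plus EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary code of conduct"

# Illustrative mapping only; real classification depends on the
# Act's annexes and proper legal analysis.
USE_CASE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```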

So What Happens When Organizations Aren’t Compliant? 

When the EU introduced GDPR in 2018, the world underwent a seismic shift as organizations scrambled to update their privacy notices and ensure transparency about how they govern their data. Since then, GDPR has accrued over $4 billion in fines. Similarly, the new EU AI Act carries severe financial penalties for companies found to be non-compliant.

According to the European Council, the EU AI Act will be enforced through each member state’s national competent market surveillance authorities. In addition, a new EU AI Office within the European Commission will set the standards and enforcement mechanisms for the new rules on GPAI models.

Violations of the EU AI Act will carry financial penalties, with fines determined by factors such as the type of AI system, the company’s size, and the extent of the violation. Each tier is set at a fixed amount or a percentage of global annual revenue, whichever is higher (a short arithmetic sketch follows the list). The penalty tiers are:

  • 7.5 million euros or 1.5 percent of a company’s global annual revenue (whichever is higher) for supplying incorrect information.

  • 15 million euros or 3 percent of a company’s global annual revenue (whichever is higher) for violating the EU AI Act’s obligations.

  • 35 million euros or 7 percent of a company’s global annual revenue (whichever is higher) for violations of banned AI applications.
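
Because each tier applies the fixed amount or the percentage of revenue, whichever is higher, the effective fine scales with company size. A minimal sketch, assuming a hypothetical company with 2 billion euros in global annual revenue:

```python
def ai_act_fine(eur_cap: float, pct_of_revenue: float, global_revenue: float) -> float:
    """Each penalty tier is a fixed euro amount or a percentage of
    global annual revenue, whichever is HIGHER."""
    return max(eur_cap, pct_of_revenue * global_revenue)

# Hypothetical company with EUR 2 billion in global annual revenue
# deploying a banned AI application: 7% of revenue (EUR 140M)
# exceeds the EUR 35M floor, so the percentage applies.
revenue = 2_000_000_000
print(f"EUR {ai_act_fine(35_000_000, 0.07, revenue):,.0f}")  # EUR 140,000,000
```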

Additionally, smaller companies and start-ups may catch a break: based on the negotiations, the EU AI Act caps their fines at levels proportionate to their size.

Strategies to Ensure Compliance with the EU AI Act

Now that the EU AI Act is in force, it’s not too late for leaders to take steps to ensure they are adopting AI responsibly, including:

  • The biggest AI risk leaders will face is the data being fed into models. Leaders should use a deep data discovery platform to accurately identify, classify, tag, and label the data that AI is trained on throughout its development lifecycle.

  • Organizations should maintain a proverbial data purgatory: an intermediate staging area where IT leaders can check data for sensitive information and remove it before feeding it to the AI (a minimal sketch of this idea follows the list).

  • The worst time to prepare for a potential disaster is after it happens. Adopting AI models responsibly requires collaboration across the organization, and with experts in the field, to stay up to date on the workings and weaknesses of AI systems.

  • By staying current, organization leaders can continuously tailor their data management and security systems to the latest threats.

  • Assess data risk and quickly provide reporting to EU regulatory authorities.

  • Adopt AI safely by building out policies for data usage across the organization.
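
Tying the first two strategies together, here is a minimal sketch of the “data purgatory” idea: records sit in a staging area, anything matching a sensitive-data pattern is held for review, and only clean records flow to training. A production system would use a dedicated discovery and classification platform rather than regexes; the patterns and function names below are illustrative assumptions.

```python
import re

# Illustrative sensitive-data patterns; a real deployment would use
# a dedicated data discovery and classification platform.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def quarantine(records: list[str]) -> tuple[list[str], list[str]]:
    """Split records into (clean, held) before any training run."""
    clean, held = [], []
    for record in records:
        if any(p.search(record) for p in SENSITIVE_PATTERNS.values()):
            held.append(record)   # stays in purgatory for human review
        else:
            clean.append(record)  # safe to feed to the training pipeline
    return clean, held

clean, held = quarantine([
    "The quarterly report is attached.",
    "Contact jane.doe@example.com for the invoice.",
])
print(f"{len(clean)} clean record(s), {len(held)} held for review")
```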
