How the EU AI Act Impacts US Businesses
As AI adoption becomes more widespread, concerns persist about its safe and ethical use. In the US, the federal government has not passed a comprehensive law regulating AI; a handful of states have enacted AI-specific laws, and AI is often captured by the profiling and automated decision-making provisions of state privacy laws. Political leaders in Europe have been more proactive with AI regulation, enacting the EU AI Act, a law that affects companies globally. Here is a breakdown of how the EU AI Act impacts US businesses.
Who Does the EU AI Act Apply To?
Much like the GDPR, the European Union AI Act creates compliance risk for organizations based outside of the EU. The law applies to any organization that meets at least one of the following criteria:
- The business is a “provider” that develops an AI system and places it on the EU market.
- The business is a “deployer” that uses an AI system whose output is used in the EU.
- The business imports or distributes AI systems in the EU.
Common scenarios that would bring a US company into the scope of the EU AI Act include the following (a simple scoping sketch appears after the list):
- Foundation-model companies offering chatbots/LLMs to EU users
- SaaS vendors with EU customers
- US companies embedding AI in products sold in the EU
- US employers with EU staff deploying workplace AI
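For teams doing a first-pass applicability check, the criteria above can be expressed as a simple screen. This is a minimal, illustrative sketch, not legal advice; the `Org` structure and its field names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class Org:
    """Hypothetical scoping facts about an organization (illustrative only)."""
    places_ai_on_eu_market: bool        # acts as a "provider" on the EU market
    output_used_in_eu: bool             # acts as a "deployer" whose output reaches the EU
    imports_or_distributes_in_eu: bool  # imports or distributes AI systems in the EU

def eu_ai_act_in_scope(org: Org) -> bool:
    """Rough first-pass screen; real scoping requires legal review."""
    return (
        org.places_ai_on_eu_market
        or org.output_used_in_eu
        or org.imports_or_distributes_in_eu
    )

# Example: a US SaaS vendor whose AI output reaches EU customers
print(eu_ai_act_in_scope(Org(False, True, False)))  # True
```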
EU AI Act Rules, Requirements, and Risks
US businesses with AI products available on the EU market or to European users fall within the scope of the AI Act. Here is a look at the law's requirements and the risks associated with noncompliance.
AI Act Risk Classifications
The EU AI Act divides AI systems into four risk categories, each with its own requirements and compliance risks. A short sketch modeling the tiers follows the category descriptions below.
Unacceptable Risk: AI functions that fall into the “Unacceptable Risk” category are prohibited. Examples of prohibited AI systems include those that:
- Deploy subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making.
- Exploit vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior.
- Infer sensitive attributes such as race, political opinions, union membership, religious or philosophical beliefs, sex life, or sexual orientation through biometric categorization.
- Evaluate or classify individuals or groups based on social behavior or personal traits (social scoring).
- Compile facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
High Risk: AI systems that are considered “High Risk” are subject to the following additional requirements:
- Establish a risk management system
- Conduct data governance to ensure datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors
- Produce technical documentation
- Integrate automatic record-keeping (logging) into the AI system, as shown in the sketch after this list
- Provide instructions for use
- Design AI systems to allow human oversight
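To make the record-keeping obligation concrete, here is a minimal sketch of automatic event logging for a high-risk system. The log schema, field names, and `log_decision` helper are illustrative assumptions; the Act requires automatic recording of events but does not prescribe a format.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_decision(model_version: str, input_ref: str, output: str,
                 reviewer: Optional[str] = None) -> None:
    """Record one AI-assisted decision, including any human oversight."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,      # reference to stored input, not raw personal data
        "output": output,
        "human_reviewer": reviewer,  # supports the human-oversight requirement
    }
    logger.info(json.dumps(event))

log_decision("screener-v2.3", "candidate-8841", "advance", reviewer="j.doe")
```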
As a reference, AI use cases that are considered High Risk include:
- Managing critical infrastructure like roads, water, gas, heating, and electricity supply
- Determining admission to educational institutions and monitoring student behavior
- Recruiting and screening job candidates
- Assessing risk for life and health insurance
- Evaluating creditworthiness
- Criminal profiling
- Assessing health risks
Limited Risk: This risk category applies to AI systems that pose lower risk but still require a level of transparency. Examples include chatbots; AI tools that generate or edit images, text, and audio; image recognition tools; and AI-powered search tools. Requirements for the Limited Risk tier include:
- Clearly indicate when a user is interacting with an AI system
- Tell users when content was produced by AI (a disclosure sketch follows this list)
- Make transparency information easily accessible
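As one way to satisfy the disclosure requirements above, a chatbot backend could attach an explicit AI notice to every response. This is a hedged sketch under assumed names (`BotReply`, `wrap_reply`); the Act does not prescribe a specific mechanism or wording.

```python
from typing import TypedDict

class BotReply(TypedDict):
    """Illustrative response envelope; field names are assumptions."""
    text: str
    ai_disclosure: str  # surfaced in the UI so users know they are talking to AI

def wrap_reply(model_text: str) -> BotReply:
    """Attach a transparency notice to chatbot output."""
    return {
        "text": model_text,
        "ai_disclosure": (
            "You are chatting with an AI system. "
            "This content was generated by AI."
        ),
    }

print(wrap_reply("Your order shipped on Tuesday.")["ai_disclosure"])
```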
Minimal Risk: This category is for AI systems that pose no significant threat to the rights or safety of the public. These systems are not regulated. Examples include AI-enabled video games and spam filters.
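When building an internal AI inventory, the four tiers above can be modeled as a simple enumeration. The example mappings below are illustrative assumptions; actual classification depends on the specific use case and legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk obligations apply"
    LIMITED = "transparency obligations apply"
    MINIMAL = "not regulated"

# Illustrative first-pass mapping for an internal AI inventory
inventory = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "resume screening tool": RiskTier.HIGH,
    "customer support chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in inventory.items():
    print(f"{use_case}: {tier.value}")
```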
General Purpose AI
The EU AI Act has specific requirements for General Purpose AI (GPAI) models. GPAI models are large, flexible systems that can perform a variety of tasks. Large language models are an example of GPAI.
All GPAI model providers must meet the following requirements (a documentation sketch follows the list):
- Produce technical documentation, including training and testing process and evaluation results.
- Provide downstream providers that intend to integrate the GPAI model into their own AI system with the information needed to understand the GPAI’s capabilities and limitations.
- Establish a policy to comply with the EU Copyright Directive.
- Publish a detailed summary about the data used to train the GPAI model.
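One lightweight way to track these obligations is to keep a structured documentation record alongside the model. The `GPAIDocs` structure and its fields loosely mirror the requirements listed above; the structure itself is an assumption for illustration, not a template defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class GPAIDocs:
    """Illustrative documentation record for a GPAI model provider."""
    model_name: str
    training_process: str      # technical documentation: training and testing
    evaluation_results: str    # technical documentation: evaluation results
    downstream_guidance: str   # capabilities/limitations for integrators
    copyright_policy_url: str  # policy complying with the EU Copyright Directive
    training_data_summary: str # published summary of training data

docs = GPAIDocs(
    model_name="example-llm",
    training_process="Pretrained on licensed and public text; fine-tuned with RLHF.",
    evaluation_results="See evaluation report v1.2.",
    downstream_guidance="Intended for text tasks; not for biometric inference.",
    copyright_policy_url="https://example.com/copyright-policy",
    training_data_summary="Published at https://example.com/data-summary.",
)
```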
Key Dates
The EU AI Act is being rolled out in phases, with much of the law already enforceable.
- February 2nd, 2025 – the ban on unacceptable risk systems took effect
- August 2nd, 2025 – obligations for GPAI models, including transparency requirements, took effect
- August 2nd, 2026 – High-risk systems must comply with the corresponding requirements
Penalties
Penalty amounts are tiered for EU AI Act violations. Violating the prohibited practices (unacceptable risk) provisions of the law can result in fines of up to €35M (approximately $41M) or 7% of global annual revenue, whichever is higher. Penalties for violations other than prohibited practices can reach €15M or 3% of global annual revenue, whichever is higher. Providing incomplete or misleading information to authorities can result in a fine of up to €7.5M or 1% of global annual revenue, whichever is higher.
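Because each tier is capped at the higher of a fixed amount or a percentage of worldwide annual revenue, the applicable ceiling is a simple maximum. Here is a worked sketch using a hypothetical company with €2B in global annual revenue:

```python
def max_fine(fixed_eur: float, pct: float, global_revenue_eur: float) -> float:
    """EU AI Act fines are capped at the higher of a fixed amount
    or a percentage of global annual revenue."""
    return max(fixed_eur, pct * global_revenue_eur)

revenue = 2_000_000_000  # hypothetical €2B global annual revenue
print(max_fine(35_000_000, 0.07, revenue))  # prohibited practices: €140M
print(max_fine(15_000_000, 0.03, revenue))  # other violations: €60M
print(max_fine(7_500_000, 0.01, revenue))   # misleading information: €20M
```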
CompliancePoint can help your organization develop and use AI systems in a manner compliant with the EU AI Act, including through ISO 42001 certification and NIST AI RMF compliance. Reach out to us at connect@compliancepoint.com to learn more about our services.
Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.