Privacy Law
EU AI Act Overview
By Linsey Krolik
The EU Artificial Intelligence Act (AI Act) will be the first comprehensive legislation to broadly regulate AI, with effects in Europe and beyond.
On December 9, 2023, the European Parliament and the Council reached a political agreement on the AI Act, and on January 26, 2024, the final compromise text was released. Next steps include the European Parliament's plenary vote on March 13, 2024, followed by publication in the Official Journal of the European Union.
- Broad definition of "AI system" – aligned with the OECD's definition (recently updated in anticipation of the AI Act being finalized).
- Broad applicability – The rules will apply to AI systems placed on the EU market or whose use affects people located in the EU. They will apply to both providers (e.g., a developer of a resume-screening tool) and deployers (e.g., a bank buying that screening tool) of AI systems.
- Risk-based – The scope of obligations depends on the type of AI system, its purpose and the risk that it presents:
- Unacceptable-risk – Some AI systems are banned. Examples: social scoring systems and some uses of biometric systems (with narrow exemptions for law enforcement).
- High-risk – AI systems posing a high risk to safety or fundamental rights will be strictly regulated, with requirements including risk mitigation, fundamental rights impact assessments, data quality, activity logging, detailed documentation, clear user information, human oversight, robustness, accuracy, cybersecurity, and monitoring. The list of AI systems considered high-risk can be updated in the future. Examples: some critical infrastructures, medical devices, education, employment, essential private and public services and benefits, creditworthiness, life and health insurance, administration of justice and democratic processes, and some biometric identification, categorization, and emotion recognition systems.
- Limited-risk – For AI systems with a clear risk of manipulation, specific transparency requirements are imposed. In other words, users should be aware that they are interacting with a machine. Examples: chatbots and deep fakes.
- Minimal-risk – AI systems that present minimal or no risk to people's rights or safety will be given a "free pass." Examples: AI-enabled recommender systems and spam filters.
- General-purpose AI – Powerful general-purpose AI and "foundation models" that pose systemic risks will be subject, in addition to transparency and copyright safeguards, to obligations such as risk management, monitoring of serious incidents, model evaluation and adversarial testing, and reporting of energy consumption information. Examples include OpenAI's GPT-4 and Google DeepMind's Gemini.
- Fines – Fines for violations of the AI Act will range from €7.5m to €35m, or 1.5% to 7% of global annual turnover, depending on the violation (a minimal sketch after this list illustrates how these bands might combine). Caps are available for small and medium-sized enterprises (SMEs) and startups.
- Enforcement / Governance Structure – An AI Board and an AI Office will be created, along with a Scientific Panel of independent experts. National market surveillance authorities will enforce the rules, and any individual will be able to submit complaints.
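Purely as an illustration of how the penalty bands above work, the Python sketch below computes a fine ceiling as the higher of a fixed amount and a turnover-based percentage, with the SME cap taken as the lower of the two. The three-tier mapping and the "whichever is higher/lower" rules reflect the December 2023 political agreement as commonly reported, not the final legal text, so treat the figures and the `max_fine` helper as hypothetical.

```python
# Hypothetical illustration of the AI Act's penalty bands; the tier amounts
# follow the December 2023 political agreement as commonly reported.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # banned (unacceptable-risk) uses
    "other_obligations": (15_000_000, 0.03),       # e.g., high-risk requirements
    "incorrect_information": (7_500_000, 0.015),   # supplying wrong info to authorities
}

def max_fine(tier: str, global_annual_turnover: float, is_sme: bool = False) -> float:
    """Return the fine ceiling for a violation tier.

    Assumes the fixed amount and the turnover-based amount combine as
    "whichever is higher," with SMEs and startups capped at the lower of the two.
    """
    fixed_amount, pct = TIERS[tier]
    turnover_based = pct * global_annual_turnover
    if is_sme:
        return min(fixed_amount, turnover_based)
    return max(fixed_amount, turnover_based)

# A large provider with €2bn turnover violating a prohibition:
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0 (7% exceeds €35m)
```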
The rules will enter into force 20 days after publication in the Official Journal, with different parts becoming applicable at different times, between 6 and 36 months after entry into force.