EU's Landmark AI Regulation Set to Transform Industry
The European Union's landmark regulation for artificial intelligence (AI), known as the AI Act, comes into force on August 1. The legislation takes a phased approach, with key deadlines landing between now and mid-2026, when the majority of its provisions become fully applicable to AI developers.
Regulation Framework and Risk Tiers
The AI Act introduces a risk-based regulatory framework, imposing different obligations depending on an AI application's use case and perceived risk. Most AI uses are deemed low risk and will not be regulated. A limited set of high-risk use cases, however, such as biometrics, law enforcement, employment, education, and critical infrastructure, will face rigorous requirements on data quality and bias prevention.
For general-purpose AI (GPAI) models like OpenAI's GPT, which powers ChatGPT, transparency mandates will apply. The most powerful GPAIs, typically identified based on computational thresholds, will additionally need to conduct systemic risk assessments. Despite significant resistance from some Member States concerned about the potential impact on Europe's competitive edge against AI giants in the US and China, these obligations remain part of the regulation.
Phased Implementation Timeline
The AI Act will implement its regulations in a phased manner:
Early 2025: Prohibited Uses
Six months after entry into force, in early 2025, the AI Act will ban certain high-risk applications. Prohibited uses will include China-style social credit scoring, untargeted scraping of facial recognition data, and real-time remote biometric identification by law enforcement in public spaces, barring specific exceptions such as searches for missing persons.
April 2025: Codes of Practice
By April 2025, nine months after entry into force, developers of in-scope AI applications must adhere to established codes of practice. The AI Office, the body overseeing the Act, will provide these codes, but questions remain over who will actually draft them, and concerns have arisen that AI industry players could shape the rules in their favor. Following pressure for a more inclusive process, the AI Office recently announced plans to invite stakeholders to participate in the drafting.
August 1, 2025: Transparency for GPAIs
A year after entry into force, on August 1, 2025, transparency provisions for GPAI models will begin to apply, with the aim of fostering trust and accountability among users and developers of general-purpose AI systems.
2027: High-Risk AI Compliance
High-risk AI systems face staggered compliance deadlines: a subset of these applications will have until 2027, 36 months after entry into force, to meet their obligations, while the rest must comply within 24 months, by August 2026.
This progressive rollout gives both developers and regulators time to prepare for and adjust to the new legal requirements. The AI Act represents a significant step towards a robust regulatory environment for AI in Europe, aiming to address potential risks while fostering innovation and safeguarding public trust.