The complete and final text of the EU AI Act, the European Union's landmark risk-based regulation for artificial intelligence applications, has been officially published in the bloc's Official Journal. Twenty days later, on August 1, 2024, this groundbreaking law will come into force, kicking off a series of legal deadlines for AI developers that will culminate by mid-2026.
The AI Act takes a phased approach to implementation: deadlines fall at staggered intervals between now and mid-2026, and a few provisions apply even later than that.
Political Agreement and Framework
EU lawmakers reached a political agreement on the bloc's first comprehensive AI rulebook in December 2023. The framework imposes varying obligations on AI developers based on the use cases and perceived risks associated with their technologies. While the majority of AI applications, deemed low-risk, will not face regulation, a small number of potential use cases are banned outright under the law.
High-risk use cases, including biometric applications of AI and AI used in law enforcement, employment, education, and critical infrastructure, are permitted but come with stringent obligations related to data quality and anti-bias measures. Additionally, a third risk tier imposes lighter transparency requirements for developers of tools like AI chatbots.
For developers of general-purpose AI (GPAI) models, such as OpenAI's GPT series that powers ChatGPT, there are also specific transparency requirements. The most powerful GPAIs, generally identified by compute thresholds (training compute above 10^25 floating-point operations), may additionally be required to conduct systemic risk assessments.
Industry Lobbying
The AI industry, supported by several Member States’ governments, has lobbied heavily to reduce the obligations on GPAIs, arguing that stringent regulations could stifle Europe’s ability to develop homegrown AI giants capable of competing with the U.S. and China.
Phased Implementation: Key Deadlines
- Prohibited Uses of AI (February 2025): Six months after the law comes into force, in February 2025, the list of prohibited AI uses will take effect. Banned use cases include China-style social credit scoring, compiling facial recognition databases through untargeted scraping of the internet or CCTV footage, and the use of real-time remote biometrics by law enforcement in public places, with narrow exceptions such as searches for missing or abducted persons.
- Codes of Practice (May 2025): Nine months after entry into force, around May 2025, codes of practice for in-scope AI developers are due. The EU's AI Office, tasked with providing these codes, is facing scrutiny over who will draft the guidelines, amid concerns that AI industry players might unduly influence the rule-making process. In response, the AI Office will call for expressions of interest so that a diverse range of stakeholders can take part in drafting the codes of practice for GPAIs.
- Transparency Requirements for GPAIs (August 2025): Twelve months after the law comes into force, on August 1, 2025, transparency requirements for GPAIs will begin to apply.
- High-Risk AI Systems Compliance (2026–2027): Most high-risk AI systems must comply within 24 months, by August 2026. A subset of high-risk systems has been granted an extended 36-month deadline, giving them until August 2027 to meet their obligations.
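The deadlines above are all offsets from the entry-into-force date, so they can be worked out mechanically. A minimal sketch, assuming entry into force on August 1, 2024 as the article states (the milestone labels and month offsets below are illustrative, taken from the list above):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d` (same day of month)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

# Assumed entry-into-force date, per the article.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Month offsets from the phased implementation list above.
milestones = {
    "Prohibited uses apply": 6,
    "Codes of practice due": 9,
    "GPAI transparency rules apply": 12,
    "Most high-risk systems must comply": 24,
    "Extended high-risk deadline": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```

Running this places the prohibitions at February 2025, the codes of practice at May 2025, GPAI transparency at August 2025, and the two high-risk deadlines at August 2026 and August 2027, matching the timeline above.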
As the EU AI Act's implementation unfolds, AI developers and stakeholders across Europe will need to navigate these staggered deadlines and obligations to ensure compliance with the landmark regulation.