The EU AI Act is not a single switch that flips on one day. It is a phased rollout that started in August 2024 and won't be fully complete until August 2027 — with some provisions stretching to 2030.
If you build or deploy AI systems that touch the EU market, here is exactly what happened, what's in force right now, and what is coming next.
What's already in effect
August 1, 2024 — The Act enters into force
The EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal on July 12, 2024 and entered into force 20 days later. No obligations applied yet — this started the clock on the phased implementation periods.
February 2, 2025 — Prohibited practices and AI literacy
Two categories of obligations kicked in six months after entry into force:
Prohibited AI practices (Article 5) — eight categories of AI systems are now banned outright:
- Subliminal, manipulative, or deceptive techniques that distort behavior
- Exploitation of vulnerabilities based on age, disability, or socioeconomic status
- Social scoring by public or private actors
- Predictive policing based solely on profiling individuals
- Untargeted scraping of internet or CCTV to build facial recognition databases
- Emotion recognition in workplaces and educational institutions
- Biometric categorization to infer protected characteristics (race, political opinions, etc.)
- Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
Violations of these prohibitions carry the highest fines: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
AI literacy (Article 4) — all providers and deployers of AI systems must ensure their staff have sufficient understanding of AI to use and oversee it responsibly. This applies to every organization using AI, not just high-risk providers.
August 2, 2025 — GPAI obligations and national authorities
Obligations for providers of general-purpose AI models (like foundation models and large language models) took effect. This includes transparency requirements, copyright compliance measures, and systemic risk assessments for the most powerful models.
The GPAI Code of Practice was published on July 10, 2025 to give providers a practical compliance framework.
Member states were also required to designate their national competent authorities by this date. However, 19 of 27 countries had not done so — including Germany, France, Belgium, Italy, and Austria — creating enforcement uncertainty in Europe's largest markets.
What's coming next
August 2, 2026 — The big one
This is the date most organizations should be preparing for. The full requirements for high-risk AI systems under Annex III become enforceable. These cover AI systems used in:
- Biometrics — remote identification, biometric categorization
- Critical infrastructure — energy, transport, water, digital networks
- Education — admissions, grading, proctoring, learning assessment
- Employment — recruitment, screening, task allocation, performance monitoring, termination decisions
- Essential services — credit scoring, insurance pricing, social benefit eligibility
- Law enforcement — evidence assessment, risk profiling
- Migration and border control — risk assessment, document verification
- Justice and democratic processes — legal research tools, case outcome prediction
If your AI system falls into any of these categories, by August 2026 you must have:
- A risk management system (Article 9)
- Data governance practices for training and validation data (Article 10)
- Technical documentation meeting the Annex IV requirements (Article 11)
- Record-keeping and logging capabilities (Article 12)
- Transparency measures and information for deployers (Article 13)
- Human oversight mechanisms (Article 14)
- Accuracy, robustness, and cybersecurity standards (Article 15)
- A completed conformity assessment (Article 43)
- Post-market monitoring and incident reporting systems (Article 72)
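The obligations above lend themselves to a simple self-audit checklist. Below is an illustrative sketch, not an official schema: the article numbers come from the Act itself, but the structure and the `outstanding` helper are our own invention.

```python
# Hypothetical self-audit checklist for the Annex III high-risk obligations.
# Article numbers are from the AI Act; everything else is illustrative.
REQUIREMENTS = {
    "Art. 9":  "Risk management system",
    "Art. 10": "Data governance for training and validation data",
    "Art. 11": "Technical documentation (Annex IV)",
    "Art. 12": "Record-keeping and logging",
    "Art. 13": "Transparency and information for deployers",
    "Art. 14": "Human oversight mechanisms",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
    "Art. 43": "Conformity assessment",
    "Art. 72": "Post-market monitoring and incident reporting",
}

def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked complete."""
    return [f"{art}: {desc}" for art, desc in REQUIREMENTS.items()
            if art not in completed]

# Example: only risk management and logging are done so far.
print(len(outstanding({"Art. 9", "Art. 12"})))  # 7 items remain
```

A spreadsheet does the same job; the point is that every one of the nine obligations needs an owner and a status before August 2026.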
Also starting August 2026: transparency obligations under Article 50 apply regardless of risk class. If your system interacts with humans, generates synthetic content, or produces deepfakes, you must disclose that.
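For synthetic content, the Act expects the disclosure to be machine-readable. The Act does not prescribe a format (industry standards such as C2PA are one real option); the sketch below simply illustrates the idea of pairing generated output with a provenance record, and the field names are our own.

```python
def with_ai_disclosure(content: str, generator: str) -> dict:
    """Pair generated content with a machine-readable provenance record.
    Illustrative only: the AI Act requires disclosure but does not
    mandate this particular structure."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,   # the Article 50-style disclosure flag
            "generator": generator, # which system produced the content
        },
    }

record = with_ai_disclosure("Quarterly summary text...", "acme-summarizer-v2")
print(record["provenance"]["ai_generated"])  # True
```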
The Commission also gains enforcement powers for GPAI obligations, including the ability to levy fines.
August 2, 2027 — Final wave
The last provisions become applicable:
- Annex I high-risk systems — AI embedded in products already regulated by EU product safety legislation (machinery, medical devices, toys, vehicles, aviation) must comply
- Providers of GPAI models placed on the market before August 2025 get until this date to fully comply
After August 2027, the EU AI Act applies in full across all categories, with only a narrow legacy carve-out for certain existing public sector systems.
August 2, 2030 — Public sector AI
High-risk AI systems intended for use by public authorities that were already in use before August 2026 must be brought into compliance by this date.
The Digital Omnibus: will deadlines change?
On November 19, 2025, the European Commission published the Digital Omnibus proposal — a legislative package that could modify several AI Act timelines.
The key change: a conditional "stop the clock" mechanism for high-risk obligations. If harmonized standards are not ready by August 2026, the Omnibus would allow a targeted delay of up to 16 months for specific categories:
- Annex III systems (standalone high-risk): compliance required 6 months after standards are confirmed, with a long-stop date of December 2, 2027
- Annex I systems (product-embedded): compliance required 12 months after standards are confirmed, with a long-stop date of August 2, 2028
This is not a guaranteed reprieve. It is a contingency plan that still requires companies to demonstrate "good faith" effort toward compliance. And it is not yet law — the proposal must be approved by the European Parliament and Council. EU policymakers are targeting negotiations to conclude before August 2026, but the timeline is tight.
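The conditional deadlines above reduce to a small calculation: the earlier of "standards confirmed plus the offset" and the category's long-stop date. A minimal sketch, assuming the Omnibus is adopted as proposed (the dates and offsets are from the proposal; the function itself is just illustration):

```python
from datetime import date

# (long-stop date, months of runway after standards are confirmed)
LONG_STOP = {
    "annex_iii": (date(2027, 12, 2), 6),
    "annex_i":   (date(2028, 8, 2), 12),
}

def add_months(d: date, months: int) -> date:
    """Naive month arithmetic; assumes the day exists in the target month."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

def compliance_deadline(category: str, standards_confirmed: date) -> date:
    """Earlier of (confirmation + runway) and the category's long-stop date."""
    long_stop, offset = LONG_STOP[category]
    return min(add_months(standards_confirmed, offset), long_stop)

# If Annex III standards were confirmed on 1 March 2027:
print(compliance_deadline("annex_iii", date(2027, 3, 1)))  # 2027-09-01
```

Note that a late confirmation changes nothing: once the long-stop date is closer than the runway, the long-stop wins.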
The safe approach: prepare as if August 2026 is the hard deadline, because it might be.
The Commission's missed deadline
The Commission was legally required to publish guidelines on Article 6 (high-risk classification) by February 2, 2026. It missed that deadline. Final adoption of these guidelines is expected in March or April 2026.
This delay has left developers and national regulators without the legal clarity they need, especially in France, Germany, and Spain. If you are uncertain whether your system qualifies as high-risk, you shouldn't wait for official guidance to start preparing.
Fines at a glance
| Violation | Maximum fine | % of turnover |
|---|---|---|
| Prohibited practices (Article 5) | EUR 35 million | 7% |
| High-risk and other obligations | EUR 15 million | 3% |
| Incorrect information to authorities | EUR 7.5 million | 1% |
For SMEs and startups, the lower of the two amounts (fixed EUR amount vs. turnover percentage) applies. For larger organizations, the higher amount applies.
What to do now
If you build or deploy AI in the EU market, here is a practical starting point:
- Determine your risk classification. Is your system high-risk under Annex III? This is the single most important question. If you're not sure, Annexa's free risk triage tool can help you classify your system in minutes.
- Start your technical documentation. Annex IV requires detailed documentation covering your system's purpose, architecture, training data, risk management, performance metrics, human oversight, and monitoring plan. Starting early is easier than rushing.
- Don't wait for the Digital Omnibus. Even if the "stop the clock" mechanism is adopted, you still need to show good faith compliance effort. Starting now reduces risk either way.
- Track developments. Finland became the first EU country with fully operational enforcement powers in January 2026. Spain's AESIA is operational. Other countries are catching up. Enforcement infrastructure is being built whether you're ready or not.
The window between now and August 2026 is narrow. The organizations that start now will be the ones that aren't scrambling later.