The EU AI Act does not rely on goodwill. It backs its requirements with some of the largest fines in European regulatory history, larger even than the GDPR's.
If your company provides or deploys AI systems in the EU and you are not working on compliance, this is what you are risking.
The three penalty tiers
Article 99 of the AI Act establishes a tiered fine structure based on the severity of the violation:
Tier 1: Prohibited practices — up to €35 million or 7% of global turnover
The most severe penalties apply to violations of Article 5, which bans certain AI practices outright. These include:
- Social scoring by public and private actors
- Real-time remote biometric identification in public spaces for law enforcement (with limited exceptions)
- Subliminal manipulation techniques that cause harm
- Exploitation of vulnerabilities of specific groups (age, disability, social or economic situation)
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Biometric categorization based on sensitive attributes (race, political opinions, sexual orientation)
- Predictive policing based solely on profiling
These prohibitions have been in force since February 2, 2025. If you are running any of these systems in the EU today, you are already in violation.
The fine is whichever is higher: €35 million or 7% of the company's total worldwide annual turnover from the preceding financial year. For a company with €1 billion in global revenue, that is up to €70 million.
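Because the cap depends on turnover, it can help to see the rule as arithmetic. Here is a minimal Python sketch of the higher-of rule; the tier labels and turnover figure are illustrative, and only the caps and percentages come from Article 99:

```python
# Article 99 fine caps for standard (non-SME) companies: the applicable
# maximum is the HIGHER of a fixed amount and a share of worldwide turnover.
# Tier labels are illustrative, not the Act's terminology.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Tier 1
    "other_violations": (15_000_000, 0.03),       # Tier 2
    "incorrect_information": (7_500_000, 0.01),   # Tier 3
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and the turnover share."""
    fixed_cap, share = TIERS[tier]
    return max(fixed_cap, share * worldwide_turnover_eur)

# The 1-billion-euro example from above: max(35M, 7% of 1B) = 70M
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```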
Tier 2: Non-compliance with AI Act requirements — up to €15 million or 3% of global turnover
This tier covers most of the obligations that matter to AI providers and deployers:
- Failing to comply with high-risk AI system requirements (risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, cybersecurity)
- Not completing the required conformity assessment before placing a high-risk system on the market
- Failing to register high-risk AI systems in the EU database
- Not implementing a quality management system
- Failing to meet transparency obligations for limited-risk systems (chatbots, deepfakes, emotion recognition)
- Non-compliance with obligations for general-purpose AI models (technical documentation, copyright compliance, transparency)
This is the tier that will affect the most companies. If you have a high-risk AI system and you have not produced Annex IV technical documentation, you are exposed to fines in this category.
Tier 3: Incorrect information — up to €7.5 million or 1% of global turnover
Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities — whether in response to a request or as part of a conformity assessment — triggers this tier.
This is not just about lying. Submitting a conformity assessment with gaps you should have caught, or providing incomplete technical documentation, could fall under this provision.
The SME exception
The AI Act includes a meaningful protection for small and medium-sized enterprises, including startups.
For large companies, the fine is whichever is higher between the fixed amount (€35M/€15M/€7.5M) and the percentage of turnover (7%/3%/1%). For SMEs and startups, it is whichever is lower.
This means a startup with €2 million in annual revenue faces a maximum Tier 1 fine of €140,000 (7% of turnover) rather than €35 million. Still significant, but not company-ending in the same way.
The Act also mandates that penalties be "effective, proportionate, and dissuasive" — meaning regulators must consider the size and resources of the company when setting fines.
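Continuing the sketch above, the SME rule simply swaps max for min (the €2 million turnover is the example figure from above):

```python
# Article 99(6): for SMEs and startups the applicable maximum is the
# LOWER of the fixed cap and the turnover share.
def max_fine_sme(fixed_cap_eur: float, share: float, worldwide_turnover_eur: float) -> float:
    """Return the SME maximum fine: the lower of the fixed cap and the turnover share."""
    return min(fixed_cap_eur, share * worldwide_turnover_eur)

# Tier 1 exposure for a startup with 2 million euros in annual turnover:
print(max_fine_sme(35_000_000, 0.07, 2_000_000))  # 140000.0, i.e. 140,000 euros
```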
Who enforces the fines
Enforcement is split between EU-level and national authorities:
- The European AI Office (operational since mid-2024) has exclusive jurisdiction over general-purpose AI model obligations, which have applied since August 2, 2025. It supervises and investigates GPAI providers directly; its formal fining powers under Article 101 apply from August 2, 2026.
- National market surveillance authorities handle enforcement for all other AI system obligations. Each EU member state must designate at least one such authority.
- National data protection authorities may also be involved where AI systems process personal data, creating overlap with GDPR enforcement.
As of March 2026, most member states have designated or are in the process of designating their national competent authorities. Spain has designated the Agencia Española de Supervisión de la Inteligencia Artificial (AESIA) as its national authority — one of the first dedicated AI supervision agencies in Europe.
What triggers enforcement
Regulators are unlikely to chase every company on day one. Based on how GDPR enforcement unfolded, expect this pattern:
- Complaints-driven investigations. A competitor, customer, or affected person files a complaint about your AI system. The regulator investigates.
- Market surveillance sweeps. Authorities conduct sector-wide checks on high-risk domains: hiring tools, credit scoring, biometric systems. If your system is in one of these categories, you may be audited.
- High-profile incidents. An AI system causes harm (wrongful denial of benefits, discriminatory hiring decisions, safety failures). The regulator examines whether compliance obligations were met.
- Failure to register. High-risk AI systems must be registered in the EU database before being placed on the market. Non-registration is easy to detect and easy to enforce.
- Whistleblowers. The AI Act explicitly protects whistleblowers who report violations (Article 87), creating a mechanism for insiders to flag non-compliance.
The European AI Office has already begun informal compliance reviews of technical documentation from major GPAI providers. The formal enforcement powers for high-risk AI systems activate on August 2, 2026.
Beyond fines: other consequences
Financial penalties are not the only risk. Non-compliance can also lead to:
- Market withdrawal orders. Authorities can require you to remove your AI system from the EU market entirely.
- Product recalls. For AI systems embedded in products (medical devices, vehicles, machinery), regulators can order recalls.
- Public naming. The AI Act allows authorities to publish details of non-compliance, creating reputational damage.
- Contract losses. EU companies are increasingly adding AI Act compliance to procurement requirements. If you cannot demonstrate compliance, you lose the deal.
- Insurance implications. As the AI liability framework develops, non-compliance with the AI Act may affect your ability to obtain or maintain insurance coverage.
Comparison with other regulations
| Regulation | Fixed maximum | % of global turnover |
|---|---|---|
| EU AI Act (prohibited practices) | €35 million | 7% |
| EU AI Act (other violations) | €15 million | 3% |
| EU AI Act (incorrect information) | €7.5 million | 1% |
| GDPR | €20 million | 4% |
| Digital Services Act | N/A | 6% |
| Digital Markets Act | N/A | 10% |
The AI Act's Tier 1 fines exceed GDPR's maximum, making it the second-highest penalty regime in EU digital regulation after the Digital Markets Act.
The GDPR enforcement pattern
When the GDPR took effect in May 2018, there was a similar period of uncertainty. Would regulators actually enforce? The answer came gradually, then all at once:
- 2018-2019: Mostly warnings and small fines while companies scrambled to comply
- 2020: France fined Google €100 million over consent practices.
- 2021-2023: Amazon received a €746 million fine (Luxembourg, 2021). Italy fined Clearview AI €20 million (2022). Meta was fined €1.2 billion (Ireland, 2023). Billions in total fines across Europe.
The AI Act is likely to follow a similar curve. The early period after August 2026 will see regulatory guidance and smaller actions. But once the machinery is running, the fines will be substantial — and the AI Act's maximums are higher than the GDPR's.
What to do now
The August 2, 2026 deadline is less than five months away. If you have not started compliance work:
- Classify your AI systems. Determine whether they fall under the high-risk category. If they do, you have significant documentation and compliance obligations. Annexa's free risk triage can classify your system in minutes.
- Check for prohibited practices. If any of your AI systems fall under Article 5 prohibitions, stop using them immediately. This obligation is already enforceable.
- Start technical documentation. Annex IV documentation is extensive and cannot be produced overnight. Begin now.
- Register high-risk systems. The EU database for high-risk AI systems will require registration before market placement.
- Establish a quality management system. Article 17 requires a documented QMS covering risk management, data governance, post-market monitoring, and incident reporting.
- Budget for compliance. Whether you use tools, consultants, or internal resources, compliance requires investment. The cost of compliance is a fraction of the cost of a fine.
The companies that will fare best are the ones that start now, document their good-faith efforts, and build compliance into their development process — not the ones that gamble on regulators being slow.