This is the single most important question under the EU AI Act: is your AI system high-risk?
If the answer is yes, you must have a full compliance package — risk management, technical documentation, conformity assessment, human oversight, post-market monitoring — ready by August 2, 2026. If the answer is no, your obligations are much lighter.
The problem is that the classification framework is not always straightforward. The European Commission was legally required to publish practical guidelines on high-risk classification by February 2, 2026. It missed that deadline. Final guidance is now expected in March or April 2026, leaving companies to navigate the regulation on their own.
Here is how the classification actually works.
Two pathways to high-risk
Article 6 of the AI Act defines two independent pathways. Your system is high-risk if it falls into either one.
Pathway 1: Product safety (Annex I)
If your AI system is a safety component of a product — or is itself a product — covered by existing EU product safety legislation (medical devices, machinery, toys, vehicles, lifts, aviation), and that product requires third-party conformity assessment, the AI system is automatically high-risk.
This pathway applies starting August 2, 2027 (one year later than Annex III systems).
Pathway 2: Standalone high-risk (Annex III)
This is the pathway most AI companies need to worry about now. Annex III lists eight domains. If your AI system is intended for use in any of these areas, it is presumed high-risk:
| Domain | Examples |
|---|---|
| 1. Biometrics | Remote biometric identification, biometric categorization, emotion recognition |
| 2. Critical infrastructure | AI managing energy grids, water supply, transport systems, digital networks |
| 3. Education | Admissions decisions, grading, proctoring, learning assessment |
| 4. Employment | CV screening, candidate ranking, task allocation, performance monitoring, termination decisions |
| 5. Essential services | Credit scoring, insurance pricing, social benefit eligibility, emergency call triage |
| 6. Law enforcement | Evidence reliability assessment, risk profiling, recidivism prediction |
| 7. Migration and borders | Asylum application assessment, risk assessment at borders, irregular migration detection |
| 8. Justice and democracy | AI assisting judges in interpreting law, tools intended to influence elections |
The classification is based on intended purpose, not on technical capability. The same NLP model can be minimal-risk or high-risk depending on what you use it for.
The exception: Article 6(3)
Not every AI system that touches an Annex III domain is automatically high-risk. Article 6(3) provides an escape route: a system is not high-risk if it does not pose a significant risk of harm and does not materially influence decision-making outcomes.
To qualify, your system must meet at least one of these conditions:
- Narrow procedural task — it sorts documents, converts formats, or organizes data without evaluating individuals (e.g., extracting contact details from CVs without ranking candidates)
- Improves a completed human activity — it refines or polishes work already done by a person (e.g., rewriting text for clarity or tone)
- Detects patterns for human review — it flags anomalies without taking action on its own (e.g., flagging grading outliers for a teacher to review)
- Preparatory task — it handles a supporting function like translation or file management for an assessment that a human will make
But there is a hard override: if your system performs profiling of natural persons — meaning it evaluates aspects of an individual like work performance, economic situation, health, preferences, reliability, behavior, or location — it is always high-risk, regardless of whether it meets the conditions above.
This is the line that trips up most companies. A CV parser that extracts graduation dates is not high-risk. The moment it ranks candidates or recommends who to interview, it is profiling, and it is high-risk.
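To make the decision flow concrete, here is a minimal Python sketch of the logic described above. The names (`SystemProfile`, `is_high_risk`, the domain and condition labels) are my own shorthand, not anything defined in the Act, and a real classification is a documented legal assessment, not a function call.

```python
from dataclasses import dataclass, field

# Shorthand labels for the eight Annex III domains and the four Article 6(3)
# conditions discussed above; these names are illustrative, not official.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_borders",
    "justice_democracy",
}
ARTICLE_6_3_CONDITIONS = {
    "narrow_procedural_task", "improves_completed_human_activity",
    "detects_patterns_for_human_review", "preparatory_task",
}

@dataclass
class SystemProfile:
    intended_purpose_domains: set[str]       # Annex III domains the intended purpose touches
    profiles_natural_persons: bool           # evaluates individuals (performance, behavior, reliability...)
    exception_conditions_met: set[str] = field(default_factory=set)
    annex_i_safety_component: bool = False   # Pathway 1: safety component requiring third-party assessment

def is_high_risk(s: SystemProfile) -> bool:
    # Pathway 1: Annex I product safety (applies from August 2, 2027).
    if s.annex_i_safety_component:
        return True
    # Pathway 2: outside every Annex III domain means no presumption of high risk.
    if not (s.intended_purpose_domains & ANNEX_III_DOMAINS):
        return False
    # Profiling override: always high-risk, regardless of the conditions below.
    if s.profiles_natural_persons:
        return True
    # Article 6(3): at least one condition rebuts the presumption
    # (and the claim must be documented under Article 6(4)).
    return not (s.exception_conditions_met & ARTICLE_6_3_CONDITIONS)

# The CV-parser example from above:
cv_parser = SystemProfile({"employment"}, profiles_natural_persons=False,
                          exception_conditions_met={"narrow_procedural_task"})
candidate_ranker = SystemProfile({"employment"}, profiles_natural_persons=True)
assert not is_high_risk(cv_parser)      # extraction only: not high-risk
assert is_high_risk(candidate_ranker)   # ranking people is profiling: high-risk
```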
You must document the exception
If you claim the Article 6(3) exception, you are required under Article 6(4) to prepare a written assessment explaining why your system qualifies — before placing it on the market. National authorities can request this documentation at any time. You must also register the system under Article 49(2).
Claiming the exception without documentation is itself a compliance violation.
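If it helps to make the obligation tangible, here is one possible shape for that written assessment as a simple record. The field names are my suggestion only; the Act prescribes the substance of the assessment, not a template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article6_3Assessment:
    """Illustrative record of an Article 6(4) written assessment; field names are not official."""
    system_name: str
    intended_purpose: str
    annex_iii_domain: str            # which Annex III area the system touches
    condition_claimed: str           # e.g. "narrow procedural task"
    reasoning: str                   # why the condition applies to this system
    profiling_analysis: str          # why the system does not profile natural persons
    assessed_by: str
    assessment_date: date
    registered_under_art_49_2: bool  # registration is still required when claiming the exception
```

Whatever form you use, the point is that the reasoning exists in writing, is dated, and can be handed to a national authority on request.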
Real examples
Clearly high-risk:
- A hiring platform that scores and ranks job applicants
- A credit scoring algorithm used by a bank for loan decisions
- A medical imaging AI that recommends diagnoses
- An AI system that evaluates student exam performance
- A fraud detection system that freezes accounts automatically
- An AI tool assisting judges in legal research and case analysis
Not high-risk (but often misunderstood):
- An AI transcription tool that converts doctor voice notes to text (narrow procedural task)
- A spam filter (narrow procedural task, no profiling)
- A CV parser that extracts contact details without ranking (narrow procedural task)
- An internal analytics dashboard informing business strategy without individual-level decisions
- An AI grammar checker (improves completed human activity)
Grey areas that need careful analysis:
- A chatbot providing customer service — not high-risk unless it makes decisions about access to essential services
- An AI flagging drug interactions — depends on whether it merely alerts a pharmacist or actually blocks prescriptions
- A maintenance prediction system — depends on whether it directly controls safety-critical functions
Common classification mistakes
Having worked with the regulation extensively, I see these errors most often:
- Classifying too late. Classification should happen during system design, not after development. Retrofitting compliance into a finished system is far more expensive and may require architectural changes.
- Underestimating what counts as profiling. Any evaluation of individual characteristics — performance scoring, behavioral prediction, personality assessment — triggers the profiling override. This catches more systems than people expect.
- Assuming geography protects you. The AI Act applies based on where the system is used, not where it was built. A US company selling an AI hiring tool to EU companies is in scope.
- Failing to document the exception. Even if your system genuinely qualifies under Article 6(3), you need the written assessment before market placement. No documentation means non-compliance.
- Treating classification as one-time. Classification must be reassessed when intended purpose changes, when significant modifications are made, or when the Commission updates Annex III (which it can do via delegated acts under Article 7).
- Ignoring deployer-side reclassification. You may supply a system as non-high-risk, but if a customer uses it for an Annex III purpose, someone needs to assume provider obligations.
What about the Digital Omnibus?
The Digital Omnibus proposal could delay enforcement of high-risk obligations if harmonized standards are not ready by August 2026. But the classification itself does not change — your system is either high-risk or it is not. The Omnibus only affects when you must be fully compliant, not whether you need to comply.
One notable change in the proposal: providers claiming the Article 6(3) exception would no longer need to register those systems in the EU database. The EDPB has advised against this change, arguing it would undermine accountability.
Conformity assessment: self-assessment or third-party?
Good news for most companies: the default for Annex III systems is self-assessment (internal control under Annex VI). You conduct your own assessment, sign an EU declaration of conformity, and apply the CE marking.
Third-party assessment by a notified body is required only for the biometric systems in Annex III, point 1, and only where you have not applied (or have only partially applied) the relevant harmonized standards or common specifications. For the other Annex III domains (points 2 to 8), internal control applies. Since harmonized standards are still being developed (the standardization bodies missed their 2025 deadlines), biometric providers should expect a notified-body assessment in practice.
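As a rough sketch of that routing (the function and parameter names are mine, and the Annex I pathway is out of scope here because it follows the sectoral product legislation):

```python
def conformity_route(annex_iii_point: int, standards_fully_applied: bool) -> str:
    """Simplified Article 43 routing for Annex III systems; illustrative only."""
    if annex_iii_point == 1 and not standards_fully_applied:
        # Biometrics without fully applied harmonized standards or common
        # specifications: Annex VII procedure with a notified body.
        return "third-party assessment by a notified body (Annex VII)"
    # Points 2 to 8 use internal control; point 1 with standards fully
    # applied may choose internal control as well.
    return "internal control (Annex VI): self-assessment, EU declaration of conformity, CE marking"
```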
What to do now
- Classify your system. Go through the Annex III domains and determine whether your intended purpose falls within scope. If you are not sure, Annexa's free risk triage tool can walk you through the classification in minutes — no signup required.
- If you claim Article 6(3), document it. Write the assessment now. Explain which condition your system meets and why profiling does not apply. This documentation must exist before you place the system on the market.
- If you are high-risk, start your Annex IV documentation. The technical documentation requirements cover seven sections: system description, development methodology, data governance, testing, performance metrics, human oversight, and monitoring. Starting now gives you six months — which is tight but achievable. A starter checklist sketch follows this list.
- Do not wait for the Commission's guidelines. They are late and may not arrive until April 2026. The regulation text and Annex III are clear enough to classify most systems today.
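For the documentation work, even a checklist as simple as the one below can track the gaps. The keys follow this post's grouping of the Annex IV requirements, not the official headings.

```python
# Tracks which Annex IV areas still lack evidence; keys mirror the grouping
# used above, not the official Annex IV headings.
ANNEX_IV_CHECKLIST = {
    "system_description": False,
    "development_methodology": False,
    "data_governance": False,
    "testing_and_validation": False,
    "performance_metrics": False,
    "human_oversight": False,
    "post_market_monitoring": False,
}

def documentation_gaps(checklist: dict[str, bool]) -> list[str]:
    """Return the sections that are not yet covered by evidence."""
    return [section for section, done in checklist.items() if not done]

print(documentation_gaps(ANNEX_IV_CHECKLIST))  # all seven, until work starts
```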
Spain's AESIA has already published 16 guidance documents covering conformity assessment, risk management, data governance, and technical documentation. If you need practical guidance while waiting for the Commission, start there.