The EU AI Act does not treat every organisation in the AI value chain the same way. Your obligations depend on your role — specifically, whether you are a provider or a deployer of an AI system. Getting this distinction wrong means either doing far more compliance work than necessary, or far too little.
This post explains the two roles, what each one must do, and the scenarios where the line between them gets blurry.
Definitions: what the regulation actually says
Article 3 of Regulation (EU) 2024/1689 defines both roles:
Provider (Article 3(3)): A natural or legal person that develops an AI system or a general-purpose AI model, or that has an AI system or general-purpose AI model developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Deployer (Article 3(4)): A natural or legal person that uses an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
In plain terms: if you built it (or had it built and put your name on it), you are a provider. If you use it in a professional context, you are a deployer.
Why the distinction matters
The obligations are dramatically different. Providers carry the bulk of the compliance burden for high-risk AI systems. Deployers have meaningful responsibilities too, but they are lighter and focused on proper use rather than system design.
If you are both — for instance, you built an AI system and also use it internally — you carry both sets of obligations.
Provider obligations (Articles 16–22)
Providers of high-risk AI systems must meet all of the following before placing the system on the EU market:
Design and development requirements:
- Implement a risk management system that runs throughout the system's lifecycle (Article 9)
- Meet data governance standards for training, validation, and testing data (Article 10)
- Produce detailed technical documentation per Annex IV before the system is placed on the market (Article 11)
- Design the system for automatic logging of relevant events (Article 12)
- Ensure transparency — provide clear instructions for use to deployers (Article 13)
- Build in human oversight mechanisms (Article 14)
- Meet standards for accuracy, robustness, and cybersecurity (Article 15)
Quality and administrative requirements:
- Implement a quality management system (Article 17)
- Conduct a conformity assessment (Article 43) — most Annex III systems can use internal control (Annex VI), but certain biometric systems require third-party assessment
- Draw up an EU declaration of conformity (Article 47)
- Affix the CE marking (Article 48)
- Register the system in the EU database before market placement (Article 49)
Ongoing obligations:
- Operate a post-market monitoring system (Article 72)
- Report serious incidents to national authorities (Article 73)
- Take corrective action if the system no longer complies (Article 20)
This is a substantial list. The technical documentation alone — covering system architecture, training data, testing results, risk management, and more — is typically the most time-consuming requirement. Our compliance checklist breaks down each item.
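To keep track internally, some teams find it useful to hold this list in machine-readable form. Below is a minimal Python sketch; the article numbers mirror the lists above, while the structure, the status handling, and the `outstanding` helper are our own illustration, not anything the Act prescribes.

```python
# Minimal compliance-tracking sketch. The obligations and article numbers
# mirror the lists above; the structure itself is illustrative only.
PROVIDER_OBLIGATIONS = {
    "Article 9": "Risk management system across the lifecycle",
    "Article 10": "Data governance for training, validation, and testing data",
    "Article 11": "Technical documentation per Annex IV",
    "Article 12": "Automatic logging of relevant events",
    "Article 13": "Transparency and instructions for use",
    "Article 14": "Human oversight mechanisms",
    "Article 15": "Accuracy, robustness, and cybersecurity",
    "Article 17": "Quality management system",
    "Article 43": "Conformity assessment",
    "Article 47": "EU declaration of conformity",
    "Article 48": "CE marking",
    "Article 49": "Registration in the EU database",
    "Article 72": "Post-market monitoring system",
    "Article 73": "Serious incident reporting",
    "Article 20": "Corrective action for non-compliance",
}

def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked complete."""
    return [f"{art}: {desc}" for art, desc in PROVIDER_OBLIGATIONS.items()
            if art not in completed]

print(outstanding({"Article 9", "Article 10"}))
```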
Deployer obligations (Article 26)
Deployers of high-risk AI systems have a shorter but still meaningful set of requirements:
- Use the system according to the provider's instructions for use. This sounds obvious, but it means reading and following the documentation that comes with the system.
- Assign human oversight to natural persons who have the necessary competence, training, and authority to carry out that role effectively.
- Ensure input data is relevant and sufficiently representative for the system's intended purpose, to the extent the deployer exercises control over the input data.
- Monitor the system's operation based on the instructions for use. If you identify a risk or serious incident, inform the provider and the relevant national authority without delay.
- Keep logs automatically generated by the system for at least six months, unless a longer period is required by other applicable law (a retention sketch follows this list).
- Conduct a fundamental rights impact assessment (FRIA) before putting the system into use, if you are a body governed by public law, a private entity providing public services, or a deployer of certain Annex III systems such as credit scoring or risk assessment and pricing in life and health insurance (Article 27).
- Inform affected individuals that they are subject to a high-risk AI system, where applicable.
- Cooperate with national authorities and provide access to logs and documentation when requested.
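The log-retention item lends itself to a mechanical check. Below is a minimal sketch, assuming each log record carries a creation timestamp; the six-month floor comes from Article 26, while the 183-day approximation, the function name, and the `extra_retention` parameter are our own.

```python
from datetime import datetime, timedelta, timezone

# Article 26 requires deployers to keep automatically generated logs for at
# least six months, unless other law requires longer. Six months is
# approximated as 183 days here; this is an illustrative sketch, not a
# legal interpretation.
MINIMUM_RETENTION = timedelta(days=183)

def may_delete(log_created_at: datetime, now: datetime | None = None,
               extra_retention: timedelta = timedelta(0)) -> bool:
    """True only if the log has been kept past the six-month floor
    plus any longer period required by other applicable law."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= MINIMUM_RETENTION + extra_retention
```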
The key difference: deployers are responsible for proper use, while providers are responsible for proper design. A deployer does not need to write technical documentation or conduct a conformity assessment — the provider already did that.
When a deployer becomes a provider
This is where it gets complicated. Article 25 specifies that a deployer is reclassified as a provider — and takes on the full provider obligations — in any of these situations:
- You put your name or trademark on a high-risk AI system already on the market. Even if someone else built it, branding it as yours makes you the provider.
- You make a substantial modification to a high-risk AI system. Fine-tuning a model on your own data, changing the intended purpose, or significantly altering the system's behaviour can trigger this.
- You modify the intended purpose of an AI system so that it becomes high-risk when it was not before. For example, using a general-purpose chatbot for employment screening changes its risk classification.
This has major implications for companies using third-party AI services. If you take an API from a model provider, wrap it in your own product, and sell it under your brand for a high-risk use case — you are likely the provider, not a deployer. The fact that someone else trained the model does not shift the compliance burden.
Important nuances on "substantial modification":
- Fine-tuning can trigger reclassification, but there is a compute threshold: if the fine-tuning uses more than one-third of the original model's training compute, the modifier becomes the provider of a new model; below that threshold, you likely remain a deployer (see the arithmetic sketch after this list).
- RAG (Retrieval-Augmented Generation) does not constitute a substantial modification. Adding domain-specific knowledge via retrieval without altering the model architecture or weights keeps you as a deployer.
- The percentage of code changed is irrelevant — what matters is the impact on functionality and compliance posture.
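The one-third threshold reduces to simple arithmetic once you have compute estimates. A minimal sketch follows, assuming you can estimate both the original model's training compute and your fine-tuning compute in FLOPs; the function name and example figures are hypothetical.

```python
def becomes_provider_by_compute(finetune_flops: float,
                                original_training_flops: float) -> bool:
    """Apply the one-third-of-training-compute threshold described above.

    Returns True when fine-tuning compute exceeds one third of the original
    model's training compute, i.e. when the modifier would be treated as the
    provider of a new model. Estimating FLOPs is itself non-trivial; this
    check is only the final arithmetic step.
    """
    return finetune_flops > original_training_flops / 3

# Example: a 1e23-FLOP fine-tune of a 1e24-FLOP model stays below the bar.
print(becomes_provider_by_compute(1e23, 1e24))  # False
```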
Real-world examples
Scenario 1: SaaS company using a third-party AI API
A fintech startup integrates an AI credit scoring model from a third-party vendor into its lending platform. The startup sells the lending platform under its own brand.
Role: The startup is the provider — it placed the system on the market under its own name for a high-risk use case (credit scoring falls under Annex III). The API vendor may also be a provider of a general-purpose AI model with its own separate obligations.
Scenario 2: Bank deploying a vendor's HR screening tool
A bank purchases an AI-powered CV screening tool from a vendor. The bank uses it as-is, following the vendor's instructions, for internal hiring.
Role: The bank is the deployer. The vendor is the provider. The bank must assign human oversight, ensure input data quality, keep logs, and conduct a fundamental rights impact assessment if it qualifies as a private entity providing public services under Article 27, a point worth confirming with counsel. But the bank does not need to produce Annex IV documentation: that is the vendor's job.
Scenario 3: Consultancy fine-tuning a model for a client
An AI consultancy takes an open-source model, fine-tunes it on a client's medical data, and delivers a diagnostic support tool under the client's brand.
Role: The client is the provider (their name is on it). But the consultancy may also have provider-like responsibilities depending on the contractual arrangement. This is a grey area — the safest approach is to define provider/deployer responsibilities explicitly in the contract.
What about general-purpose AI models?
The EU AI Act introduces a separate category for general-purpose AI (GPAI) model providers (Articles 51–56). Companies like OpenAI, Google, Meta, and Mistral fall here if they make foundation models available.
GPAI providers have their own obligations — primarily around technical documentation, transparency, and copyright compliance. But these are separate from the provider/deployer framework for high-risk AI systems.
If you build a high-risk application on top of a GPAI model, you are the provider of the high-risk system. The GPAI provider has its own obligations, but they do not substitute for yours.
Recent developments
On March 13, 2026, the Council of the EU agreed a negotiating position on the "Digital Omnibus" proposal, which could extend the deadline for high-risk AI system rules by up to 16 months — potentially pushing standalone high-risk system obligations to December 2027. However, this is a negotiating position, not final legislation. The European Parliament still needs to agree, and the timeline for reaching agreement is uncertain.
Until a formal amendment is adopted, the August 2, 2026 deadline stands. Companies should not pause their compliance efforts based on a proposal that may or may not become law.
How to determine your role
Ask these questions in order; a code sketch of the same flow follows the list:
1. Did you develop the AI system, or have it developed? Yes → you are likely a provider.
2. Is the system placed on the market or put into service under your name or brand? Yes → you are a provider, even if someone else built it.
3. Did you substantially modify a high-risk AI system? Yes → you are reclassified as a provider (Article 25).
4. Did you change the intended purpose of an AI system so that it becomes high-risk? Yes → you are reclassified as a provider (Article 25).
5. None of the above, and you simply use the system under your authority as supplied by the vendor? Then you are a deployer.
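The same flow can be expressed as a short function. This is a minimal sketch: the boolean flags and return labels are our own shorthand, and any real classification still needs legal review of each answer.

```python
def classify_role(developed: bool, own_brand: bool,
                  substantial_modification: bool,
                  repurposed_to_high_risk: bool,
                  uses_under_authority: bool) -> str:
    """Walk the questions above in order and return the likely role."""
    if developed or own_brand:
        return "provider"
    if substantial_modification or repurposed_to_high_risk:
        return "provider (reclassified under Article 25)"
    if uses_under_authority:
        return "deployer"
    return "out of scope (e.g. personal, non-professional use)"

# Example: a bank using a vendor tool as-is, under its own authority.
print(classify_role(False, False, False, False, True))  # deployer
```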
If you are unsure whether your AI system is high-risk in the first place, Annexa's free risk triage can classify it in minutes — and help you determine whether you are looking at provider or deployer obligations.
What to do now
Whether you are a provider or a deployer, the time to act is now. The compliance checklist covers every provider obligation step by step. Deployers should start by auditing which AI systems they use, confirming that their vendors are handling provider obligations, and setting up human oversight and logging processes.
The companies that get this right will know exactly which hat they are wearing — and prepare accordingly.