If your company is based in the United States, the United Kingdom, Israel, or anywhere else outside the EU, you might assume the EU AI Act is not your problem. You would be wrong.

The EU AI Act has extraterritorial scope. Like the GDPR before it, it reaches beyond EU borders — and in some ways, it reaches further.

Who exactly is covered

Article 2 of the AI Act defines its territorial scope. Three categories capture non-EU companies:

  1. Providers placing AI systems on the EU market or putting them into service in the EU — regardless of where the provider is established. If you sell, offer, or make an AI product available to EU customers, you are covered.

  2. Deployers established in the EU. If a US company's Berlin office uses an AI system internally, that office is a deployer in the Union and the Act applies to that use, regardless of where the system was built.

  3. Third-country providers and deployers whose AI system output is used in the Union. This is the broadest trigger. Even if you never directly sell into the EU, if the output of your AI system ends up being used there, the Act applies.

That third category is remarkably wide. A US company running an AI-powered analytics platform could be covered if an EU-based client uses the results. An Israeli hiring tool whose assessments are applied to candidates in the EU is covered. A UK fraud detection system whose scores are used by an EU bank is covered.
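The three triggers above can be sketched as a simple triage check. This is an illustrative simplification of Article 2, not legal advice; the function name and parameters are hypothetical.

```python
# Hypothetical sketch of the EU AI Act's three Article 2 territorial triggers.
# Illustrative only -- the Regulation's actual scope analysis is more nuanced.

def ai_act_applies(
    places_on_eu_market: bool,   # provider sells, offers, or puts the system into service in the EU
    deployer_in_eu: bool,        # deployer is established or located in the EU
    output_used_in_eu: bool,     # system output is used in the Union (Art. 2(1)(c))
) -> bool:
    """Return True if any one of the three Article 2 triggers is met."""
    return places_on_eu_market or deployer_in_eu or output_used_in_eu

# Example: a US analytics platform with no EU sales, whose results an EU client uses
print(ai_act_applies(False, False, True))  # True -- the third trigger alone suffices
```

The point the sketch makes is that the triggers are disjunctive: any single one brings a non-EU company into scope.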

Broader than the GDPR

Companies familiar with GDPR's extraterritorial reach might assume the AI Act works similarly. It does, but with a lower threshold.

GDPR applies to non-EU companies when they actively target EU individuals — offering goods or services to them, or monitoring their behavior. There is an intent requirement.

The AI Act's Article 2(1)(c) triggers when AI system output is used in the Union. The word "used" sets a lower bar than "targeting." A company whose AI output reaches the EU through a chain of intermediaries — without the company ever intending to serve the EU market — could theoretically be caught.

Aspect                  | GDPR                                                        | EU AI Act
Trigger                 | Actively targeting or monitoring EU individuals             | AI system output is "used" in the Union
Intent required         | Yes: must offer goods/services to or monitor EU residents   | Debatable: Recital 22 suggests intent, but the text is broader
Max fines               | 4% of global turnover or €20M                               | 7% of global turnover or €35M
Representative required | Yes (Article 27)                                            | Yes (Articles 22 and 54)

The fines are also higher. While GDPR caps at 4% of global annual turnover, the AI Act goes up to 7% or €35 million for the most serious violations (prohibited practices). Other violations can trigger fines of up to 3% of global turnover or €15 million.
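Under both regimes the cap is the higher of the fixed amount and the turnover percentage, so the gap widens with company size. A quick worked comparison, using the figures above (the helper function and the €2bn turnover are illustrative):

```python
# Maximum-fine caps under GDPR and the EU AI Act: the higher of a fixed
# amount and a share of global annual turnover. Percentages and floors are
# taken from the regulations; the example turnover is hypothetical.

def max_fine(turnover_eur: float, pct: float, floor_eur: float) -> float:
    """Cap is whichever is higher: pct of turnover or the fixed floor."""
    return max(turnover_eur * pct, floor_eur)

turnover = 2_000_000_000  # EUR 2bn global annual turnover (example company)

gdpr_cap = max_fine(turnover, 0.04, 20_000_000)    # 4% or EUR 20M
ai_act_cap = max_fine(turnover, 0.07, 35_000_000)  # 7% or EUR 35M (prohibited practices)
print(gdpr_cap, ai_act_cap)  # 80000000.0 140000000.0
```

For this hypothetical company the worst-case AI Act exposure (€140M) is 75% higher than the worst-case GDPR exposure (€80M); for small companies below the turnover thresholds, the fixed floors dominate instead.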

The authorized representative requirement

Non-EU providers cannot simply comply from afar. The AI Act requires appointing an authorized representative established in the EU.

For high-risk AI systems (Article 22): Before making a high-risk AI system available on the EU market, non-EU providers must appoint an authorized representative who can verify technical documentation, keep records for 10 years, provide information to regulators, and cooperate with market surveillance authorities. EU authorities can address the representative instead of, or in addition to, the provider.

For general-purpose AI models (Article 54): Non-EU providers of GPAI models must also appoint an EU-based authorized representative. This obligation has been in effect since August 2, 2025. The representative must verify technical documentation (Annex XI), maintain records, and cooperate with the AI Office.

The representative is not just a mailbox. If the provider acts contrary to the Regulation, the representative must terminate the mandate and inform the relevant authority. This creates real accountability.

What the big players are doing

The major US AI companies are already responding:

  • Microsoft has dedicated compliance working groups combining AI governance, engineering, legal, and public policy teams. They have published EU AI Act compliance information on their Trust Center.
  • OpenAI, Google, Amazon, and Anthropic all signed the EU's GPAI Code of Practice in August 2025, with 26 signatories total.
  • Meta notably refused to sign the GPAI Code of Practice in July 2025, with their Chief Global Affairs Officer stating it "introduces legal uncertainties for model developers." Meta must now demonstrate compliance through alternative means, which exposes it to greater regulatory scrutiny.
  • xAI signed only the Safety and Security chapter, indicating it will use alternative methods for transparency and copyright obligations.

These companies are spending significant resources on compliance. Smaller non-EU companies serving EU customers need to assess their obligations too — the Act does not only apply to tech giants.

Exemptions worth knowing about

Not every AI system from a non-EU company triggers the full weight of the Act:

  • Military, defense, and national security uses are fully exempt
  • Open-source AI models are exempt from most obligations unless they are high-risk systems or pose systemic risk (though they must still comply with EU copyright law)
  • Scientific research and development before market placement is exempt (but real-world testing is not)
  • Personal, non-professional use by individuals is exempt
  • SMEs and startups benefit from proportional penalties and access to regulatory sandboxes for testing

The Digital Omnibus wrinkle

The European Commission's Digital Omnibus proposal (November 2025) could change the timeline. If adopted, it would push the high-risk AI system compliance deadline from August 2, 2026 to December 2, 2027 for Annex III systems, and to August 2, 2028 for Annex I systems.

However, the Omnibus is still working through the legislative process. Companies that assume the extension will pass and delay compliance are taking a significant gamble. If the proposal stalls or is modified, the August 2026 deadline stands.

The enforcement picture

As of March 2026, no public penalties have been announced under the AI Act. But the enforcement infrastructure is in place:

  • The European AI Office holds exclusive authority over GPAI model obligations and has been operational since August 2025
  • It is conducting informal compliance audits of technical documentation from major providers
  • From August 2, 2026, the Commission gains full power to impose fines
  • National market surveillance authorities handle enforcement for other AI system obligations

The pattern should look familiar. The GDPR had a similar quiet period before enforcement ramped up — and then came the billion-euro fines.

What to do if you are outside the EU

  1. Determine if the Act applies to you. If your AI system is used by anyone in the EU, or if its outputs reach the EU, you are likely covered. Annexa's free risk triage can classify your AI system and clarify your obligations in minutes.

  2. Identify your role. Are you a provider (you built the AI system), a deployer (you use it), or both? Your obligations differ significantly based on your role.

  3. Appoint an authorized representative if you are a provider of high-risk AI systems or GPAI models. This is not optional — it is a prerequisite for market access.

  4. Classify your AI systems by risk level. Is your AI system high-risk? The answer determines whether you need full Annex IV technical documentation, conformity assessment, and post-market monitoring.

  5. Start technical documentation now. If your system is high-risk, Annex IV documentation is extensive. Waiting until the deadline is not a viable strategy — this is months of work for complex systems.

  6. Watch the Digital Omnibus. It may buy you time, but do not count on it. Begin compliance work now and adjust the timeline if the extension passes.

The GDPR became a de facto global standard because ignoring it was not an option for any company serving EU customers. The EU AI Act is following the same trajectory — with higher fines and broader reach. The August 2026 deadline does not care where your headquarters are.