
EU AI Act: What the First Global AI Law Means for You

The EU AI Act is the world's first comprehensive AI regulation, with extraterritorial reach and penalties up to EUR 35M. Here is what practitioners need to know now.

Starkguard Team


The EU AI Act (Regulation 2024/1689) is not a proposal. It is law. Published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024, it is the world's first comprehensive legal framework governing artificial intelligence. If your organization develops, deploys, or imports AI systems that touch the EU market — even if you are headquartered in San Francisco, Singapore, or São Paulo — this regulation applies to you.

We have watched organizations cycle through denial, confusion, and panic over the past two years. The ones that acted early are in good shape. The ones still treating this as a future concern are running out of runway.

The Risk-Based Architecture

The EU AI Act does not regulate all AI the same way. It classifies AI systems into four risk tiers, and your obligations scale with the tier. This is its defining structural decision — and it is more nuanced than most summaries suggest.

Unacceptable Risk: Hard Prohibitions

Article 5 bans specific AI practices outright. These include social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions requiring prior judicial or independent administrative authorization), AI systems that exploit vulnerabilities based on age, disability, or social or economic situation, and systems using subliminal or manipulative techniques that cause significant harm.

These prohibitions took effect on February 2, 2025. There is no grace period, no certification pathway, no exemption process. If your system falls here, shut it down.

High Risk: The Compliance Core

High-risk AI systems bear the heaviest compliance burden. They fall into two groups. The first are systems embedded in products already regulated under the EU safety legislation listed in Annex I — medical devices, machinery, vehicles, aviation systems. Annex III covers standalone high-risk uses: biometric identification, critical infrastructure management, educational access and assessment, employment and worker management, essential services (credit scoring, insurance, public benefits), law enforcement, migration and border control, and administration of justice.

For these systems, Articles 8 through 15 impose requirements covering risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and information to deployers (Article 13), human oversight (Article 14), and accuracy, robustness, and cybersecurity (Article 15). Full compliance for Annex III systems is required by August 2, 2026. Annex I product-embedded systems follow by August 2, 2027.
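As a rough illustration of how teams operationalize this list, a hypothetical compliance tracker might record each Article 9 through 15 obligation as a checklist item per system. The article numbers and topics below come from the Act; the data structure, field names, and example status are our own sketch, not an official schema.

```python
# Hypothetical checklist for tracking high-risk obligations (Articles 9-15).
# Article numbers and topics come from the Act; the structure is illustrative only.
HIGH_RISK_REQUIREMENTS = {
    "Article 9": "Risk management system",
    "Article 10": "Data and data governance",
    "Article 11": "Technical documentation",
    "Article 12": "Record-keeping",
    "Article 13": "Transparency and information to deployers",
    "Article 14": "Human oversight",
    "Article 15": "Accuracy, robustness and cybersecurity",
}

def open_items(status: dict[str, bool]) -> list[str]:
    """Return the requirements not yet marked as satisfied for one system."""
    return [
        f"{article}: {topic}"
        for article, topic in HIGH_RISK_REQUIREMENTS.items()
        if not status.get(article, False)
    ]

# Example: a credit-scoring model with only documentation and logging in place.
status = {"Article 11": True, "Article 12": True}
for item in open_items(status):
    print("TODO", item)
```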

Limited Risk: Transparency Obligations

AI systems that interact directly with people — chatbots, emotion recognition systems, deepfake generators — carry disclosure duties: users must be told they are engaging with a machine, and AI-generated or manipulated content must be labeled as such. These transparency rules are straightforward but non-negotiable.

Minimal Risk: Unregulated

Spam filters, AI-enhanced video games, inventory management systems — the vast majority of AI applications fall here and face no specific obligations under the Act. The regulation is deliberately permissive at this tier to avoid stifling innovation.
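To make the tiering concrete, here is a minimal sketch of how an internal AI inventory might tag systems by tier. The tier names track the Act's structure; the example systems and their assignments are hypothetical illustrations, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific obligations)"

# Illustrative inventory: each entry pairs a system with a tier assigned
# during a hypothetical classification review, not an automated legal test.
inventory = {
    "social-scoring-pilot": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,      # employment use, Annex III
    "support-chatbot": RiskTier.LIMITED,      # must disclose it is AI
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.value}")
```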

The Dates That Matter

The phased enforcement timeline is deliberate, giving organizations time to prepare for progressively more complex requirements:

Date           What Applies
Feb 2, 2025    Prohibited AI practices (Article 5)
Aug 2, 2025    GPAI model obligations, governance rules, penalties framework
Aug 2, 2026    High-risk AI system requirements (Annex III), enforcement powers
Aug 2, 2027    High-risk AI in regulated products (Annex I)

The August 2025 milestone introduced obligations for general-purpose AI (GPAI) model providers — including documentation, transparency, copyright compliance, and systemic risk management for models with high-impact capabilities. If you provide foundation models or large language models in the EU market, these requirements are already in force.
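For teams tracking readiness, the phased dates reduce to a simple lookup. The sketch below uses the application dates from the table above; the milestone labels are our shorthand.

```python
from datetime import date

# Milestones from the Act's phased timeline (labels are our shorthand).
MILESTONES = [
    (date(2025, 2, 2), "Prohibited practices (Article 5)"),
    (date(2025, 8, 2), "GPAI model obligations, governance, penalties framework"),
    (date(2026, 8, 2), "High-risk requirements for Annex III systems"),
    (date(2027, 8, 2), "High-risk requirements for Annex I regulated products"),
]

def obligations_in_force(as_of: date) -> list[str]:
    """Return every obligation set whose application date has passed."""
    return [label for start, label in MILESTONES if as_of >= start]

print(obligations_in_force(date(2026, 1, 1)))
# -> prohibited practices and GPAI obligations, but not yet the high-risk tiers
```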

Extraterritorial Reach: The GDPR Playbook

Article 2 establishes scope that extends well beyond the EU's borders. The Act applies to providers placing AI systems on the EU market regardless of where they are established, deployers located within the EU, and providers and deployers located outside the EU where the output of their AI system is used in the EU.

This mirrors the GDPR approach, and it works. Non-EU companies that serve EU customers, process EU residents' data, or deploy AI systems whose outputs affect EU-based individuals must comply. The practical impact: if you have any EU market exposure, treat this regulation as applicable.
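As a rough first-pass screen (not legal advice), the Article 2 triggers can be read as three OR-ed conditions. The function below is a deliberate simplification that ignores the Act's carve-outs (for example, military and defence, scientific research, purely personal non-professional use); the function name and parameters are our own.

```python
def likely_in_scope(places_on_eu_market: bool,
                    deployer_established_in_eu: bool,
                    output_used_in_eu: bool) -> bool:
    """Simplified Article 2 screen: any one trigger brings a system into scope.

    Ignores the Act's carve-outs, so treat a True result as
    "investigate further", not a final determination.
    """
    return places_on_eu_market or deployer_established_in_eu or output_used_in_eu

# A US SaaS provider whose model outputs reach EU users:
print(likely_in_scope(False, False, True))  # True: in scope despite no EU entity
```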

Penalties That Demand Attention

The penalty structure under Article 99 is tiered to match the risk classification:

  • Prohibited practices: Up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher
  • Other obligations: Up to EUR 15 million or 3% of global turnover
  • Supplying incorrect information: Up to EUR 7.5 million or 1% of global turnover

For context, 7% of global annual turnover would be approximately $27 billion for the largest tech companies. These are not nominal fines. They are designed to make non-compliance economically irrational, following the enforcement philosophy that made GDPR effective.

SMEs and startups receive proportionate caps, but the regulation explicitly states that penalties must be "effective, proportionate and dissuasive" (Article 99(1)).
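The "whichever is higher" mechanics are simple to express. The sketch below encodes the Article 99 ceilings from the list above; the turnover figure in the example is hypothetical.

```python
# Article 99 ceilings: the maximum fine is the higher of a fixed amount
# and a percentage of total worldwide annual turnover.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine_eur(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the Article 99 ceiling for the given tier and turnover."""
    fixed_cap, pct = PENALTY_TIERS[tier]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# Hypothetical company with EUR 2 billion turnover breaching a prohibition:
print(f"{max_fine_eur('prohibited_practices', 2_000_000_000):,.0f}")  # 140,000,000
```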

What This Means for Organizations Outside Europe

We work with US-based companies that initially dismissed the EU AI Act as irrelevant to their operations. Then they mapped their AI systems and discovered that customer-facing models served EU users, SaaS products embedded AI features used by EU-based clients, and internal HR tools affected EU-based employees.

The Act's extraterritorial application is not theoretical. It is the same mechanism that forced global privacy programs after GDPR. Organizations preparing for AI compliance broadly should treat EU AI Act readiness as a baseline, not an edge case.

The General-Purpose AI Dimension

Chapter V addresses GPAI models separately. All GPAI providers must maintain technical documentation, implement copyright compliance policies, and provide transparency information to downstream deployers. GPAI models classified as having "systemic risk" — currently defined as models trained with compute exceeding 10^25 FLOPs — face additional obligations including adversarial testing, incident reporting, and cybersecurity measures.

This means foundation model providers and companies fine-tuning or deploying large language models face a distinct compliance track that operates alongside the risk-tier framework.
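For a back-of-the-envelope check against the 10^25 FLOP presumption, practitioners often estimate dense transformer training compute as roughly 6 × parameters × training tokens. Both that approximation and the example model size below are assumptions for illustration; neither appears in the Act.

```python
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # training-compute presumption in the Act

def estimated_training_flop(params: float, training_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOP per parameter per token."""
    return 6.0 * params * training_tokens

# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.2e}", flop > SYSTEMIC_RISK_THRESHOLD_FLOP)
# ~6.3e24 -> below the 1e25 presumption threshold, on this rough estimate
```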

Connecting to Your Governance Program

The EU AI Act does not exist in isolation. It intersects with GDPR (for AI systems processing personal data), sector-specific regulations (medical device regulation, financial services directives), and voluntary frameworks like NIST AI RMF and ISO 42001.

Organizations in regulated industries like healthcare face layered requirements where the AI Act adds to — rather than replaces — existing compliance obligations. Our EU AI Act requirements guide provides a detailed mapping of obligations to operational controls.

The organizations managing this transition successfully are the ones that embedded risk assessment into their development lifecycle early. They classified their systems, identified their obligations, and started building compliance infrastructure before the deadlines arrived. For everyone else, the window is narrowing.


Map your AI systems to EU AI Act risk tiers and track your compliance posture. Start your free trial or request a demo to see the platform in action.

Starkguard Team

AI Governance Experts

Tags:
eu-ai-act
regulation
european-union
ai-compliance
