Framework Guide
10 min read

EU AI Act Requirements Guide: Deadlines, Risk Tiers & Fines

EU AI Act compliance guide covering the enforcement timeline, Annex III risk classifications, Article-by-article obligations, and penalties up to 7% turnover.

Starkguard Team

The EU AI Act Is Already Being Enforced. Here's What You're Late On.

If you're reading this in 2026, you've already missed the first enforcement deadline. Article 5's prohibited AI practices became enforceable on February 2, 2025. GPAI model obligations kicked in on August 2, 2025. The high-risk system requirements — the ones most organizations are scrambling to meet — land on August 2, 2026. That's months away, not years.

Regulation (EU) 2024/1689, published in the Official Journal of the European Union on July 12, 2024 and in force since August 1, 2024, is the world's first comprehensive AI-specific legislation. It carries real penalties: up to 35 million euros or 7% of global annual turnover for the worst violations. And its extraterritorial scope means your US or UK headquarters doesn't shield you.

We've spent the past year helping organizations parse this regulation. Here's what compliance officers actually need to know — stripped of the academic commentary and focused on operational impact.

The Enforcement Timeline: Four Dates That Matter

The EU AI Act uses a phased rollout. Each date triggers a different set of obligations:

February 2, 2025 — Prohibited practices and AI literacy (ALREADY LIVE). Article 5 prohibitions are enforceable. Article 4 AI literacy requirements apply. If you're deploying AI systems that fall under the prohibited categories and haven't confirmed compliance, you're already exposed.

August 2, 2025 — GPAI model obligations (ALREADY LIVE). Providers of general-purpose AI models must comply with transparency and documentation requirements under Chapter V. Systemic risk models face additional obligations including adversarial testing and incident reporting.

August 2, 2026 — High-risk AI system obligations. The bulk of the regulation's requirements activate. High-risk systems listed in Annex III must meet the requirements of Articles 9 through 15, complete conformity assessments under Article 43, bear CE marking per Article 48, and register in the EU database per Article 49.

August 2, 2027 — High-risk systems in regulated products. AI systems embedded in products covered by existing EU product safety legislation (Annex I) — medical devices, machinery, toys, aviation systems — must comply. These get the extra year because they layer onto existing regulatory frameworks.
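If it helps to make these dates machine-checkable, here is a minimal Python sketch of the rollout as a lookup table. The dates come from the regulation; the obligation summaries are our paraphrase, not statutory text:

```python
from datetime import date

# Phased rollout as a lookup table. Summaries are paraphrased, not quoted.
ENFORCEMENT_PHASES = {
    date(2025, 2, 2): "Article 5 prohibitions; Article 4 AI literacy",
    date(2025, 8, 2): "GPAI model obligations (Chapter V)",
    date(2026, 8, 2): "High-risk obligations: Articles 9-15, 43, 48, 49",
    date(2027, 8, 2): "High-risk AI in Annex I regulated products",
}

def live_obligations(today: date) -> list[str]:
    """Return the obligation sets already enforceable as of `today`."""
    return [text for deadline, text in sorted(ENFORCEMENT_PHASES.items())
            if deadline <= today]

for text in live_obligations(date(2026, 1, 1)):
    print("ALREADY LIVE:", text)
# ALREADY LIVE: Article 5 prohibitions; Article 4 AI literacy
# ALREADY LIVE: GPAI model obligations (Chapter V)
```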

The date teams miss

Article 4's AI literacy obligation landed with the February 2025 wave. It requires organizations to ensure their staff and anyone operating AI systems on their behalf have sufficient AI literacy. This isn't a suggestion. It's an enforceable requirement that applies to all AI system providers and deployers, regardless of risk classification. Most organizations are not tracking compliance against it.

Risk Classification: The Four-Tier Architecture

The Act classifies AI systems into four risk tiers. Your obligations depend entirely on where your system lands.

Unacceptable Risk (Prohibited — Article 5). Eight categories of AI systems are banned outright:

  1. Subliminal manipulation causing significant harm
  2. Exploitation of vulnerabilities based on age or disability
  3. Social scoring by public authorities
  4. Criminal risk prediction based solely on profiling
  5. Untargeted facial recognition database scraping
  6. Emotion recognition in workplaces and educational institutions
  7. Biometric categorization inferring sensitive attributes
  8. Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)

Penalties reach 35 million euros or 7% of worldwide annual turnover.

High Risk (Annex III — Articles 6-49). This is where the regulatory weight concentrates. Annex III defines eight domains:

  1. Biometrics — remote identification systems, biometric categorization, emotion recognition systems
  2. Critical infrastructure — safety components in digital infrastructure, road traffic, water, gas, heating, electricity supply
  3. Education and vocational training — admissions decisions, student assessment, behavior monitoring, proctoring
  4. Employment and worker management — recruitment, screening, hiring decisions, performance evaluation, task allocation, promotions, terminations
  5. Essential services access — creditworthiness assessment, credit scoring, risk assessment and pricing for life and health insurance, evaluation of public assistance eligibility, emergency dispatch prioritization
  6. Law enforcement — risk assessment of individuals, polygraph systems, evidence reliability evaluation, profiling in criminal investigations
  7. Migration, asylum, and border control — risk assessment for irregular migration, document authenticity verification, asylum application evaluation
  8. Administration of justice — legal research, law interpretation and application to facts

If your AI system operates in any of these eight areas, you're likely subject to the full set of high-risk obligations.

Limited Risk (Transparency obligations). Systems like chatbots, emotion recognition tools, and deepfake generators must disclose their AI nature to users. Lighter obligations, but still enforceable.

Minimal Risk. Everything else. No specific obligations, though voluntary codes of conduct are encouraged.
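As a rough illustration of how the tiers cascade, here is a first-pass triage sketch in Python. The domain labels are our shorthand for the Annex III categories above; real classification requires legal review of the system's intended purpose against the Act's definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Shorthand labels for the eight Annex III domains listed above.
ANNEX_III_DOMAINS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def triage(domain: str, prohibited_practice: bool,
           user_facing: bool) -> RiskTier:
    """First-pass triage only; not a substitute for legal analysis."""
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if domain in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if user_facing:  # chatbots, deepfake generators, emotion recognition
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment", False, True))  # RiskTier.HIGH
```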

Article-by-Article: The High-Risk Compliance Stack

High-risk AI systems face a structured set of requirements across Articles 9 through 15. Each article addresses a distinct dimension of system trustworthiness.

Article 9 — Risk Management System. You must establish, implement, document, and maintain a continuous risk management system throughout the AI system's entire lifecycle. Not a one-time assessment. A living system that identifies known and foreseeable risks under intended use and reasonably foreseeable misuse, implements targeted mitigation measures, and evolves based on post-market monitoring data. If you've implemented the NIST AI RMF, you'll find that Article 9's risk management system maps closely to the MAP and MANAGE functions.

Article 10 — Data and Data Governance. Training, validation, and testing datasets must meet quality criteria. Article 10(2)(f) explicitly requires providers to identify, detect, prevent, and mitigate harmful biases that may result in discrimination. This means documenting data provenance, assessing representativeness, and validating that your data governance practices address fairness risks. For healthcare AI, where training data skew has produced documented harm, this article carries particular weight.

Article 11 — Technical Documentation. Providers must produce detailed technical documentation before placing the system on the market. Annex IV specifies what this must contain: system description, design specifications, development methodology, data requirements, risk management information, and validation procedures.

Article 12 — Record-Keeping. High-risk systems must enable automatic logging throughout the system's lifetime — sufficient to identify risk situations and support post-market monitoring.

Article 13 — Transparency. Deployers must receive enough information to interpret outputs and use the system appropriately: intended purpose, accuracy levels and limitations, foreseeable misuse scenarios, and human oversight measures. This is where many organizations discover their documentation standards fall short.

Article 14 — Human Oversight. Systems must allow effective human oversight during use — including the ability to fully understand capabilities and limitations, correctly interpret outputs, override decisions, and halt the system entirely. Article 14 requires more than a "human in the loop" checkbox. It demands that the human can meaningfully intervene.

Article 15 — Accuracy, Robustness, and Cybersecurity. Systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, with resilience against errors, faults, and manipulation. For financial services AI, where adversarial attacks on credit models are a known threat, Article 15 intersects directly with existing operational risk requirements.
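One practical way to drive a gap assessment is to treat this stack as a checklist. The sketch below uses our one-line paraphrases of each article; the statutory text, not these labels, defines what compliance means:

```python
# Paraphrased one-line summaries of the Articles 9-15 stack.
HIGH_RISK_STACK = {
    "Article 9":  "Lifecycle risk management system",
    "Article 10": "Data governance and bias mitigation",
    "Article 11": "Technical documentation per Annex IV",
    "Article 12": "Automatic logging across the system's lifetime",
    "Article 13": "Transparency and instructions for deployers",
    "Article 14": "Effective human oversight, including halt/override",
    "Article 15": "Accuracy, robustness, and cybersecurity",
}

def open_gaps(evidence: dict[str, bool]) -> list[str]:
    """Articles with no documented compliance evidence yet."""
    return [f"{art}: {req}" for art, req in HIGH_RISK_STACK.items()
            if not evidence.get(art)]

# Example: a system with evidence recorded only for Articles 9 and 11.
for gap in open_gaps({"Article 9": True, "Article 11": True}):
    print("GAP ->", gap)
```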

Extraterritorial Scope: Why Your Location Doesn't Matter

Article 2 applies the Act to providers placing AI systems on the EU market regardless of where they're established. Under Article 2(1)(c), it also covers third-country providers and deployers where the output produced by the AI system is used in the Union. Providers of high-risk systems outside the EU must appoint an authorized representative within the EU.

This is broader than most teams realize. A US insurer using an AI model to price policies for EU-resident customers is in scope. A UK recruiter screening candidates for EU-based roles is in scope. The regulatory perimeter follows the impact, not the server.

Penalties: The Three Fine Tiers

The Act establishes graduated penalties calibrated to violation severity:

Tier 1 — Prohibited practices violations. Up to 35 million euros or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. This is the EU's strongest AI enforcement signal.

Tier 2 — High-risk system and GPAI obligations. Up to 15 million euros or 3% of global annual turnover. Applies to non-compliance with Articles 9-15 requirements, conformity assessment failures, and GPAI model provider obligations.

Tier 3 — Incorrect or misleading information to authorities. Up to 7.5 million euros or 1% of global annual turnover.

For SMEs and startups, the regulation applies the lower of the two amounts (percentage or fixed) rather than the higher — a proportionality provision that's easy to overlook.
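The "whichever is higher" mechanic, and its inversion for SMEs, is easy to get wrong in exposure modeling, so here is the arithmetic as a small function. The ceilings are the Tier 1 figures; the turnover value is a made-up example:

```python
def max_fine(fixed_eur: int, pct: float, turnover_eur: int,
             is_sme: bool = False) -> float:
    """Fine ceiling for one tier: the higher of the fixed amount and the
    turnover percentage, or the lower of the two for SMEs and startups
    under the Act's proportionality provision."""
    proportional = pct * turnover_eur
    return min(fixed_eur, proportional) if is_sme else max(fixed_eur, proportional)

turnover = 2_000_000_000  # EUR 2 billion global annual turnover (example)
print(max_fine(35_000_000, 0.07, turnover))               # 140000000.0
print(max_fine(35_000_000, 0.07, turnover, is_sme=True))  # 35000000
```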

Conformity Assessment and Market Access

Before placing a high-risk system on the EU market, providers must complete a conformity assessment (Article 43), draw up a declaration of conformity, affix CE marking (Article 48), and register in the EU database (Article 49). Most Annex III systems allow provider self-assessment. Systems in biometrics and critical infrastructure may require third-party assessment by a notified body. The EU database registration creates a public record of deployed high-risk systems with no equivalent in other regulatory frameworks.

What Compliance Officers Should Do Right Now

If you haven't started, here's the triage order:

Immediate: Confirm you're not deploying prohibited systems. Audit against all eight Article 5 categories. This deadline has passed. Violations carry the steepest penalties.

Next 30 days: Inventory and classify. Map every AI system your organization develops, deploys, or procures against the Annex III high-risk categories. This determines the scope of your compliance program.

Next 90 days: Gap assessment against Articles 9-15. For each high-risk system, document your current state against each article's requirements. The gaps become your compliance roadmap; a minimal tracking sketch follows this list. Platforms like Starkguard can automate this process and track evidence per article.

By August 2026: Conformity and documentation. Complete conformity assessments. Finalize technical documentation per Annex IV. Prepare for CE marking and database registration.
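For the 90-day gap assessment step, one lightweight tracking structure is a per-system, per-article record. The fields and the example system below are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

HIGH_RISK_DEADLINE = date(2026, 8, 2)

@dataclass
class GapItem:
    """One row of a compliance roadmap: a documented gap for one
    system against one article, with an owner and a target date."""
    system: str
    article: str        # e.g. "Article 14"
    current_state: str
    remediation: str
    owner: str
    due: date

roadmap = [
    GapItem("resume-screener", "Article 14",
            "No override path for recruiters",
            "Add mandatory human review before any rejection",
            "ML Platform", date(2026, 5, 1)),
]
at_risk = [g for g in roadmap if g.due >= HIGH_RISK_DEADLINE]
print(f"{len(at_risk)} remediation(s) scheduled past the deadline")
```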

Frequently Asked Questions

Does the EU AI Act apply to AI systems developed before August 2024? Yes, with a transition provision. Legacy systems already on the market before August 2, 2026 aren't immediately subject to new requirements, but any "significant change" after that date triggers full compliance. Systems placed on the market after August 2, 2026 must comply from day one.

What qualifies as a "provider" versus a "deployer"? A provider develops or commissions an AI system and places it on the market under their name. A deployer uses the system under their authority. Providers carry heavier obligations (conformity assessment, technical documentation, CE marking); deployers have operational responsibilities (human oversight, monitoring). An organization can be both for different systems.

How does the EU AI Act interact with GDPR? They're complementary. GDPR governs personal data processing; the AI Act governs AI system behavior and risk management. Article 10's data governance requirements explicitly reference EU data protection law. In practice, your GDPR impact assessments and AI Act risk management systems should feed each other.

Can we use NIST AI RMF compliance to satisfy EU AI Act requirements? Partially. NIST's MAP/MEASURE functions overlap with Article 9's risk management requirements. But the AI Act imposes specific obligations (CE marking, EU database registration, conformity assessment) that NIST doesn't address. Map the overlap to avoid duplicate work while covering each framework's unique requirements.
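If you're mapping the two frameworks, a simple crosswalk table is a reasonable starting artifact. The pairings below reflect our informal reading of the overlap, not an official correspondence from NIST or the EU:

```python
# Informal crosswalk between NIST AI RMF functions and AI Act articles.
NIST_RMF_TO_AI_ACT = {
    "MAP":     ["Article 9 (risk identification)"],
    "MEASURE": ["Article 15 (accuracy, robustness, cybersecurity)"],
    "MANAGE":  ["Article 9 (mitigation, continuous review)"],
    "GOVERN":  ["Article 4 (AI literacy)"],  # partial overlap at best
}

# Obligations with no NIST AI RMF counterpart; budget for these separately.
AI_ACT_ONLY = ["Article 43 (conformity assessment)",
               "Article 48 (CE marking)",
               "Article 49 (EU database registration)"]
```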

Build Your EU AI Act Compliance Program

The August 2026 deadline for high-risk systems is the single most consequential AI compliance date on the calendar. Starkguard maps your AI systems against Annex III categories, assesses compliance against Articles 9 through 15, and generates the documentation trail that conformity assessment demands.

Start your compliance assessment or schedule a demo to see how the platform handles EU AI Act classification and gap analysis.

Starkguard Team

AI Governance Experts

Tags:
eu-ai-act
compliance
ai-regulation
european-union

Ready to implement AI governance?

Start your free trial and put these insights into practice with Starkguard.
