Framework Guide
15 min read

EU AI Act Compliance Timeline: What Deployers Must Do Before August 2, 2026

A deployer-focused compliance timeline for the EU AI Act August 2026 deadline. Covers Articles 14, 26, and 73 obligations with month-by-month action items.

Starkguard Team

You're a Deployer, Not a Provider. Here's Why August 2 Still Keeps You Up at Night.

Most EU AI Act guidance fixates on providers — the companies that build and place high-risk AI systems on the market. Providers carry the heaviest load: conformity assessments, CE marking, EU database registration, Annex IV technical documentation. Deployers read that list and exhale.

That relief is misplaced. Regulation (EU) 2024/1689 assigns deployers a distinct set of obligations under Articles 26, 14, and 73 that take effect on August 2, 2026 — the same date as the provider requirements. And while deployer obligations are narrower, they carry the same penalty structure: up to 15 million euros or 3% of global annual turnover under Article 99(4) for non-compliance.

If you procure, configure, or operate a high-risk AI system under your own authority — even one built and certified by someone else — you are a deployer. This article is for you.

What "Deployer" Actually Means Under the Act

Article 3(4) defines a deployer as any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. The key phrase is "under its authority." If your organization makes the operational decision to deploy an AI system — chooses when, where, and how it runs — you are the deployer. It does not matter that you did not build it.

This catches more organizations than most expect. A bank using a vendor's credit-scoring model is a deployer. A hospital system running a third-party diagnostic tool is a deployer. A government agency procuring an automated benefits-eligibility system is a deployer. The compliance burden follows the deployment decision, not the development effort.

For a broader overview of the regulation's structure and risk tiers, see our EU AI Act requirements guide. For foundational context on the regulation itself, see What is the EU AI Act?.

The Three Articles That Define Deployer Obligations

Deployers face obligations scattered across the regulation, but three articles contain the operational core.

Article 26 — Deployer Obligations

Article 26 is the central deployer article. Of its twelve paragraphs, seven carry the operational core for most deployers:

  • Article 26(1): Take appropriate technical and organisational measures to ensure you use high-risk AI systems in accordance with the instructions of use accompanying them.
  • Article 26(2): Assign human oversight to natural persons who have the necessary competence, training, and authority — and the support to exercise that oversight. This is not a checkbox. The humans you designate must be able to intervene.
  • Article 26(5): Monitor the operation of the high-risk AI system based on the instructions of use. When you have reason to consider the system presents a risk, inform the provider or distributor and the relevant market surveillance authority, and suspend use of the system.
  • Article 26(6): Keep the logs automatically generated by the system, to the extent they are under your control, for at least six months (unless applicable Union or national law provides otherwise). These logs are your primary evidence artifact.
  • Article 26(9): Use the information provided by the provider under Article 13 to carry out a data protection impact assessment (DPIA) where required under GDPR.
  • Article 26(11): Inform natural persons that they are subject to a high-risk AI system before or at the time of first exposure, unless this is apparent from the circumstances. Under Article 26(7), workers and their representatives must also be informed before the system is used at the workplace.
  • Article 26(12): Cooperate with the relevant competent authorities, including by providing access to automatically generated logs.

The through-line: deployers are responsible for operational governance. You must use the system as instructed, staff oversight with qualified humans, monitor continuously, retain logs, and cooperate with authorities.

Article 14 — Human Oversight

Article 14 defines what human oversight must look like for high-risk systems. While Article 14 primarily imposes design obligations on providers (the system must be built to allow effective oversight), Article 26(2) makes it the deployer's job to actually execute that oversight in practice.

Article 14(4) spells out what the oversight function requires:

  • Understand the system's capabilities and limitations (14(4)(a))
  • Be aware of automation bias and able to guard against it (14(4)(b))
  • Correctly interpret the system's output, accounting for the characteristics of the input data and the system's design (14(4)(c))
  • Decide not to use the system or to disregard, override, or reverse its output in any particular situation (14(4)(d))
  • Intervene or interrupt the system through a "stop" button or similar procedure (14(4)(e))

This matters operationally because you need documented evidence that your oversight personnel can do each of these five things. A rubber-stamp review process where a human clicks "approve" on every output does not satisfy Article 14. The regulation explicitly names automation bias as something oversight must guard against — your process needs to show that humans are genuinely evaluating, not merely ratifying.

Article 73 — Reporting Serious Incidents

Article 73 governs serious-incident reporting. For deployers, the duty is triggered through Article 26(5): when you identify a serious incident, you must immediately inform first the provider, then the importer or distributor and the relevant market surveillance authorities. Where you cannot reach the provider, Article 73 applies to you mutatis mutandis. A "serious incident" under Article 3(49) means an incident or malfunctioning that directly or indirectly leads to death or serious harm to a person's health, serious and irreversible disruption to the management or operation of critical infrastructure, infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment.

The timeline is tight: the report must be made immediately after establishing a causal link between the AI system and the serious incident (or the reasonable likelihood of one), and no later than 15 days after becoming aware of the incident. The window shrinks to 10 days where the incident involves the death of a person, and to 2 days for a widespread infringement or a serious disruption of critical infrastructure.

This is not an annual filing. This is event-driven reporting with a hard deadline. You need a process that identifies qualifying incidents, establishes causation, and produces a notification to the correct Member State authority — all within 15 days.
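If it helps to make the clock mechanical rather than tribal knowledge, the arithmetic is small enough to encode. Below is a minimal Python sketch; the function and category labels are our own invention, and the windows reflect the Article 73 deadlines described above.

```python
from datetime import date, timedelta

# Illustrative reporting windows (days), per the deadlines described above:
# 15 by default, 10 if the incident involves a death, 2 for widespread
# infringements or serious disruption of critical infrastructure.
# Category names are our own, not the regulation's.
REPORTING_WINDOWS = {
    "default": 15,
    "death": 10,
    "critical_infrastructure": 2,
    "widespread_infringement": 2,
}

def reporting_deadline(aware_on: date, incident_type: str = "default") -> date:
    """Latest notification date, counted from the day you became aware."""
    days = REPORTING_WINDOWS.get(incident_type, REPORTING_WINDOWS["default"])
    return aware_on + timedelta(days=days)

if __name__ == "__main__":
    # Became aware on 10 September 2026; default window ends 25 September.
    print(reporting_deadline(date(2026, 9, 10)))           # 2026-09-25
    print(reporting_deadline(date(2026, 9, 10), "death"))  # 2026-09-20
```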

What Compliance Looks Like: The Deployer Evidence Stack

Talk to auditors and market surveillance authorities, and the question is always the same: can you show me? Compliance is documentation plus operational practice, and the documentation must be contemporaneous — assembled as you go, not reconstructed after the fact.

Here is the evidence stack deployers should be building:

AI System Inventory. A complete register of every AI system deployed under your authority, classified by risk tier. For each system: purpose, provider, deployment date, affected populations, data types processed, geographic scope, and regulatory requirements that apply. Article 26 assumes you know what you are deploying. You cannot monitor a system you have not cataloged.

In Starkguard, the AI System Inventory supports bulk import via CSV and JSON with 14 fields per system — including dataTypes, regulatoryRequirements, and geographicScope — so you can populate your register from existing procurement records without manual re-entry.
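If you are scripting the initial import yourself, a register row can be a small typed record loaded from CSV. The sketch below is illustrative only: apart from dataTypes, regulatoryRequirements, and geographicScope, the field names are our assumptions, not Starkguard's actual import schema.

```python
import csv
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    # Illustrative subset of register fields; names beyond the three
    # mentioned above are assumptions, not a real import schema.
    name: str
    provider: str
    purpose: str
    deployment_date: str                # ISO 8601, e.g. "2026-01-15"
    risk_tier: str                      # "high" | "limited" | "minimal"
    data_types: list[str]               # maps to dataTypes
    regulatory_requirements: list[str]  # maps to regulatoryRequirements
    geographic_scope: list[str]         # maps to geographicScope

def load_register(path: str) -> list[AISystemRecord]:
    """Load an AI system register from CSV; list fields are ';'-separated."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            AISystemRecord(
                name=row["name"],
                provider=row["provider"],
                purpose=row["purpose"],
                deployment_date=row["deployment_date"],
                risk_tier=row["risk_tier"],
                data_types=row["data_types"].split(";"),
                regulatory_requirements=row["regulatory_requirements"].split(";"),
                geographic_scope=row["geographic_scope"].split(";"),
            )
            for row in csv.DictReader(f)
        ]
```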

Risk Classification Records. For each system, documented evidence of how you classified it against Annex III categories. If you determined a system is not high-risk, document why. If it is, document which Annex III category applies and which articles you are subject to.

Starkguard's EU AI Act Assessment walks through Annex III risk classification and produces a deployer obligations checklist specific to each system's classification result.
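Whatever tooling you use, the artifact that survives an audit is the written rationale. A classification record can be as small as the illustrative sketch below; the structure and field names are our own, not a prescribed format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RiskClassification:
    system_name: str
    annex_iii_category: str | None  # e.g. "4 - Employment"; None if not high-risk
    is_high_risk: bool
    rationale: str                  # the "why", in plain language
    classified_by: str
    classified_on: str              # ISO 8601 date

record = RiskClassification(
    system_name="vendor-resume-screener",
    annex_iii_category="4 - Employment, workers management",
    is_high_risk=True,
    rationale="Filters and ranks job applications; squarely within Annex III, area 4.",
    classified_by="jane.doe@example.com",
    classified_on="2026-02-10",
)

# Persist the reasoning alongside the verdict - contemporaneous evidence.
print(json.dumps(asdict(record), indent=2))
```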

Human Oversight Logs. Contemporaneous records of human oversight activity for each high-risk system. Every time a human reviewer evaluates, agrees with, overrides, or reverses an AI output, that decision should be logged with the reviewer's identity, timestamp, the system output reviewed, and the reviewer's decision.

The Human Oversight Log in Starkguard records agreed and overridden outcomes per system, per decision. Each entry captures who reviewed, what they decided, and why — building the Article 14 evidence trail automatically.
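If you need oversight logging before tooling is in place, an append-only JSON Lines file per system is a workable stopgap. A minimal sketch, with field names of our own choosing:

```python
import json
from datetime import datetime, timezone

OVERSIGHT_LOG = "oversight-resume-screener.jsonl"  # one file per system

def log_review(reviewer: str, output_ref: str, decision: str, reason: str) -> None:
    """Append one oversight decision; decision is 'agreed', 'overridden',
    or 'reversed'. Append-only keeps the record contemporaneous."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "output_ref": output_ref,  # pointer to the AI output reviewed
        "decision": decision,
        "reason": reason,
    }
    with open(OVERSIGHT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_review("jane.doe@example.com", "decision-2026-03-14-0042",
           "overridden", "Model penalised employment gap; policy forbids this.")
```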

Incident Records. A register of incidents, near-misses, and anomalies. For qualifying serious incidents under Article 73, you need the incident description, identified root cause, affected individuals, Member State, and notification status. Draft notification text should be prepared within hours, not days.

Starkguard's Incident Logging tracks severity, affected systems, and timeline. For Article 73 qualifying incidents, the system generates notification drafts with the required fields pre-populated, so your legal team reviews and submits rather than authoring from scratch.
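A draft is only useful if it is complete when the clock is running. One way to enforce that is a required-fields gate over the incident record; the field list below is our assumption based on the elements named above, not an official notification form.

```python
# Hypothetical completeness gate for a serious-incident notification draft.
# Required fields are drawn from the elements listed above; actual
# authority forms will differ by Member State.
REQUIRED_FIELDS = [
    "incident_description",
    "root_cause",
    "affected_individuals",
    "member_state",
    "system_name",
    "aware_date",
]

def missing_fields(draft: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return [k for k in REQUIRED_FIELDS if not draft.get(k)]

draft = {
    "incident_description": "Eligibility system denied benefits to 312 applicants in error.",
    "member_state": "NL",
    "system_name": "benefits-eligibility-v3",
    "aware_date": "2026-09-10",
}
print(missing_fields(draft))  # ['root_cause', 'affected_individuals'] - not ready to file
```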

Automatic Logs. Article 26(6) requires you to retain system-generated logs for at least six months. These must be under your control and available to market surveillance authorities on request.

The Centralized Audit Log captures system events with IP address and user-agent tracking, exportable for authority requests. The Document Vault provides per-system file storage with versioning and tagging for technical documentation, provider instructions of use, and any supporting artifacts.
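Retention is easy to assert and easy to get wrong, so verify it mechanically. A minimal sketch, assuming a one-file-per-day log layout (an invented convention), that confirms the rolling six-month window is actually covered:

```python
from datetime import date, timedelta
from pathlib import Path

def retention_gaps(log_dir: str, today: date, min_days: int = 183) -> list[date]:
    """Return the dates inside the six-month window with no log file.
    Assumes one file per day named YYYY-MM-DD.log (our convention, not
    a standard); 183 days approximates the minimum retention period."""
    have = {p.stem for p in Path(log_dir).glob("*.log")}
    window = [today - timedelta(days=n) for n in range(min_days)]
    return [d for d in window if d.isoformat() not in have]

gaps = retention_gaps("/var/log/ai/resume-screener", date(2026, 8, 2))
if gaps:
    print(f"Retention gap: {len(gaps)} day(s) missing, earliest {min(gaps)}")
```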

Attestation and Evidence Bundles. When authorities request evidence — or when your internal audit function runs a compliance review — you need a consolidated package, not a scavenger hunt across five systems.

Starkguard's Compliance Attestation PDF produces a signed posture summary with per-framework compliance scores. The Evidence Package bundles eight sections of per-system evidence: assessments, oversight logs, incident records, documents, and full audit trail into a single downloadable artifact.
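If your evidence lives as files on disk, bundling is a one-screen script. A sketch assuming a per-system evidence directory layout of our own invention:

```python
import json
import zipfile
from datetime import datetime, timezone
from pathlib import Path

def build_evidence_bundle(system_dir: str, out_zip: str) -> None:
    """Zip every artifact under a system's evidence directory and add a
    manifest listing contents and build time. The directory layout (e.g.
    classification/, oversight/, incidents/, documents/) is illustrative."""
    root = Path(system_dir)
    files = sorted(p for p in root.rglob("*") if p.is_file())
    manifest = {
        "system": root.name,
        "built_at": datetime.now(timezone.utc).isoformat(),
        "files": [str(p.relative_to(root)) for p in files],
    }
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as z:
        for p in files:
            z.write(p, arcname=str(p.relative_to(root)))
        z.writestr("manifest.json", json.dumps(manifest, indent=2))

build_evidence_bundle("evidence/resume-screener", "resume-screener-evidence.zip")
```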

The Six-Month Compliance Timeline for Deployers

August 2, 2026 is 139 days from the date of this article. Here is what each phase should contain.

Phase 1: Six Months Out (February 2026) — Inventory and Classify

If you are reading this in March 2026, this phase is already behind schedule. Do it now.

Complete your AI system inventory. Identify every AI system your organization deploys. Not just the ones the data science team knows about — the ones procurement purchased, the ones embedded in vendor SaaS products, the ones a department head signed up for on a free trial. Shadow AI is your biggest classification gap.

Classify each system. Run every inventoried system through Annex III. Determine which are high-risk, which are limited-risk, and which are minimal-risk. Document the reasoning for each classification. A system that processes job applications (Annex III, area 4) is high-risk regardless of what the vendor's marketing materials say about it.

Identify your providers. For each high-risk system, confirm the provider. Request their conformity assessment documentation, instructions of use, and technical documentation summary. Article 26(1) requires you to follow the instructions of use — you need them in hand.

Action items:

  • Export existing procurement and IT asset records into your AI system register
  • Run Annex III classification for each system
  • Request provider documentation for all high-risk systems
  • Flag any systems where the provider has not completed their own compliance preparation

Phase 2: Three Months Out (May 2026) — Build Operational Processes

With your inventory classified and your provider documentation collected, build the operational processes that the regulation requires.

Staff your human oversight function. For each high-risk system, designate the natural persons who will perform oversight per Article 26(2). These people need three things: competence (they understand the system), authority (they can halt or override it), and capacity (they have time to do it properly). A compliance officer who rubber-stamps 200 AI decisions per day has authority but not capacity.

Train your oversight personnel. Article 14(4)(b) explicitly references automation bias. Your training program must cover what the system does, how to interpret its outputs, when to override, and how automation bias manifests in your specific deployment context. Generic "AI awareness" training does not satisfy this requirement.

Establish your incident response process. Define what constitutes a serious incident in your operational context. Map each high-risk system to the Member State(s) where incidents would be reported. Identify who in your organization has authority to file a notification. Draft template notifications that can be completed and submitted within the 15-day window.

Set up log retention. Confirm that automatic logs from each high-risk system are being captured and retained for at least six months. If your provider's system does not produce logs that are under your control — raise this with the provider immediately. Article 26(6) makes this your problem, not theirs.

Action items:

  • Assign named oversight personnel per system with documented competence and authority
  • Develop and deliver oversight training that addresses automation bias
  • Create incident classification criteria and response procedures
  • Verify log capture and retention for each high-risk system
  • Run a tabletop exercise: simulate a serious incident and walk through your notification process

Phase 3: One Month Out (July 2026) — Test, Attest, Package

The final month is not for building. It is for testing what you built and producing the evidence artifacts that prove it works.

Run a compliance dry run. For each high-risk system, walk through the full deployer obligation set: Can your oversight personnel demonstrate all five Article 14(4) capabilities? Is your log retention functioning? Can you produce an incident notification within the required timeline? Where the answer is no, you have weeks to fix it — not months.

Generate your attestation. Produce a compliance attestation that documents your posture across all deployer obligations. This is your internal record that compliance was assessed before the deadline, not manufactured after an enforcement inquiry.

Package your evidence. For each high-risk system, compile the evidence bundle: risk classification, oversight personnel assignments, training records, log retention confirmation, incident response procedures, provider documentation, and any assessment results. This package is what you hand to a market surveillance authority if they knock.

Action items:

  • Complete compliance assessment for every high-risk system
  • Generate per-system evidence packages
  • Produce compliance attestation with posture scores and sign-off
  • Brief executive leadership on compliance status and residual gaps
  • Establish ongoing monitoring cadence for post-deadline operations

After August 2: This Is Not a One-Time Exercise

The deployer obligations under the EU AI Act are ongoing. Article 26(5) requires continuous monitoring. Article 73 incident reporting is event-driven with no end date. Log retention under Article 26(6) is a rolling six-month window. Human oversight under Article 14 must be active for as long as you deploy the system.

This means your compliance program needs a steady-state operating model:

  • Monthly: Review human oversight logs for each high-risk system. Are reviewers actively evaluating outputs, or has the process degraded into auto-approval? (A minimal automated check is sketched after this list.)
  • Quarterly: Reassess your AI system inventory. New procurements, decommissioned systems, and scope changes all affect your compliance posture.
  • On event: Execute incident response procedures when qualifying incidents occur. The 15-day reporting clock starts when you become aware, not when you decide to investigate.
  • Annually: Full compliance reassessment. The regulatory environment will evolve — delegated acts, implementing acts, and market surveillance authority guidance will refine what "compliance" means in practice.
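The monthly review lends itself to a simple metric: if the override rate trends toward zero while decision volume stays flat, oversight may have degraded into ratification. A sketch over a JSONL oversight log like the one above; the 1% threshold is our assumption, not a regulatory figure.

```python
import json

def override_rate(log_path: str) -> float:
    """Share of logged reviews where the human overrode or reversed the output."""
    with open(log_path, encoding="utf-8") as f:
        decisions = [json.loads(line)["decision"] for line in f if line.strip()]
    if not decisions:
        return 0.0
    return sum(d in ("overridden", "reversed") for d in decisions) / len(decisions)

# Flag for investigation if fewer than 1% of reviews ever disagree - a
# hypothetical threshold; calibrate against your own deployment context.
rate = override_rate("oversight-resume-screener.jsonl")
if rate < 0.01:
    print(f"Override rate {rate:.1%}: check whether oversight has become rubber-stamping.")
```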

How Starkguard Maps to Deployer Obligations

Every feature referenced in this article exists in the platform today. Here is the mapping:

| Deployer Obligation | EU AI Act Article | Starkguard Feature |
| --- | --- | --- |
| AI system register | Article 26(1) | AI System Inventory with CSV/JSON bulk import |
| Risk classification | Annex III | EU AI Act Assessment with deployer obligations checklist |
| Human oversight execution | Articles 14 + 26(2) | Human Oversight Log (agreed/overridden per system) |
| Incident reporting | Article 73 | Incident Logging with notification drafts + severity tracking |
| Log retention | Article 26(6) | Centralized Audit Log with IP/UA tracking + export |
| Documentation | Article 26(1) | Document Vault with versioning and tagging |
| Compliance evidence | Article 26(12) | Evidence Package (8-section per-system bundle) |
| Posture attestation | General | Compliance Attestation PDF with signature block |

The platform also runs assessments across NIST AI RMF, OECD AI Principles, ISO 42001, RAPID, KSA AI Governance, and UAE AI Ethics — with 847 knowledge elements across all seven frameworks and AI-generated Action Plans that prioritize remediation steps by risk severity and deadline proximity.

The Penalty Asymmetry Deployers Overlook

Providers face higher maximum fines for the most severe violations (35 million euros or 7% of turnover for prohibited practices). But deployers face the same 15 million euro or 3% threshold as providers for non-compliance with high-risk system obligations under Article 99(4). For most mid-market companies, 3% of global turnover is an existential penalty.

And here is the part that does not get enough attention: market surveillance authorities can order you to withdraw or recall a high-risk AI system from the market under Article 79. For a deployer whose operations depend on that AI system — an automated underwriting engine, a clinical decision support tool, an eligibility determination system — forced withdrawal is more damaging than the fine.

The business case for deployer compliance is not "avoid a fine." It is "keep operating."

Start Now. Not After Budget Approval, Not After the Next Board Meeting.

The organizations that will be ready on August 2 are the ones that started in Q1 2026 or earlier. The ones that will scramble are the ones that treated "deployer" as a lesser category of obligation and assumed their providers would handle everything.

Your providers handle conformity assessment and CE marking. They do not handle your human oversight staffing, your incident response procedures, your log retention verification, or your evidence packaging. Those are deployer obligations and they belong to you.


Map your high-risk AI systems, build your Article 14 oversight evidence, and generate your deployer compliance attestation. Start your free trial and see your deployer obligation status before August 2.

Starkguard Team

AI Governance Experts

Tags:
eu-ai-act
compliance
deadline
deployer-obligations
timeline

Ready to implement AI governance?

Start your free trial and put these insights into practice with Starkguard.

