
EU AI Act Compliance for Security Teams: What You Must Do by Each Application Phase

The EU AI Act applies in phases from February 2025 through August 2027. A phase-by-phase compliance guide for security teams: risk classification, documentation, post-market monitoring and Digital Omnibus implications.


The EU AI Act does not switch on in one moment. It applies in phases stretched across three years, and each phase triggers specific obligations for different actors in the AI supply chain. The confusion this produces in most organisations is not about what the Act requires — the text is public and the major requirements are not obscure — but about which parts apply to you, when, and what evidence you are legally responsible for producing.

Security teams are caught in the middle of this. The Act is a compliance framework, but its technical requirements (risk management, data governance, human oversight, cybersecurity, post-market monitoring) land squarely in security and engineering workstreams. The legal, procurement and privacy functions often own the paperwork; security owns whether the paperwork reflects reality.

This guide sets out what applies by when, what security teams are actually on the hook for, and how to build an AI governance programme that produces the evidence the Act requires without drowning the organisation in documentation that nobody reads.

The application phases, ordered by what applies now

The AI Act was adopted on 21 May 2024 and entered into force on 1 August 2024. It applies in phases under Article 113. The dates that matter, in order:

2 February 2025 — prohibited practices and AI literacy. Article 5’s list of prohibited AI systems became enforceable. Social scoring, untargeted facial recognition scraping, emotion recognition in workplaces and schools (with narrow exceptions), predictive policing based solely on profiling, and real-time remote biometric identification in public spaces (with law enforcement exceptions) are prohibited. The AI literacy obligation under Article 4 also began: providers and deployers must ensure staff using AI systems have a sufficient level of AI literacy.

2 August 2025 — GPAI, governance and penalties. Rules for providers of General-Purpose AI models (GPAI) began to apply. Governance structures including the EU AI Office activated. Member States were required to designate competent authorities and have their penalty regimes in place. New GPAI models placed on the EU market from this date must comply immediately; GPAI models already on the market before this date have a grandfather period to 2 August 2027.

2 August 2026 — the main application date. The bulk of remaining obligations apply. Annex III high-risk AI system obligations become enforceable. Article 50 transparency obligations begin (AI-generated content labelling, chatbot disclosure, deepfake marking). High-risk systems already on the market before this date and intended to be used by public authorities have a longer transition period, to 2 August 2030, under Article 111(2). Every Member State must have at least one AI regulatory sandbox operational.

2 August 2027 — Article 6(1) high-risk products. The deadline for AI systems that are safety components of products already regulated under Annex I EU harmonisation legislation (toys, medical devices, machinery, radio equipment, etc.). Also the end of the grandfather period for GPAI models placed on the market before 2 August 2025.

31 December 2030 — Annex X large-scale IT systems. AI components of EU large-scale IT systems (SIS, VIS, Eurodac, EES, ETIAS, ECRIS-TCN) placed on the market before 2 August 2027 must be in compliance.

The Digital Omnibus wrinkle

In November 2025, the European Commission proposed a Digital Omnibus package that, among other things, would link the application of rules governing high-risk AI systems to the availability of harmonised standards and support tools. If adopted, this could effectively defer parts of the high-risk regime until the supporting standards ecosystem is ready. The proposal is under negotiation between the European Parliament and Council at time of writing.

Security teams should plan against the published timeline — 2 August 2026 for Annex III, 2 August 2027 for Article 6(1) — and treat any Digital Omnibus deferrals as an upside rather than a baseline. The compliance work needed to meet the published dates is the same work needed regardless of whether the application date formally shifts.

The risk classification, and why this is the first thing to get right

The Act structures obligations around four risk tiers. Getting this classification wrong is the most common and most expensive compliance error.

Unacceptable risk — prohibited. Listed in Article 5. If your AI system falls here, there is no compliance path; the system cannot be placed on or put into service in the EU. This is a short list but should be verified explicitly.

High-risk — regulated. The most demanding tier. Includes two categories:

  • Annex I safety components — AI used as a safety component of products already regulated under EU harmonisation legislation (medical devices, toys, machinery, lifts, etc.). Application date 2 August 2027.
  • Annex III use cases — AI used in specified high-stakes domains including biometric identification, critical infrastructure, education and vocational training, employment and HR, access to essential services, law enforcement, migration and border control, and administration of justice. Application date 2 August 2026.

Limited risk — transparency. Chatbots, emotion recognition, biometric categorisation, and generative AI outputs (including deepfakes). Subject to Article 50 transparency obligations — users must be informed they are interacting with AI or viewing AI-generated content. Application date 2 August 2026.

Minimal risk — unregulated. Everything else. Most AI applications currently in use, including recommendation engines, spam filters, AI-enhanced games, and most enterprise productivity applications.

GPAI models sit alongside this structure with their own obligations under Chapter V. GPAI models with “systemic risk” (generally those trained with compute exceeding 10^25 FLOPs or designated by the Commission) carry additional risk identification, mitigation, and incident reporting obligations.

A practical decision process

For every AI system in scope, security and governance teams should work through the following steps (a short code sketch of the same triage follows the list):

  1. Is this system prohibited under Article 5? Brief check against the list. If yes, stop — the system cannot operate in the EU.
  2. Is this system a safety component of a product under Annex I harmonisation legislation? Check against the list. If yes, it is high-risk under Article 6(1) and the 2 August 2027 deadline applies.
  3. Is this system used for one of the Annex III use cases? Check against the eight Annex III categories. If yes, it is high-risk and the 2 August 2026 deadline applies — unless the provider has documented under Article 6(3) that the system does not pose a significant risk of harm.
  4. Does this system fall under Article 50 transparency obligations? Chatbots, emotion recognition, biometric categorisation, AI-generated content. If yes, transparency obligations apply from 2 August 2026.
  5. Is this built on or incorporating a GPAI model? If your organisation is a downstream deployer of GPT, Claude, Gemini, Llama or similar models, you may not have GPAI provider obligations but you inherit documentation from the provider and are responsible for deployer obligations.

The Agentic AI governance playbook covers the agent-specific layer on top of this classification — agentic systems built on GPAI foundations often land in Annex III territory depending on use case.

Who you are, and why it matters

The Act distinguishes between roles, and obligations differ significantly by role:

Provider — the entity that develops an AI system or GPAI model and places it on the market or puts it into service under its own name or trademark. Heaviest obligations including conformity assessment, CE marking (for high-risk), and post-market monitoring.

Deployer — the entity that uses an AI system under its authority in the course of professional activity (excluding personal non-professional use). Obligations include transparency to affected persons, human oversight, monitoring of operation, and cooperation with the provider.

Importer — places an AI system from a non-EU provider on the EU market. Responsible for verifying the provider has met conformity requirements.

Distributor — makes an AI system available on the EU market but is not the provider or importer. Lighter obligations, but must verify CE marking and documentation.

Authorised representative — a natural or legal person in the EU designated by a non-EU provider to carry out specific tasks on the provider’s behalf.

A single organisation can hold different roles for different systems. A company that uses Microsoft Copilot is a deployer of that system; if it fine-tunes a model and deploys the fine-tuned version internally, its role may shift toward provider for that derived system. The value chain attribution rules under Article 25 set out when a deployer becomes a provider through substantial modification.

What security teams own, by obligation

The Act’s high-risk requirements are set out across Articles 9 to 15 and land heavily in security and engineering scope. Brief practical notes on what each requires:

Article 9 — Risk management system. Continuous iterative risk management process covering identification, estimation, evaluation, and mitigation of foreseeable risks across the AI system lifecycle. Must be documented and updated. In practice: a risk register specific to the AI system, tied to the ISMS if one exists, with evidence of periodic review.
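
One way to picture such a register entry, as a sketch rather than a prescribed format (the fields and the ISMS control reference are illustrative assumptions, not an Article 9 schema):

```python
from datetime import date

# Illustrative risk register entry; the fields are assumptions, not an Article 9 schema.
RISK_REGISTER = [
    {
        "system": "cv-screening-assistant",
        "risk": "Gender bias in candidate ranking",
        "likelihood": "medium",
        "impact": "high",
        "mitigation": "Quarterly bias evaluation against a held-out benchmark; "
                      "human review of every automated rejection",
        "owner": "ml-platform-team",
        "isms_control_ref": "A.8.25",          # hypothetical link into the existing ISMS
        "last_reviewed": date(2026, 3, 2),     # evidence of periodic, iterative review
        "next_review": date(2026, 6, 2),
    },
]
```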

Article 10 — Data governance. Training, validation and testing datasets must meet quality criteria — relevance, representativeness, correctness, and completeness with respect to intended purpose. Biases must be examined and mitigated. Data governance and management practices documented. This is where most organisations discover they have no documented record of what data trained the models they deploy.

Article 11 — Technical documentation. Annex IV specifies the required contents: system description, development process, design specifications, performance metrics, risk management documentation, monitoring arrangements, cybersecurity measures, conformity assessment evidence. This is a substantial document set, not a summary.

Article 12 — Record-keeping. Automatic logging of events throughout the system’s lifecycle sufficient to enable traceability. Logs must be retained for at least six months unless otherwise specified. Logging must capture inputs, outputs, affected persons, and operator identity where relevant.
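
A minimal sketch of logging at roughly this depth, using only the Python standard library; the event fields, the hashing approach and the retention tag follow the practical reading above and are assumptions rather than a literal Annex schema:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_inference_event(system_id: str, operator_id: str, subject_ref: str,
                        model_version: str, prompt: str, output: str) -> None:
    """Emit one traceability record per inference. Hashing the payloads keeps the
    log store small and limits personal-data sprawl; raw payloads would be retained
    separately under the same retention policy."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "operator_id": operator_id,     # who used the system
        "subject_ref": subject_ref,     # pseudonymous reference to the affected person
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "retention": "P6M",             # six-month minimum retention, ISO 8601 duration
    }
    logger.info(json.dumps(event))

log_inference_event("cv-screening-assistant", "hr-user-042", "candidate-9f3c",
                    "ranker-2.1", "Rank the following CVs ...", "1. Candidate A ...")
```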

Article 13 — Transparency to deployers. Instructions for use that enable deployers to interpret outputs and use the system appropriately. Accuracy, robustness and cybersecurity characteristics must be stated.

Article 14 — Human oversight. The system must be designed to enable effective human oversight during use. Measures include user interface features, clear instructions, and the ability for a human to interpret, override, or intervene. This is the requirement that rules out black-box autonomous deployment in high-risk contexts.
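
A sketch of the pattern in code, assuming a hypothetical confidence threshold and reviewer callback (neither value comes from the Act); the point is that oversight is a product behaviour, not a policy statement:

```python
from typing import Callable

def with_human_oversight(decision: dict, approve: Callable[[dict], bool],
                         confidence_floor: float = 0.85) -> dict:
    """Route low-confidence or adverse outputs to a human reviewer who can confirm
    or override before the decision takes effect. The 0.85 floor and the 'adverse'
    flag are assumptions, not values from the Act."""
    needs_review = decision["confidence"] < confidence_floor or decision.get("adverse", False)
    if needs_review and not approve(decision):
        decision["outcome"] = "overridden_by_reviewer"
    decision["human_reviewed"] = needs_review
    return decision

# Example: the reviewer callback would normally open a ticket or a UI review task;
# here it simply declines the automated rejection.
result = with_human_oversight(
    {"subject": "candidate-9f3c", "outcome": "reject", "confidence": 0.62, "adverse": True},
    approve=lambda d: False,
)
print(result)   # outcome becomes "overridden_by_reviewer", human_reviewed is True
```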

Article 15 — Accuracy, robustness and cybersecurity. The high-risk system must achieve appropriate levels of accuracy, robustness and cybersecurity throughout its lifecycle. Cybersecurity measures must address data poisoning, model poisoning, adversarial examples, and model evasion. Resilience against errors and inconsistencies must be designed in.

For security teams, Articles 12, 14 and 15 are the heaviest operational lift. Logging at the depth the Act requires usually exceeds what applications log natively. Human oversight requires product-surface features, not just policy documents. Cybersecurity against AI-specific attack vectors (prompt injection, data poisoning, model extraction) is a new discipline for most SOCs — the enterprise LLM security playbook is the adjacent reading.

Comparing the governance frameworks: EU AI Act vs NIST AI RMF vs ISO 42001

Most organisations approaching AI governance today are doing so under multiple regimes simultaneously. The three dominant frameworks complement rather than compete, but they differ in legal force and structure.

| Dimension | EU AI Act | NIST AI RMF | ISO 42001 |
|---|---|---|---|
| Legal status | Regulation (binding in EU) | Voluntary framework (US) | Certifiable management standard |
| Risk approach | Tiered by system risk category | Risk-based, trustworthy AI functions | Management system with risk controls |
| Core structure | Prohibitions + high-risk obligations + transparency + GPAI | Govern, Map, Measure, Manage | Harmonised Structure (aligned with ISO 27001) |
| Certifiable | CE marking for high-risk | No certification | Yes, by accredited bodies |
| Geographic scope | EU market-facing | US origin, globally influential | Global |
| Penalties | Up to €35M or 7% global turnover | None | Certificate loss, contractual |
| Best used for | Legal compliance for EU market | Structured risk programme | Auditable management system |

A pragmatic stack for most multinationals: ISO 42001 as the management system providing structure and auditability, NIST AI RMF as the risk methodology within it, and EU AI Act obligations layered on top as the legal compliance overlay for EU-market systems. Running all three separately is wasteful; running one without considering the others leaves gaps.
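
In practice that stack is usually implemented as a single control set mapped to all three frameworks. A sketch of such a mapping follows; the AI Act articles and NIST AI RMF functions are as named above, while the ISO 42001 references are placeholders to be confirmed against your copy of the standard:

```python
# Illustrative cross-framework control mapping. The ISO 42001 entries are placeholders,
# not authoritative clause citations.
CONTROL_MAP = {
    "AI-LOG-001": {
        "description": "Automatic event logging for traceability of AI system use",
        "eu_ai_act": ["Article 12"],
        "nist_ai_rmf": ["Measure", "Manage"],
        "iso_42001": ["logging/traceability control (placeholder reference)"],
        "evidence": ["log retention policy", "sample audit log export"],
    },
    "AI-HITL-001": {
        "description": "Human review gate for adverse or low-confidence decisions",
        "eu_ai_act": ["Article 14"],
        "nist_ai_rmf": ["Govern", "Manage"],
        "iso_42001": ["human oversight control (placeholder reference)"],
        "evidence": ["reviewer workflow documentation", "override-rate report"],
    },
}
```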

The ISO 27001:2022 transition guide covers the information security foundation that ISO 42001 builds on — both share the Harmonised Structure and integrated management systems can cover both standards efficiently.

The 2 August 2026 work plan

If you are a provider or deployer of Annex III high-risk systems, the published timeline gives you only months of preparation time for an obligation set that realistically takes 32 to 56 weeks of work for an unprepared organisation. The work breakdown that actually fits the calendar:

Now through Q2 2026 — inventory and classification. Complete AI system inventory across production and development. Classification against Article 5 (prohibited), Article 6 (high-risk), Article 50 (transparency), and GPAI status. This is not a spreadsheet exercise — most organisations discover meaningfully more AI systems than they expected, and the classification of borderline Annex III systems is genuinely difficult. Budget four to eight weeks with security, legal, and product teams involved.

Q2 2026 — gap analysis per high-risk system. Against Articles 9 to 15 and Annex IV technical documentation requirements. Output is a prioritised remediation plan per system with effort estimates. Budget two to four weeks per high-risk system for the analysis.

Q2–Q3 2026 — technical and governance remediation. Logging uplift for Article 12. Human oversight product features for Article 14. Cybersecurity hardening for Article 15. Documentation build for Article 11/Annex IV. This is the longest phase — typically 12 to 20 weeks per system.

Q3 2026 — conformity assessment. Internal testing and validation. For Annex III systems not requiring notified body involvement, the provider self-assesses against the requirements. For systems requiring notified body involvement (essentially those involving biometric identification), notified body selection, assessment booking, and remediation of findings. Notified body capacity is constrained — bookings are already stretched into late 2026.

Q3 2026 — transparency implementation for Article 50. Chatbot disclosure, AI-generated content labelling, deepfake marking. Product-level changes required.
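
A minimal sketch of what the product-level change can look like for a chatbot; the metadata keys and disclosure wording are assumptions, since Article 50 requires the disclosure and the marking, not any particular format:

```python
# Minimal sketch: attach disclosure text and machine-readable marking to AI output.
# The keys and wording are assumptions; Article 50 requires the disclosure and the
# marking of AI-generated content, not this particular format.
def wrap_ai_output(text: str, model_version: str, is_chatbot: bool = True) -> dict:
    disclosure = ("You are interacting with an AI system."
                  if is_chatbot else
                  "This content was generated by an AI system.")
    return {
        "content": text,
        "ai_generated": True,          # machine-readable marking for downstream labelling
        "model_version": model_version,
        "disclosure": disclosure,      # surfaced to the user in the product UI
    }

print(wrap_ai_output("Your claim has been pre-assessed as eligible.", "support-bot-3.2"))
```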

2 August 2026 — application date. Enforcement begins.

Post-2 August 2026 — post-market monitoring and incident reporting. Ongoing obligation to monitor AI system performance in the field, report serious incidents, update technical documentation as systems evolve, and cooperate with market surveillance authorities.

Deployers face a compressed version of this timeline focused on their specific obligations — transparency to affected persons, human oversight during use, monitoring, and cooperation with providers. Deployers cannot rely on providers to meet their obligations for them.

The cross-pillar connections

AI Act compliance does not sit in isolation from other security and compliance programmes:

Third-party risk management becomes AI vendor risk management. Most organisations will be deployers of AI systems built by others. Vendor assessment needs to extend to AI-specific questions: training data governance, model cards, incident history, CE marking status, compliance posture. The third-party risk management framework covers the broader TPRM workflow into which AI vendor assessment fits.

Shadow AI undermines AI Act compliance from the first day. An organisation that has classified, documented and remediated its sanctioned AI systems has achieved nothing if employees are feeding sensitive data into personal ChatGPT accounts. Shadow AI detection and governance is the operational prerequisite for any meaningful AI Act compliance programme.

Software supply chain discipline extends to AI models. Annex IV technical documentation effectively demands a model bill of materials — what datasets, what base model, what fine-tuning, what dependencies. Organisations with mature SBOM and SCA practices are better positioned; those without are starting cold.
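
A sketch of such a model bill of materials record; the field names are assumptions rather than a schema from the Act (formats such as CycloneDX's ML-BOM profile aim to carry similar information):

```python
# Illustrative "model bill of materials" record; field names are assumptions,
# not a schema prescribed by Annex IV.
MODEL_BOM = {
    "model": "cv-ranker-2.1",
    "base_model": "third-party GPAI model (provider documentation on file)",
    "fine_tuning_datasets": [
        {"name": "internal-hr-outcomes-2019-2024", "governance_record": "DG-0042"},
    ],
    "evaluation_datasets": ["held-out-hr-benchmark-v3"],
    "dependencies": ["tokenizer-x.y", "serving-runtime-1.8"],
    "known_limitations": ["not validated for non-EU labour markets"],
}
```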

NIS2 and AI Act overlap on cybersecurity obligations. Organisations in scope of both the NIS2 directive and the AI Act face cybersecurity requirements from both. Sensible to use the same control set to evidence both where possible rather than running parallel programmes.

Penalties and why they will bite

The Act’s penalty regime is structured in three tiers under Article 99:

  • Up to €35 million or 7% of global annual turnover (whichever higher) for violations of prohibited practices under Article 5.
  • Up to €15 million or 3% of global annual turnover for violations of most other obligations including high-risk requirements.
  • Up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete or misleading information to authorities.

These are maxima — actual fines will be calibrated to the gravity of infringement, duration, impact, and organisation size. SMEs and startups face proportionally reduced maxima.

The penalty ceiling matters less than two other factors. First, market surveillance authorities will be able to require withdrawal of non-compliant AI systems from the market — for SaaS products, this is an immediate commercial impact dwarfing any fine. Second, deployer obligations apply regardless of provider compliance; you cannot shelter behind an upstream vendor’s failings. If you deploy a non-compliant high-risk system, you are exposed in your own right.

The honest assessment

The EU AI Act is a legal framework wrestling with technology that moves faster than any legal framework can be written. Parts of it will age poorly. The GPAI provisions have already been stress-tested by the pace of model releases. The Article 6(3) self-assessment route (where providers document that an Annex III system does not pose a significant risk) will be the first major litigation battleground. The Digital Omnibus proposal reflects acknowledgment inside the Commission that the original timeline outruns the supporting standards ecosystem.

None of this is an excuse to wait. Organisations that have treated AI governance as a real programme rather than a compliance exercise are finding the Act’s requirements substantive but manageable. Organisations that have not have discovered they need the inventory, the classification, the documentation, the human oversight, the logging, and the cybersecurity hardening whether the Act exists or not — the Act is making them do in 12 months what they should have been doing for the past three years.

For security teams specifically, the Act is an opportunity as much as an obligation. It puts AI risk on the board agenda, funds the tooling and documentation that was previously impossible to resource, and forces the treatment of AI systems as first-class objects in the security programme rather than features of other products. Use the pressure.

Frequently asked questions

When does the EU AI Act fully apply? The Act applies in phases: prohibited practices from 2 February 2025, GPAI rules and governance from 2 August 2025, Annex III high-risk obligations and Article 50 transparency from 2 August 2026, and Article 6(1) product-based high-risk systems from 2 August 2027. Annex X large-scale IT systems have until 31 December 2030.

Does the EU AI Act apply to non-EU companies? Yes. The Act applies to providers placing AI systems on the EU market regardless of where the provider is established, to deployers located in the EU, and to providers and deployers outside the EU whose AI system outputs are used in the EU. Non-EU providers must designate an EU authorised representative.

What is a high-risk AI system under the Act? Two categories: AI systems used as safety components of products regulated under Annex I EU harmonisation legislation (medical devices, toys, machinery, etc.), and AI systems used for one of the eight Annex III use cases (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration and border control, administration of justice). Annex III providers can self-document under Article 6(3) that a system does not pose significant risk.

What’s the difference between a provider and a deployer? A provider develops or has an AI system developed and places it on the market under its own name. A deployer uses an AI system under its authority in professional activity. Obligations differ significantly — providers carry the heaviest obligations around conformity assessment and technical documentation, while deployers focus on transparency to affected persons, human oversight, monitoring, and cooperation with providers.

What are the maximum penalties under the EU AI Act? Up to €35 million or 7% of global annual turnover (whichever higher) for prohibited practices violations. Up to €15 million or 3% of global annual turnover for violations of most other obligations including high-risk requirements. Up to €7.5 million or 1% of global annual turnover for supplying incorrect information to authorities.

How does the EU AI Act interact with GDPR? The AI Act operates alongside GDPR rather than replacing it. GDPR continues to govern processing of personal data by AI systems. The AI Act adds AI-specific obligations on top — data governance under Article 10, transparency under Article 50, human oversight under Article 14. Organisations will generally need both a DPIA (under GDPR) and an AI Act conformity assessment for high-risk AI systems processing personal data.

What is the Digital Omnibus proposal and how does it affect AI Act compliance? The Digital Omnibus is a legislative proposal from the European Commission in November 2025 that, among other things, proposes linking the application of high-risk AI system rules to the availability of harmonised standards. If adopted, parts of the high-risk regime could effectively defer until the standards ecosystem is ready. The proposal is under negotiation. Security teams should plan against published dates and treat any deferrals as upside.

Do we need both ISO 42001 certification and EU AI Act compliance? ISO 42001 certification is voluntary; EU AI Act compliance is legally binding for in-scope activities. They are complementary — ISO 42001 provides a certifiable management system structure; the AI Act imposes specific legal obligations. Many organisations will pursue both, using ISO 42001 as the evidence framework for AI Act obligations.

What happens if our AI vendor isn’t compliant? Deployer obligations apply independently of provider compliance. You cannot shelter behind an upstream vendor’s failings — if you deploy a non-compliant high-risk AI system, you are exposed in your own right. Vendor risk assessment must include AI Act compliance status, and contracts should include representations, cooperation, and indemnity provisions specific to AI Act obligations.