
The State of AI Security 2026: Threats, Defences, Governance and the Agentic Shift

The state of AI security in 2026: deepfake fraud, LLM attacks, the agentic SOC, AI governance, and the enterprise attack surface reshaped by autonomous agents.

AI security in 2026 is no longer a forward-looking discipline. The attack surface has arrived, the defensive vendor landscape has consolidated enough to have winners and losers, regulation is live with more on the way, and the two categories that barely existed eighteen months ago — agentic AI governance and AI-native security operations — are now where most of the interesting budget is moving.

The framing that defines this year, and that most CISOs are still catching up to, comes from Gartner’s 2026 top cybersecurity trends, published on 5 February 2026. The top trend, explicitly named, is Agentic AI Demands Cybersecurity Oversight. The fourth trend is Identity and Access Management Adapts to AI Agents. The third is Post-Quantum Cryptography, which sits at the intersection of AI security and cryptographic modernisation. Three of the top six cybersecurity trends for 2026, on the analyst research most enterprise CISOs read, are about AI or sit directly adjacent to it.

Gartner’s updated information security forecast projects the AI-amplified security market reaching $160 billion by 2029, up from $49 billion in 2025, and over 75% of enterprises using AI-amplified cybersecurity products by 2028, up from less than 25% in 2025. Worldwide information security spending is projected at $244.2 billion in 2026. The scale is not the story. The direction is: AI security has become the centre of gravity of the cybersecurity market, not an adjacent specialisation.

This is the honest state of AI security as of mid-2026, across five areas that define the year: the threat landscape, the defensive vendor landscape, agentic AI and the governance gap, AI in the SOC, and the regulatory picture.

The AI security landscape at a glance

| Domain | Maturity in 2026 | Primary attacker behaviour | Primary defensive capability | Direction of travel |
| --- | --- | --- | --- | --- |
| Deepfake-enabled BEC | Operational, frequent | Voice cloning, synthetic video, CEO fraud | Callback verification, deepfake detection tooling | Scaling fast; defence lagging |
| LLM / GenAI attacks | Emerging, high-impact | Prompt injection, jailbreaks, data exfiltration | LLM guardrails, runtime detection, AI-DLP | Rapidly maturing |
| Agentic AI threats | Early, high-potential | Agent hijacking, credential abuse, tool misuse | Agent identity management, scoped authorisation | Most under-defended category |
| AI-augmented ransomware | Confirmed, widening | AI-driven reconnaissance, adaptive payloads | Behavioural EDR/XDR, identity-first defence | Scaling attacker side faster than defender |
| Shadow AI | Pervasive | Inadvertent data leakage via personal GenAI | AI discovery, policy-driven DLP, sanctioned alternatives | Governance catching up |
| AI defence platforms | Mature category | N/A | Darktrace, Vectra AI, SentinelOne Purple AI, Microsoft Security Copilot | Consolidating |
| AI-SOC automation | Emerging, hyped | N/A | Torq, Palo Alto XSIAM, Microsoft Security Copilot, Google SecOps | 30% of Tier-1 claim is marketing; reality is 10–15% |
| PQC migration | Strategic, underway | N/A | NIST-standardised PQC algorithms, crypto agility | Quantum threat 2030; “harvest now, decrypt later” is current |

The threat landscape: what attackers are actually doing with AI

Three attack patterns have moved from possibility to production in the past twelve months. They are worth separating carefully because the defensive response to each is different.

Deepfake-enabled BEC is the operational attack pattern of 2026. Voice cloning has dropped from requiring specialist expertise to being available in commodity tools. Synthetic video is not yet at parity with synthetic voice, but it is closer than most organisations assume. The FBI’s IC3 reporting continues to show BEC as a top-three loss category, and the qualitative shift is that AI-generated variants are now the growth segment. Gartner’s 2025 survey found 62% of organisations experienced a deepfake attack, and 32% faced an attack on AI applications directly.

The attacker pipeline is unglamorous and effective: scrape LinkedIn and public sources for a target executive, synthesise a minute or two of their voice from any public audio (earnings call, conference keynote, podcast), and use it to authorise a wire transfer or a credential reset via a help desk. The success rate is high because the organisational controls against voice-based social engineering were designed when voice cloning was expensive and rare.

Defensive response is not complicated but requires discipline. Out-of-band callback verification to a pre-established number, not whichever number the caller specifies. Transaction limits that require multi-person authorisation above thresholds. Help desk scripts that include identity verification that voice alone cannot defeat. Deepfake detection tooling from Reality Defender, Pindrop, Hive AI, and others is useful for high-risk channels but is not a substitute for process controls. Our deepfake voice fraud defence guide covers the attack pipeline and the specific defensive stack in detail.
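
To make the process controls concrete, here is a minimal sketch of the kind of check a payments workflow could enforce: callback to the pre-registered number plus independent multi-person approval above a threshold. The threshold, field names, and workflow shape are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the process controls described above:
# out-of-band callback to a pre-registered number plus multi-person
# authorisation above a monetary threshold. Thresholds and field names
# are invented for this example.

MULTI_APPROVAL_THRESHOLD = 50_000  # above this, two independent approvers are required

@dataclass
class TransferRequest:
    amount: float
    requested_by: str
    callback_number_used: str          # number the verifier actually dialled
    registered_callback_number: str    # number on file before the request existed
    approvers: set[str] = field(default_factory=set)

def may_execute(req: TransferRequest) -> tuple[bool, str]:
    # Callback must go to the pre-established number, never one supplied
    # in the request itself -- this is what defeats a cloned voice.
    if req.callback_number_used != req.registered_callback_number:
        return False, "callback was not made to the pre-registered number"
    # The requester can never be one of their own approvers.
    independent_approvers = req.approvers - {req.requested_by}
    required = 2 if req.amount >= MULTI_APPROVAL_THRESHOLD else 1
    if len(independent_approvers) < required:
        return False, f"needs {required} independent approval(s)"
    return True, "ok"
```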

LLM and GenAI attacks have consolidated around the OWASP LLM Top 10. Prompt injection is the dominant attack class — both direct (where an attacker supplies malicious prompts via the input channel) and indirect (where malicious instructions are embedded in data the LLM processes, like a webpage, document, or email). Jailbreaks, data exfiltration via model inversion, and training data poisoning round out the top concerns.

The sharp question for most enterprises is not “are we exposed to LLM attacks” but “which of our LLM deployments has a meaningful attack surface, and what is it connected to?” A customer support chatbot with access to a knowledge base is one threat model. A coding assistant with repository access is another. An agentic system with tool-use capability that can send email, create tickets, or write code is a fundamentally different threat model — which brings us to agentic AI.

Microsoft’s documented Copilot Studio prompt injection incident, since patched but with data exfiltration demonstrated, makes the structural point clearly: AI agent credentials live in the same logical scope as untrusted input. When the agent has tool-use authority, the attacker does not need to compromise the agent; they need to supply input that the agent acts on. Our prompt injection defence guide covers the technical taxonomy and the vendor landscape for LLM security.
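
One mitigation pattern that follows from this point is to treat any tool call influenced by untrusted content as unconfirmed until a human approves it. The sketch below illustrates that control boundary; the agent runtime, provenance flag, and tool names are hypothetical, and a gate like this is one layer, not a complete prompt injection defence.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: tool calls that were influenced by untrusted content
# (web pages, inbound email, attached documents) require human confirmation
# before execution. The provenance flag is assumed to be set by the agent
# runtime; the tool registry and names are invented for illustration.

@dataclass
class ToolCall:
    tool_name: str
    arguments: dict
    influenced_by_untrusted_input: bool  # provenance tracking supplied by the runtime

SAFE_TO_AUTORUN = {"search_knowledge_base", "summarise_document"}  # read-only tools

def execute_tool_call(call: ToolCall,
                      tools: dict[str, Callable[..., str]],
                      ask_human: Callable[[ToolCall], bool]) -> str:
    if call.tool_name not in tools:
        raise ValueError(f"unknown tool: {call.tool_name}")
    # Side-effecting tools triggered off untrusted context never run unattended.
    needs_confirmation = (call.influenced_by_untrusted_input
                          and call.tool_name not in SAFE_TO_AUTORUN)
    if needs_confirmation and not ask_human(call):
        return "tool call blocked pending human review"
    return tools[call.tool_name](**call.arguments)
```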

AI-augmented ransomware is now documented, not speculative. Intelligence reporting from multiple sources through Q1 2026 confirms that Akira, Qilin, and Scattered Spider (operating under the combined “Scattered LAPSUS$ Hunters” banner since August 2025) have integrated AI agents into their attack pipelines. The FBI’s Cyber Division reported AI-assisted intrusions increased 340% year-over-year in 2025. Mandiant’s M-Trends 2026 report documents that the median time between initial compromise and access hand-off to a follow-on operator has collapsed from over eight hours in 2022 to 22 seconds in recent cases, facilitated by automation that increasingly includes AI.

The attacker’s AI stack is straightforward: scrape LinkedIn, GitHub, company websites, and job postings to build target profiles at scale. Generate hyper-personalised spear-phishing content that evades pattern-matching defences. Identify vulnerabilities in observed tech stacks. Prioritise data for exfiltration. The productivity gain is real — the economics of the attacker side have shifted, and defenders have not fully adjusted.

What works defensively is not novel but is now more important: behavioural EDR/XDR that catches anomalies post-compromise, identity-first defence that assumes some users will be socially engineered, immutable backups that survive a full encryption event, and tested incident response. Our state of ransomware 2026 hub covers the ransomware-specific defensive posture in depth.

The defensive AI platform landscape

The AI-native security platform market has matured enough to have a defensible top tier, a credible second tier, and a long tail of acquired-into-stack players and startups. The top tier in 2026, based on a combination of detection efficacy in independent testing, enterprise deployment scale, and genuinely AI-native architecture (as opposed to an ML layer bolted onto a signature-based engine), looks like this:

Darktrace remains the category-defining AI-native detection platform, with eleven years of product maturity and the deepest per-customer behavioural baselining. The criticism that Darktrace produces high alert volumes compared to traditional signature systems is well-documented and, in our view, partly valid and partly the inherent cost of anomaly-based detection. What has changed in 2026 is Darktrace’s Cyber AI Analyst — their agentic triage layer — which has moved from marketing claim to operational product for a meaningful number of customers.

Vectra AI holds the strong second position, particularly in network detection and response. Vectra’s structured approach to attack progression (reconnaissance, lateral movement, command and control, exfiltration) maps cleanly onto MITRE ATT&CK and produces more interpretable detections than Darktrace for SOC teams transitioning from signature-based tooling.

SentinelOne Purple AI represents the endpoint-native entry. SentinelOne’s advantage is that the AI layer sits on top of a strong EDR foundation, so buyers are not choosing between AI detection and endpoint coverage. The platform’s agentic triage capability has developed rapidly over the past year.

Microsoft Security Copilot is the wildcard. Microsoft’s advantages — Entra ID data, Microsoft 365 telemetry, Sentinel SIEM integration, Defender suite integration — are structural and significant. Customers already deep in the Microsoft security stack find Security Copilot increasingly compelling. Customers with multi-vendor stacks find the Microsoft-centric integration patterns limiting.

Palo Alto Networks Prisma AIRS and XSIAM complete the first tier. Palo Alto’s strategy of bundling AI security capabilities into a broader platform narrative works for customers consolidating vendors; it is less compelling for customers trying to evaluate AI security capability in isolation.

Take a clear position: the AI defence platform decision is increasingly a platform decision, not a product decision. Buying Darktrace because you want the best AI detection makes sense when you are running a heterogeneous environment with multiple existing security tools and you want a detection overlay. Buying Microsoft Security Copilot makes sense when you are already committed to Microsoft’s security stack. Buying SentinelOne Purple AI makes sense when you want AI-native capability integrated with your EDR. The question “which AI security platform is best” is less useful than “which platform fits my existing architecture and operating model.”

Our AI platform comparison walks through the detailed evaluation criteria, including honest efficacy assessment — no vendor delivers the 99%+ detection rate the marketing implies, and the meaningful comparisons are about tuning burden, alert quality, and integration depth.

The second tier deserves brief mention because buyers increasingly encounter these vendors in procurement. HiddenLayer and Protect AI lead the LLM-specific security category, with a focus on model theft, adversarial ML detection, and AI supply chain integrity. Lakera and Robust Intelligence compete on LLM runtime guardrails — the layer that sits between user input and the LLM, catching prompt injection, jailbreaks, and data leakage in real time. Cisco’s Robust Intelligence acquisition in 2024 has shifted the competitive picture here; expect further consolidation. On the network side, ExtraHop and Corelight sit adjacent to Darktrace and Vectra with strong AI-amplified NDR capability, particularly for organisations that prioritise network visibility over endpoint coverage.

The efficacy conversation is where most buyer conversations go sideways. Every vendor in this space publishes detection efficacy numbers — 99.7%, 99.9%, “near-perfect” — and those numbers are functionally useless for comparison. They are measured on different datasets, with different definitions of detection success, different false positive thresholds, and different operational assumptions. The numbers that actually matter for a buyer are alert-to-investigation ratio (how many alerts do your analysts have to chase before finding a true positive), mean time to tune (how long before the platform is producing useful output in your environment), and integration quality with your existing stack. Independent testing from MITRE Engenuity’s ATT&CK Evaluations and, for specific product categories, from AV-Comparatives and SE Labs, provides more useful comparative data than vendor-supplied numbers — though even those tests have known limitations.

The uncomfortable truth is that the AI defence platform decision is increasingly about which vendor’s tuning burden and operating assumptions fit your SOC capacity, not which vendor has the “best” AI. A platform that produces 200 high-confidence alerts per day is worse than a platform that produces 20 alerts with 15 true positives, even if the first platform has technically higher recall. The buyers who ask the right questions — “what does a typical customer’s alert volume look like after six months?” and “what percentage of alerts result in confirmed incidents?” — get more useful answers than the buyers who demand competitive efficacy comparisons.
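
The arithmetic behind that comparison is worth making explicit. The sketch below uses the two hypothetical platforms from the paragraph above, with an assumed true-positive count for the noisier one, to show why the alert-to-investigation ratio is the number to ask for.

```python
# Illustrating the comparison above. The true-positive count for the noisy
# platform (20/day) is an assumption made for the example; the 200 and the
# 20-with-15 figures come from the paragraph above.

def alert_quality(alerts_per_day: int, true_positives_per_day: int) -> dict:
    precision = true_positives_per_day / alerts_per_day
    return {
        "alerts/day": alerts_per_day,
        "true positives/day": true_positives_per_day,
        "alerts chased per confirmed incident": round(alerts_per_day / max(true_positives_per_day, 1), 1),
        "precision": round(precision, 2),
    }

platform_a = alert_quality(alerts_per_day=200, true_positives_per_day=20)  # noisier, higher raw volume
platform_b = alert_quality(alerts_per_day=20, true_positives_per_day=15)   # lower volume, cleaner

# Platform A: 10 alerts chased per confirmed incident (precision 0.10).
# Platform B: 1.3 alerts chased per confirmed incident (precision 0.75).
# For a capacity-bound SOC, B is the better operational fit even if A
# technically catches more in absolute terms.
```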

Agentic AI: the governance gap that will define 2026 and 2027

If one section of this hub matters more than the rest, it is this one. Agentic AI — AI systems that can take autonomous actions, use tools, maintain state across interactions, and operate without continuous human supervision — is the most significant shift in the enterprise security landscape since the cloud transition.

Gartner’s numbers tell the direction clearly: 40% of enterprise applications will be integrated with task-specific AI agents by end of 2026, up from less than 5% in 2025. Gartner’s best-case projection has agentic AI driving approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion. Palo Alto Networks has publicly cited an 82:1 agent-to-human user ratio in some enterprise environments.

And the governance picture: most organisations cannot currently produce an inventory of AI agents in use. They cannot distinguish sanctioned agents from unsanctioned ones. Their identity and access management is built for human users and machine-to-machine service accounts, not for autonomous agents making decisions within policy boundaries. Their incident response playbooks do not cover the scenarios where an agent makes a wrong call or is socially engineered via prompt injection.

Take the position: agentic AI is the most under-defended high-consequence attack surface in enterprise IT in 2026. The sophistication of the defence is roughly twelve to eighteen months behind where it needs to be, and the attacker understanding of agent architectures is catching up faster than defender understanding.

What a functional agentic AI governance program looks like — and this is the argument we develop in full in our agentic AI security CISO playbook:

  1. Inventory. An auditable register of every AI agent in use across the organisation, including shadow agents deployed by business units or individuals without IT involvement.

  2. Identity. Every agent has a first-class identity in the IAM system. Not a shared service account, not a human credential, not an API key in a config file. Gartner’s Top 2026 Trend #4 is explicitly about this: “IAM adapts to AI agents.” Machine identity platforms — Aembit, Astrix, Andromeda, CyberArk’s machine identity offering, SailPoint — are where this capability is emerging. Our AI agent identity management guide covers the vendor landscape.

  3. Scoped authorisation. Agents operate with least-privilege, with the privilege scoped per-task rather than per-session where possible. An agent that schedules meetings does not have read access to the inbox, let alone write access. An agent that writes pull requests does not have production deployment rights.

  4. Observability. What the agent did, why it did it, what data it accessed, what tools it invoked — all logged, ideally in a format that supports both forensic investigation and ongoing behavioural monitoring.

  5. Tool-use security. For agents with tool-use capability, the tool interfaces themselves need to be scoped. If the agent can call a “send email” tool, the tool should enforce recipient whitelists, subject-line policies, and content filters — not just trust whatever the agent asks for (see the sketch after this list).

  6. Incident response. Playbooks that cover the specific agent incident scenarios: hijacked agent, prompt-injected agent, agent making a policy-violating decision, agent being used as a pivot by an attacker.

  7. Reversibility. Where possible, agent actions should be auditable and reversible. An agent that mistakenly approves an expense report should generate a reversible action, not a final decision.
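
A rough sketch of how items 3 to 5 compose in practice, using the “send email” tool from item 5: the tool interface enforces its own scope check and recipient allowlist, and every invocation is logged against the agent’s identity. The agent context, scope names, and allowlist are illustrative assumptions, not any specific product’s API.

```python
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of items 3-5: the "send email" tool enforces its own
# recipient allowlist and scope check, and logs every invocation against the
# agent's first-class identity. Agent IDs, scope names, and domains are invented.

audit_log = logging.getLogger("agent.tooluse")

@dataclass
class AgentContext:
    agent_id: str                 # first-class identity, not a shared service account
    granted_scopes: set[str]      # e.g. {"email:send:internal"}, scoped per task

ALLOWED_RECIPIENT_DOMAINS = {"example.com"}   # policy lives in the tool, not the agent

def send_email_tool(ctx: AgentContext, to: str, subject: str, body: str) -> str:
    decision = "allowed"
    if "email:send:internal" not in ctx.granted_scopes:
        decision = "denied: missing scope"
    elif to.split("@")[-1].lower() not in ALLOWED_RECIPIENT_DOMAINS:
        decision = "denied: recipient outside allowlist"

    # Observability: who did what, with which inputs, and what the policy decided.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": ctx.agent_id,
        "tool": "send_email",
        "recipient": to,
        "subject": subject,
        "decision": decision,
    }))

    if decision != "allowed":
        return decision
    # deliver_email(to, subject, body)  # actual delivery stubbed out in this sketch
    return "sent"
```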

None of this is novel in concept. It is how you would govern a junior employee in a high-stakes role. What is novel is the scale at which it has to operate — thousands of agents per organisation, not dozens of new hires — and the pace at which agent platforms are proliferating through no-code and low-code tools.

Shadow AI: the governance baseline

Before an organisation can govern sanctioned AI agents, it has to understand the unsanctioned ones. Gartner’s 2026 research finds that 57% of employees use personal GenAI accounts for work, and 33% of them admit to uploading sensitive data to tools their security teams have not approved.

Shadow AI is not a new category of problem. It is shadow IT with a larger attack surface. The specific risks that distinguish it from generic shadow SaaS are:

  • Data that is input to a personal GenAI account may be used for training, may be visible to vendor employees, may leak via vendor breaches, and may cross jurisdictional boundaries without oversight.
  • The output from shadow GenAI tools is increasingly being acted on in business-critical contexts without validation.
  • Employees using shadow AI tools often cannot articulate what data they have shared, which makes breach response harder.

The defensive capabilities for shadow AI detection and governance come from an adjacent part of the security stack: SASE/SSE platforms (Netskope, Zscaler, Palo Alto Networks, Cloudflare), CASB, and dedicated AI-DLP tools (Harmonic Security, Nightfall, Metomic). Our shadow AI detection and governance guide covers the vendor landscape and the policy design question.
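
For a sense of what the AI-DLP layer does mechanically, here is a deliberately simplified sketch of an outbound-prompt policy check. The endpoint list, regex patterns, and actions are invented for illustration; commercial tools rely on ML-based classification and far broader data-type coverage rather than a handful of regexes.

```python
import re

# Simplified sketch of the policy check an AI-DLP or SSE proxy applies to
# prompts bound for GenAI endpoints. Endpoints, patterns, and actions are
# invented for illustration only.

SANCTIONED_ENDPOINTS = {"chat.internal-ai.example.com"}

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_prompt(destination_host: str, prompt: str) -> dict:
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if destination_host in SANCTIONED_ENDPOINTS:
        action = "allow"              # sanctioned tool with data governance built in
    elif findings:
        action = "block"              # sensitive data headed to an unsanctioned endpoint
    else:
        action = "allow_and_log"      # unsanctioned, but no detected sensitive data
    return {"action": action, "findings": findings, "destination": destination_host}

print(evaluate_prompt("genai.example.net", "Summarise contract, card 4111 1111 1111 1111"))
# -> {'action': 'block', 'findings': ['credit_card'], 'destination': 'genai.example.net'}
```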

The position worth taking: shadow AI is best addressed by providing sanctioned AI tools that are as capable and accessible as the shadow alternatives, with data governance built in. Prohibition without alternatives produces policy violations; alternatives without prohibition produce organised shadow usage. Both need to happen.

AI in the SOC: the 30% claim reality check

One of the most marketed AI security narratives of 2026 is that agentic AI will handle 30% of Tier-1 SOC workflows by end of year. Microsoft, Palo Alto Networks, and others have made variants of this claim publicly, often attached to specific products with genuine AI-native SOC automation capability: Microsoft Security Copilot, Palo Alto XSIAM, Google Security Operations (formerly Chronicle), and Torq.

Our read on the reality, from speaking with practitioners and looking at deployment data, is different from the marketing. The 30% claim describes what agentic AI can handle in a well-instrumented environment with good data quality, mature playbook automation, and tight integration between the agent layer and the underlying telemetry. The actual realised figure in mid-2026 deployments is closer to 10% to 15% of Tier-1 workflows, trending upward as environments mature.

This is not a criticism of the vendors. The 30% number is a forward-looking statement that will probably be accurate in eighteen to twenty-four months. It is a mis-setting of expectation when used for near-term ROI modelling.

What is genuinely working today:

  • Triage assistance. An agent that summarises an alert, pulls related context from SIEM and EDR, and drafts an initial finding for an analyst to review. This is where most real productivity is landing.
  • Phishing email investigation. Parsing, sandboxing, IOC extraction, and user-notification generation are now reliably automatable.
  • IOC pivoting. Given one indicator, finding related indicators across the environment and enriching them from threat intelligence.
  • Log search translation. Natural-language-to-SIEM-query is a real productivity gain for analysts who are not SIEM experts (a minimal sketch follows this list).
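
A minimal sketch of the log-search-translation pattern from the last bullet: the model drafts a query from the analyst's request, and the integration layer validates the draft against a field allowlist before anything runs against the SIEM. The llm_complete callable, query dialect, and schema are stand-ins, not any vendor's actual interface.

```python
import re
from typing import Callable

# Sketch of natural-language-to-SIEM-query translation. `llm_complete` stands
# in for whatever model API is in use; the field schema and the query dialect
# are invented for illustration. The model's output is treated as untrusted
# until it passes validation.

ALLOWED_FIELDS = {"src_ip", "dst_ip", "user", "event_type", "timestamp"}

SCHEMA_HINT = (
    "Translate the analyst request into a single filter expression using only "
    f"these fields: {', '.join(sorted(ALLOWED_FIELDS))}. "
    "Use the form: field=value AND field=value. Return only the expression."
)

def translate_to_query(request: str, llm_complete: Callable[[str], str]) -> str:
    draft = llm_complete(f"{SCHEMA_HINT}\n\nAnalyst request: {request}").strip()
    # Reject anything referencing fields outside the allowlist.
    used_fields = set(re.findall(r"(\w+)\s*=", draft))
    unknown = used_fields - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"query references unapproved fields: {sorted(unknown)}")
    return draft

# Example with a stubbed model:
fake_llm = lambda prompt: "user=jsmith AND event_type=failed_login"
print(translate_to_query("show failed logins for jsmith", fake_llm))
```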

What is not yet working at scale:

  • Autonomous incident closure. Agents that decide an alert is benign without human sign-off remain rare in production. The false-positive cost is still too high.
  • Cross-platform correlation across heterogeneous stacks. Agents work best in homogeneous environments; they struggle with the “three EDRs, two SIEMs, legacy IDS” reality of many large enterprises.
  • Novel attack detection. Agents are strong on known attack patterns, weaker on genuinely novel ones. This is a current limitation of LLM-based reasoning, not a deployment issue.

Our agentic SOC deep-dive covers the vendor landscape (Microsoft Security Copilot, Torq, Palo Alto XSIAM, Google SecOps, Exabeam) and the realistic deployment roadmap. The short guidance: start with triage assistance, measure the time savings honestly, expand to playbook automation where playbooks are mature, and treat autonomous-action-taking as a 2027 and beyond capability.

The regulatory picture

AI security regulation in 2026 is the most active regulatory area in enterprise technology. The main milestones:

EU AI Act reaches its major application date on 2 August 2026, when most of the regulation applies — including the full compliance framework for high-risk AI systems defined in Annex III. Prohibited AI practices and AI literacy obligations have been applicable since 2 February 2025. Obligations for providers of general-purpose AI models applied from 2 August 2025. The 2 August 2027 date applies to AI as safety components in regulated products (Annex I systems) and marks the end of the grandfathering period for general-purpose AI models placed on the market before 2 August 2025.

For security teams specifically, the EU AI Act’s Article 15 requirement that high-risk AI systems be designed to achieve “an appropriate level of accuracy, robustness, and cybersecurity” is the provision that pulls security directly into scope. Security teams that are not integral to the AI governance program are going to discover this the hard way in the second half of 2026. Our EU AI Act compliance guide for security teams covers the phase-by-phase obligations in detail.

NIST AI Risk Management Framework (AI RMF) remains the most operational framework for enterprise AI risk management. NIST has continued to develop profile-specific guidance (the Generative AI Profile, most notably). AI RMF is voluntary, but it is the most widely adopted framework by US enterprises and is increasingly referenced in procurement requirements and customer due diligence.

ISO/IEC 42001 is the international standard for AI management systems. Released in December 2023, it has been gaining certification traction through 2025 and 2026 as the external attestation complement to NIST AI RMF’s operational framework. Most mature AI governance programs now run NIST AI RMF as the operating model and certify to ISO 42001 for external assurance.

Sector-specific regulation is emerging. Financial services face AI model risk management requirements via SR 11-7 in the US, various national regulator guidance in the EU, and specific DORA provisions. Healthcare faces FDA AI/ML guidance in the US and EMA equivalents in the EU. Employment AI faces growing state-level regulation in the US (New York City Local Law 144 on automated employment decision tools, the Illinois AI Video Interview Act, Colorado SB205) and is classified as high-risk under Article 6 and Annex III of the EU AI Act.

Take the position: the organisations that will age well through AI regulation are the ones running NIST AI RMF + ISO 42001 as their baseline, with EU AI Act mapping layered on top for EU exposure. Treating each regulation as a separate program is inefficient and produces gaps. The frameworks are compatible by design; the operational programs should be too.

Post-quantum cryptography: the slow-moving wave

Post-quantum cryptography (PQC) does not belong only in the AI security hub — it is a cryptographic modernisation story, not an AI story — but it sits close enough that the adjacency matters. Gartner named PQC as the third top cybersecurity trend for 2026 and predicts that advances in quantum computing will render asymmetric cryptography unsafe by 2030.

The 2024 finalisation of the first NIST PQC standards (ML-KEM for key encapsulation, ML-DSA for digital signatures, SLH-DSA for stateless hash-based signatures) has moved PQC from research topic to implementation concern. The phrase that has taken hold is “harvest now, decrypt later” — the observation that attackers with the patience and storage to collect encrypted traffic today can decrypt it when quantum capability arrives, which makes the PQC migration timeline for long-lived sensitive data (financial records, health data, intelligence) much tighter than the nominal 2030 horizon.

The enterprise response looks roughly like this in 2026:

  1. Cryptographic inventory. Most organisations cannot currently produce a complete list of where asymmetric cryptography is in use across their estate.
  2. Crypto agility. Architectures that allow algorithm swaps without rewriting applications (see the sketch after this list).
  3. Vendor alignment. Verifying that critical infrastructure and SaaS vendors have PQC roadmaps.
  4. Prioritisation of long-lived data. The sensitive data with a useful life beyond 2030 migrates first.
  5. Selective hybrid deployment. Production PQC deployment in specific high-sensitivity channels, with hybrid modes that combine PQC and classical algorithms during transition.
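
Crypto agility (item 2) is easiest to picture as an abstraction-and-registry pattern: application code asks for a named policy, and the binding from policy to algorithm lives in configuration. The sketch below is illustrative only; the suite names are indicative and the implementations are stubs that a real deployment would bind to a vetted cryptography library rather than hand-rolled code.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of the crypto-agility idea: application code requests a named policy
# rather than hard-coding an algorithm, so moving from classical to hybrid to
# pure post-quantum key establishment is a registry change, not an application
# rewrite. Implementations are stubbed -- bind these names to a vetted crypto
# provider in a real deployment.

@dataclass
class KemSuite:
    name: str
    encapsulate: Callable[[bytes], tuple[bytes, bytes]]   # public_key -> (ciphertext, shared_secret)

def _stub(name: str) -> Callable[[bytes], tuple[bytes, bytes]]:
    def encapsulate(public_key: bytes) -> tuple[bytes, bytes]:
        raise NotImplementedError(f"bind {name} to your crypto provider")
    return encapsulate

REGISTRY = {
    "classical": KemSuite("x25519", _stub("x25519")),
    "hybrid":    KemSuite("x25519+ml-kem-768", _stub("x25519+ml-kem-768")),
    "pqc":       KemSuite("ml-kem-768", _stub("ml-kem-768")),
}

# Policy lives in configuration; flipping it migrates every caller at once.
ACTIVE_POLICY = "hybrid"

def establish_session_key(peer_public_key: bytes) -> tuple[bytes, bytes]:
    suite = REGISTRY[ACTIVE_POLICY]
    return suite.encapsulate(peer_public_key)
```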

Our post-quantum cryptography enterprise roadmap covers this in the detail it deserves.

The twelve-month forward view

Five developments to track through the remainder of 2026:

  1. EU AI Act application on 2 August. Expect a scramble in Q2 and early Q3, followed by a cautious enforcement posture for the remainder of 2026. The first material enforcement actions will likely emerge in 2027.

  2. Agentic AI governance consolidation. The vendor landscape in AI agent identity and governance — Aembit, Astrix, Andromeda, plus the CyberArk, SailPoint, Okta responses — will consolidate. Expect two or three acquisitions in this space through late 2026 and 2027.

  3. AI-SOC deployment reality. The gap between the 30% Tier-1 automation claim and deployed reality will become harder to hide. Expect more measured vendor messaging by late 2026, and customer case studies that actually describe realised metrics rather than projected ones.

  4. Deepfake fraud scaling. The attacker economics are favourable and the defensive controls are under-deployed. Expect continued BEC loss growth with AI-generated variants as the growth segment.

  5. Shadow AI governance becoming mandatory. Currently optional in most organisations, shadow AI discovery and governance will become expected baseline by end of 2026, driven by a combination of EU AI Act scope determination requirements and internal data governance pressure.

The AI security program that ages well

The programs that will still be functioning in 2028 will share a small number of characteristics. They will have real inventories — of AI systems, AI agents, and AI tool use — not assumptions. They will treat AI agents as first-class identities with scoped authorisation, not as extensions of human users or service accounts. They will have AI defence platform coverage matched to the underlying security stack rather than layered awkwardly on top. Their SOC will use AI for triage and enrichment at scale, with human decision-making retained for the decisions that matter. They will run NIST AI RMF + ISO 42001 as the baseline, with specific regulatory mappings on top.

The programs that will not age well are the ones that treated AI security as a 2026 project — something to complete by August, and then return to the regular work. The work is the regular work now. The attack surface is growing faster than the defensive maturity. The regulatory posture is tightening. The budget trajectory is moving toward AI-amplified security at pace. This is the frontier that will define cybersecurity for the rest of the decade, and the posture organisations take over the next twelve months will determine how well they weather the following four years.

Frequently asked questions

Is agentic AI really the top 2026 cybersecurity concern, or is this vendor marketing? It is the top Gartner-identified trend, published 5 February 2026, and the underlying rationale holds: AI agents are proliferating rapidly via no-code platforms, they create new attack surfaces, and most organisations do not have the IAM, governance, or incident response playbooks to manage them. The vendor marketing around agentic AI is loud, but the underlying risk shift is real.

Which AI security platform should we buy? The question does not have a universal answer. Darktrace for heterogeneous environments wanting strong network-level AI detection. SentinelOne Purple AI for endpoint-integrated AI capability. Microsoft Security Copilot if you are deep in the Microsoft security stack. Vectra AI for NDR with interpretable detections. Palo Alto Prisma AIRS for consolidation into a platform vendor. The right answer depends on your existing stack and operating model more than on head-to-head comparison data.

Is the 30% AI-SOC automation claim real? It is a projection, not a current-state measurement. Realistic near-term automation is closer to 10% to 15% of Tier-1 workflows, with triage assistance and phishing investigation as the areas where productivity gains are most reliably realised. The 30% figure will probably be accurate in eighteen to twenty-four months in mature environments.

What is the single highest-value AI security control we could add? For most organisations, it is extending identity and access management to AI agents as first-class identities. Most other AI security controls depend on this foundation, and most organisations do not currently have it.

Do we need to comply with the EU AI Act if we are not EU-based? If your AI system is placed on the EU market, or its output is used in the EU, probably yes. The extraterritorial scope mirrors GDPR. Non-EU providers of AI systems used in the EU are directly in scope.

What is the relationship between NIST AI RMF, ISO 42001, and the EU AI Act? NIST AI RMF is a voluntary operational framework. ISO 42001 is an international management-system standard that you can certify against. The EU AI Act is regulation. Mature programs use NIST AI RMF as the operating model, ISO 42001 as the certifiable attestation, and EU AI Act mapping as the regulatory compliance layer.

Is post-quantum cryptography urgent or can we defer? For most organisations, the right posture is to start the cryptographic inventory now, invest in crypto agility, and prioritise migration for data with a useful life beyond 2030. The “harvest now, decrypt later” risk is real for high-value long-lived data. For most general enterprise use, the migration window runs through the next five to seven years.


This is the State of AI Security 2026 hub. It is refreshed quarterly because this is the part of the cybersecurity landscape that moves fastest. For deep dives on each topic, see our AI security category. For how we cover this beat, see our editorial standards page.