AI Security

AI Security

Agentic AI governance, LLM security, AI-powered defence, and the reshaping of the SOC.

The annual state-of hub

Permanent URL · Annually refreshed

Pillar coverage

Sub-categories

6 areas of coverage
01 AI Threats

Deepfakes, AI-generated phishing, AI-enabled malware, and the offensive side of generative AI that security teams are actually seeing in the wild.

View all AI Threats articles
02 AI Defence

Darktrace, Vectra, SentinelOne Purple AI, and the rest of the AI-native detection and response vendors — what the technology actually does versus what the marketing claims.

View all AI Defence articles
03 LLM Security

Prompt injection, OWASP LLM Top 10, enterprise LLM gateway controls, and the hardening patterns that apply to both in-house and hosted models.

View all LLM Security articles
04 Agentic AI Security

Gartner's top 2026 trend. Identity, access, tool-use control, and auditability for non-human agents running inside enterprise environments.

View all Agentic AI Security articles
05 AI-Powered SOC

Tier 1 analyst automation, agentic triage, and whether the AI SOC actually reduces alert fatigue or just relocates it.

View all AI-Powered SOC articles
06 AI Governance

EU AI Act and NIST AI RMF implementation — shared with the Compliance category, framed from a security perspective.

View all AI Governance articles

Recent in this category

AI Security

AI security is the category with the fastest-moving vendor landscape, the loudest marketing noise, and the least amount of independent coverage on the internet. It is also the category where the site’s content velocity advantage is largest — because Gartner named agentic AI the number-one cybersecurity trend of 2026, and most of what has been written about it so far is either vendor marketing, speculation, or both.

We take positions in this space that other publications hedge on. Some of what vendors claim about “AI-native” security is marketing. Some of it is genuinely transformative. The distinction between the two is where the editorial work lives, and we are willing to state it plainly.

What this category covers

Agentic AI security is the strategic centre of gravity — the CISO’s governance playbook and the AI agent identity management guide are the anchor pieces, and they map the control gaps that emerge when you give LLM agents real tools and real access to real systems.

LLM security covers the hardening patterns that every enterprise deploying generative AI needs to implement — prompt injection defence, OWASP LLM Top 10 controls, gateway architectures, and the operational practices that separate organisations treating LLM security seriously from those deploying and hoping.

AI defence covers the vendor landscape — Darktrace, Vectra, SentinelOne Purple AI, Microsoft Security Copilot, and the rest of the AI-native detection and response field. Our comparison articles contain no affiliate commissions from any of these vendors.

AI threats covers what defenders are actually seeing: deepfake-enabled BEC, AI-generated phishing campaigns, voice-cloning CFO fraud, and the shifting economics of attacks where generating convincing content is no longer a bottleneck.

AI SOC covers the reshaping of security operations around autonomous agents — where Tier 1 analyst automation actually works, where it breaks, and how the vendors are pricing against SIEM and MDR incumbents.

AI governance is shared with the Compliance category. The EU AI Act and NIST AI RMF are compliance instruments, but the control implementation is a security function. We cover both sides.

What we believe about AI security

A handful of editorial positions that shape coverage here:

Most “AI-powered” security claims are pattern matching on statistical anomalies — which is useful, but it is not agentic AI. The distinction matters because buyers are being asked to pay premium pricing for platforms that do what their SIEM already does with slightly better UX.

Genuine agentic AI creates governance problems that traditional identity and access management was never designed for. Agents carry delegated authority, chain tool calls, and take actions across systems. The control frameworks most organisations have will not cover this. The ones being published by the platforms will, to varying extents, try to sell you on theirs. We have a view on what actually matters in agent identity management, and it is not the same as what any individual vendor is selling.

The AI SOC is not yet a replacement for human Tier 1 analysts across most environments. It is, in the right conditions, a material reduction in alert-fatigue cost. The conditions matter. We are specific about them.

Shadow AI — employees using unapproved AI tools with sensitive data — is the category’s highest-incidence risk and its least-covered topic. Our shadow AI governance article is one of the few pieces on the internet that takes the detection and governance problem seriously rather than recommending a blanket ban.

Publishing cadence

This category runs at two to three pieces per week because the space moves that fast. State-of-AI-security is refreshed quarterly — four times a year rather than annually, because the vendor landscape and threat landscape both evolve faster than the annual hub model can track. See the State of AI Security 2026 hub for the current synthesis.