Immutable Backups vs Traditional Backup: The Cyber Insurance Requirement That Saves You
Coalition’s claims data tells a story that ought to settle every remaining argument about backup architecture: in 94% of ransomware incidents where they paid claims, the attackers had actively targeted the backup infrastructure. Not encrypted the production environment and moved on — actively, deliberately gone after the backups first, because modern ransomware operators understand that an organisation with clean, restorable backups is an organisation that doesn’t pay.
This is the context in which cyber insurance underwriting has changed. The question is no longer “do you have backups?” — it’s “do you have backups that can survive an attacker who has already compromised your domain administrator credentials?” Traditional backups cannot. Immutable backups can. And insurers have written that distinction into their underwriting questionnaires.
If your last backup architecture review predated 2023, you are almost certainly not insurable at the terms you had last time, and you are almost certainly more exposed than you think.
What immutability actually means
An immutable backup is one that cannot be modified, encrypted, or deleted for a specified retention period — even by an administrator with full credentials, even by the backup software itself, even by the attacker who has compromised your environment. The data is written once and then frozen in place until the retention clock expires.
This is a technical property, not a marketing term. The implementation mechanisms are specific: Write-Once-Read-Many (WORM) storage, object lock on S3-compatible storage, proprietary append-only filesystems, or hardened repositories where the operating system itself enforces immutability below the application layer. A backup labelled “immutable” that can be disabled by an admin with a password is not immutable. A backup protected only by “delete protection” that can be turned off is not immutable. Underwriters in 2026 know the difference, and forensic firms hired after an incident verify it.
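The distinction is concrete enough to check programmatically. As an illustrative sketch (the function name and 30-day threshold are assumptions, not a carrier standard; the dict shape mirrors the S3 `GetObjectLockConfiguration` API response), a bucket only counts as immutable when object lock is enabled in COMPLIANCE mode, because GOVERNANCE mode can be bypassed by a sufficiently privileged principal:

```python
# Illustrative check: does an S3 Object Lock configuration meet a typical
# insurer baseline (COMPLIANCE mode, >= 30 days default retention)?
# The dict shape mirrors the S3 GetObjectLockConfiguration response;
# the threshold and function name are assumptions for this sketch.

def meets_immutability_baseline(config: dict, min_days: int = 30) -> bool:
    lock = config.get("ObjectLockConfiguration", {})
    if lock.get("ObjectLockEnabled") != "Enabled":
        return False  # object lock was never enabled on this bucket
    retention = lock.get("Rule", {}).get("DefaultRetention", {})
    # GOVERNANCE mode can be bypassed by principals holding
    # s3:BypassGovernanceRetention -- only COMPLIANCE mode is
    # immutable against an administrator.
    if retention.get("Mode") != "COMPLIANCE":
        return False
    days = retention.get("Days", retention.get("Years", 0) * 365)
    return days >= min_days

# A bucket with mere "delete protection" and no lock fails the check:
assert not meets_immutability_baseline({})
assert meets_immutability_baseline({
    "ObjectLockConfiguration": {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    }
})
```

The same shape of check applies to any platform: the question is always whether the storage layer, not the application, refuses the delete.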
The reason this matters is straightforward. Modern ransomware operators follow a predictable playbook: initial access through phishing or exposed credentials, privilege escalation to domain administrator, lateral movement, discovery of backup infrastructure, destruction or encryption of backups, then — and only then — detonation of the ransomware payload against production systems. If the backups cannot be destroyed, the final stage of that playbook fails and the attacker’s leverage collapses.
Traditional backup isn’t enough and hasn’t been for years
Traditional backup architectures were designed to solve different problems: hardware failure, accidental deletion, data corruption, site loss. They assumed the adversary was entropy, not a human operator with domain admin rights and time to explore your network before attacking.
Under that older threat model, keeping your backup server on the same domain as production was fine. Scheduling backup jobs with service accounts that had broad access was fine. Storing backups on network-attached storage accessible from production was fine. A properly configured traditional backup system reliably protected against the failure modes it was designed for.
Those design assumptions no longer match the threat. An attacker who reaches domain admin level can enumerate your backup infrastructure within minutes, authenticate to backup consoles using credentials harvested in transit, and either encrypt the backup repository directly or delete retention policies so that existing backups age out before the ransom deadline. This isn’t hypothetical — it’s the documented procedure of essentially every sophisticated ransomware group currently operating.
The controls that protect traditional backups against this — network segmentation, separate credentials, multi-factor authentication on backup consoles, alerting on deletion attempts — are good controls and should be in place. But they are defence-in-depth measures around a foundation that assumes the attacker stays outside. Immutability assumes the attacker gets in and makes the architecture survive anyway.
The 3-2-1-1-0 rule
The traditional 3-2-1 backup rule — three copies, two different media types, one off-site — has been extended twice to address the current threat model. The modern version is 3-2-1-1-0:
- 3 copies of data
- 2 different media types
- 1 copy off-site
- 1 copy immutable or air-gapped
- 0 errors during recovery verification
The fourth “1” is the ransomware-resistant copy. The “0” is the less-discussed but equally critical requirement: you must actually test restores, and those tests must succeed. Insurers in 2026 routinely ask when your last documented restore test was, and increasingly request the test results themselves. A backup you’ve never restored from is a hypothesis, not a backup.
The rule is deliberately redundant. An immutable copy on the same site as production is better than nothing, but doesn’t protect against physical loss. An off-site copy that’s not immutable survives physical loss but not an attacker who reaches it. Together, the layered architecture survives both failure modes.
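The rule reduces to a simple audit check. The sketch below is illustrative only; the `BackupCopy` model and its field names are assumptions for the example, not any vendor's inventory format:

```python
from dataclasses import dataclass

# Illustrative 3-2-1-1-0 audit. The BackupCopy model is an assumption
# for this sketch, not a real vendor inventory schema.

@dataclass
class BackupCopy:
    media: str          # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool     # storage-layer enforced, not just "delete protection"
    air_gapped: bool

def satisfies_3_2_1_1_0(copies: list[BackupCopy], restore_test_errors: int) -> bool:
    return (
        len(copies) >= 3                                  # 3 copies
        and len({c.media for c in copies}) >= 2           # 2 media types
        and any(c.offsite for c in copies)                # 1 off-site
        and any(c.immutable or c.air_gapped for c in copies)  # 1 ransomware-resistant
        and restore_test_errors == 0                      # 0 restore errors
    )

copies = [
    BackupCopy("disk", offsite=False, immutable=False, air_gapped=False),
    BackupCopy("object-storage", offsite=True, immutable=True, air_gapped=False),
    BackupCopy("tape", offsite=True, immutable=False, air_gapped=True),
]
assert satisfies_3_2_1_1_0(copies, restore_test_errors=0)
assert not satisfies_3_2_1_1_0(copies, restore_test_errors=2)  # untested restores fail the "0"
```

Note that the final conjunct makes an architecture with perfect copies but failed restore tests non-compliant, which is exactly the insurer's position.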
Immutable vs air-gapped — they’re not the same thing
The terminology here gets used loosely, but the distinction matters for insurance attestations.
Immutable means the data cannot be changed for a defined retention period by any actor, including administrators. The storage layer enforces this — typically through object lock, WORM, or proprietary append-only architecture. Immutability is usually online or near-online: the data is accessible, it just can’t be modified.
Air-gapped means the backup is physically or logically disconnected from the network most of the time. Tape backups in offsite storage are the classic example. A backup appliance that is powered off except during ingestion windows is a logical air gap. Air-gapped copies are slower to restore from but structurally invisible to an online attacker.
The strongest architectures use both. An immutable online copy provides rapid recovery from most incidents. An air-gapped copy — whether that’s physical tape, a cloud “vault” service, or a powered-off secondary appliance — provides the last-resort protection if the immutable copy is somehow compromised (typically through time-based attacks that manipulate retention clocks, or through compromise of the backup vendor itself).
Insurers accept either as a standalone control in most cases, but they pay particular attention to organisations that have both, and a layered architecture is typically reflected in premium pricing.
What insurance carriers now require
The underwriting consensus across major carriers (Coalition, At-Bay, Cowbell, Beazley, CFC, and the Aon/Marsh brokerage books) has converged on a predictable set of expectations for 2026:
- Immutability on at least one backup copy, with retention sufficient to cover detection-to-recovery dwell time (typically 30 days minimum, 90 days for higher coverage tiers)
- Documented restore tests within the last 90 days — a test older than that is treated as stale
- Backup infrastructure isolation from the production domain, with separate credentials, separate MFA, and ideally separate directory services
- Multi-factor authentication on backup consoles, preferably phishing-resistant for higher limits
- Retention policies that cannot be shortened without quorum approval or a cooling-off delay that exceeds likely incident detection time
- Logging and alerting on any attempted modification or deletion of backup data or policies
These are the specific controls carriers verify through forensic investigation after a claim. A material misrepresentation on the application — claiming immutability when the backups can be deleted by a domain admin — is grounds for claim denial. This is not theoretical: it is a routine reason for denied claims.
For organisations entering renewal season, the practical checklist is: screenshots of immutability settings, a dated report of the last restore test, documentation of the isolation model, and proof of MFA on backup consoles. If you cannot produce those four things on request, your renewal is at risk.
The vendor landscape
The 2025 Gartner Magic Quadrant for Enterprise Backup and Data Protection Platforms placed Rubrik, Veeam, Commvault, Cohesity, Dell Technologies, and Druva in the Leaders quadrant. Each takes a different architectural approach to immutability, and the right choice depends on your environment, budget, and operational maturity.
Immutable backup platform comparison
| Vendor | Immutability approach | Strengths | Trade-offs | Typical fit |
|---|---|---|---|---|
| Rubrik | Proprietary append-only filesystem; immutable by architecture, not configuration | Strongest out-of-box security posture; logical air gap native; dedicated ransomware response team | Hyperconverged appliance model; higher upfront cost; less hardware flexibility | Mid-market to large enterprise wanting turnkey cyber-resilience |
| Veeam | Hardened Linux repositories; S3 Object Lock; immutability configurable at storage layer | Hardware-agnostic; broad workload coverage; lowest TCO when built well; very large ecosystem | Immutability depends on correct implementation; more operational burden; Windows-based console historically a target | Infrastructure teams with strong in-house expertise wanting flexibility |
| Cohesity | SpanFS with DataLock; WORM timer started on ingestion; quorum-based policy changes | Strong governance controls; multi-person authorisation for retention changes; FortKnox cloud vault option | Shared-resource architecture (backups + file services); can have resource contention at scale | Large enterprises prioritising governance and insider-threat mitigation |
| Commvault | Air-gap and immutable options via Cloud Rewind, Cleanroom Recovery; object lock support | Broadest platform coverage (including mainframe and legacy); strong cloud-native recovery | More complex to administer; higher skills requirement; premium pricing on full feature set | Globally distributed enterprises with heterogeneous estates |
| Druva | Fully cloud-native SaaS; immutability enforced at AWS storage layer; no on-prem infrastructure | Zero management overhead; fast time-to-value; native SaaS/endpoint/cloud coverage | Cloud-only model; restore speeds limited by network egress for very large datasets | Cloud-first organisations, SaaS-heavy stacks, and distributed workforces |
Two things to bear in mind when reading any comparison of this category. First, the platforms are closer in capability than the marketing suggests — all five platforms in the table can be configured to meet cyber insurance requirements, and the differentiators are increasingly about operational experience and total cost rather than raw capability. Second, the right choice depends heavily on your existing infrastructure. An organisation already running Veeam with disciplined infrastructure hygiene is rarely better off switching to Rubrik; the transition cost exceeds the architectural upgrade. An organisation starting from scratch or running legacy backup products that no longer meet insurer requirements has more genuine optionality.
The shortlist beyond the Gartner Leaders is narrower but worth noting: Zerto (for continuous data protection and disaster recovery), HYCU (for Nutanix-centric and SaaS-first shops), and Acronis (common in the MSP channel). For small business environments, the calculus is different and the SMB-focused options in our backup guide for small businesses apply.
Common implementation failures
Even organisations that buy the right platform frequently implement it in ways that fail to deliver the protection they paid for. The common failure modes:
Domain-joined backup infrastructure. The backup repository sits on the same Active Directory domain as production. When the attacker compromises domain admin, they compromise backup admin. If this describes your environment, you do not have immutable backups in any meaningful sense — you have a backup repository waiting to be deleted.
Shared administrator credentials. Backup admins are also production admins, using the same accounts. Credential theft on production compromises backup simultaneously. Backup infrastructure requires separate identities.
Retention policies that can be shortened. Some platforms allow retention to be reduced by administrators without a cooling-off period or quorum approval. This is a trivial attack path — shorten retention to zero, wait for the backups to age out, job done. Configure retention changes to require multi-person approval or a 7-to-14-day delay, long enough that an in-progress attack is likely to be detected and contained before the change takes effect.
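The quorum-plus-delay pattern can be sketched in a few lines. This is illustrative only: the class name, the two-approver quorum, and the 14-day delay are assumptions for the example, and real platforms (Cohesity's DataLock quorum, for instance) enforce this inside the product rather than in user code:

```python
from datetime import datetime, timedelta, timezone

# Illustrative quorum-plus-cooling-off control on retention changes.
# Names, quorum size, and delay are assumptions for this sketch.

COOLING_OFF = timedelta(days=14)
REQUIRED_APPROVERS = 2

class RetentionChangeRequest:
    def __init__(self, new_retention_days: int, requested_by: str):
        self.new_retention_days = new_retention_days
        self.requested_by = requested_by
        self.requested_at = datetime.now(timezone.utc)
        self.approvers: set[str] = set()

    def approve(self, approver: str) -> None:
        if approver == self.requested_by:
            raise PermissionError("requester cannot self-approve")
        self.approvers.add(approver)

    def effective(self, now: datetime) -> bool:
        # The change takes effect only after BOTH conditions hold: a
        # compromised single admin can neither approve their own change
        # nor rush it through before the delay expires.
        return (
            len(self.approvers) >= REQUIRED_APPROVERS
            and now - self.requested_at >= COOLING_OFF
        )

req = RetentionChangeRequest(new_retention_days=0, requested_by="admin1")
req.approve("admin2")
req.approve("admin3")
assert not req.effective(req.requested_at + timedelta(days=1))   # still cooling off
assert req.effective(req.requested_at + timedelta(days=15))
```

The design point is that the delay exceeds the time an active incident response needs to notice and freeze a suspicious retention change.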
Unverified restores. The backup exists. The restore has never been tested. During an actual incident, the restore fails for a reason that would have been caught in a quarterly test. This is the single most common unpleasant surprise in incident recovery.
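A minimal restore test is checksum verification of restored files against a manifest captured at backup time. The sketch below is illustrative; the manifest format (relative path to SHA-256 hex digest) is an assumption for the example:

```python
import hashlib
from pathlib import Path

# Illustrative restore verification: compare SHA-256 checksums of restored
# files against a manifest captured at backup time. The manifest format
# is an assumption for this sketch.

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_restore(manifest: dict[str, str], restore_root: Path) -> list[str]:
    """Return the files that are missing or corrupt after the restore.
    The '0' in 3-2-1-1-0 means this list must be empty."""
    failures = []
    for rel_path, expected in manifest.items():
        restored = restore_root / rel_path
        if not restored.exists() or sha256_of(restored) != expected:
            failures.append(rel_path)
    return failures
```

Run against a real restore target on a schedule, this turns "the backup job succeeded" into "the data actually comes back intact", which is the claim insurers want evidenced.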
Untested recovery at scale. A successful restore of a single VM does not prove you can restore 400 VMs before your downtime cost exceeds your coverage limit. Test at representative scale at least annually.
Ignoring the control plane. Immutable data is protected. The backup control plane (the console, the catalog, the orchestration layer) often is not. Attackers who cannot delete the data can still lock you out of the system that manages it. Control plane survivability is a distinct concern.
How this lowers your premium
Carriers underwrite based on expected loss. An organisation with immutable backups and tested restores has a demonstrably lower probability of paying a ransom, a lower expected business interruption loss, and a shorter recovery timeline — all of which feed directly into pricing models.
The pricing delta isn’t trivial. Organisations that present a documented 3-2-1-1-0 architecture with current restore test evidence routinely see 15% to 30% premium reductions relative to similar organisations without it. That’s on top of the primary benefit, which is being insurable at all — increasingly, the binary question is not what you pay but whether you can get coverage. Independent analysis (cross-referenced against cyber insurance requirements for 2026) shows roughly 41% of cyber insurance applications are being denied on first submission, and inadequate backup posture sits in the top three reasons alongside missing MFA and inadequate EDR.
The financial case for immutable backups is therefore not really about the technology cost versus the ransom cost — it’s about the technology cost versus the combined cost of premium increases, coverage denials, extended downtime during restoration, and the loss of negotiating leverage when an attacker knows you can’t recover without them.
Frequently asked questions
Are cloud backups automatically immutable? No. Cloud storage can be configured to be immutable (S3 Object Lock, Azure Blob immutable storage, GCP Bucket Lock) but is not immutable by default. Verify that your cloud backup configuration explicitly enables object lock or equivalent, and verify the retention period.
How long should the immutable retention period be? At least 30 days as a minimum. 90 days is the current underwriting sweet spot for mid-tier coverage. Longer retention costs more storage but substantially increases protection against low-and-slow attacks where attackers dwell in the environment for weeks before detonating.
Is tape still relevant in 2026? Yes, for air-gapped protection at large scale. Tape is slow to restore from and operationally cumbersome, but for organisations with petabyte-scale protection requirements it remains the cheapest true air gap. For mid-market organisations, cloud-based “vault” services from major backup vendors have largely replaced tape’s role.
Do MSPs and MSSPs provide immutable backup as a service? Many do, and for small and mid-sized businesses this is often the most practical path. The caveat: verify that the MSP’s implementation actually meets immutability requirements as defined by your insurance carrier, and ensure you have direct contractual rights to the backup data in the event the MSP relationship ends.
Can immutable backups be compromised? In theory, yes — through time manipulation attacks on retention clocks, through compromise of the backup vendor’s cloud infrastructure, or through supply-chain attacks on the backup software itself. These are rare and require significantly more attacker sophistication. Major vendors have addressed time manipulation through internal monotonic clocks. Supply-chain risk is real but small compared to the risk from traditional backup architectures.
What’s the minimum for a small business? A local backup appliance with immutable retention (many SMB-focused solutions from Datto, Acronis, Veeam Backup for Microsoft 365 and similar offer this), combined with an immutable cloud copy using object lock. This satisfies most SMB-tier cyber insurance requirements and provides meaningful ransomware protection at modest cost.
How do we evidence this for our insurer? Screenshots of immutability configuration showing retention periods; a dated restore test report from within the last 90 days; a diagram showing backup infrastructure isolation from production; MFA enforcement evidence on backup consoles; a copy of your written backup and recovery policy. Package these into a “backup evidence pack” and submit proactively with your renewal application.
The one thing to do this week
If your backup architecture was designed before 2023, the single most valuable action is not adopting new technology — it’s verifying whether your existing backups can actually be destroyed by a compromised domain admin. Walk through it: if an attacker had your domain admin credentials right now, how many clicks would it take them to delete your last 30 days of backups? If the answer is “more than zero and less than ten”, you don’t have immutable backups — you have traditional backups with a lock on the door that the attacker has the key to.
Fix that, document the fix, and you have addressed more of your insurance exposure in one change than most organisations address in a year. Everything else — the vendor selection, the architecture refinement, the 3-2-1-1-0 layering — follows more easily once the foundational question is answered.
Ransomware operators are professional, methodical, and well-funded. They have thoroughly studied how to defeat backups that were designed for a different era. Immutability is the architectural response to that reality. It’s also, as of 2026, the minimum bar for being insurable. Those two pressures point in the same direction, which is a rare clarity in a field full of trade-offs.