Your Business Just Got Hit by Ransomware: The 72-Hour Response Playbook
If you are reading this because it has just happened to you, stop for a moment. The single most damaging thing you can do in the next five minutes is act on instinct. Most of the expensive mistakes in a ransomware incident are made in the first hour, by people who are frightened, who feel responsible, and who start pulling plugs and rebooting servers before anyone with authority has been woken up.
This playbook is organised by time — the first hour, the first six hours, the first day, days two and three — because that is how the decisions actually present themselves. It is written for UK and international organisations, with specific attention to the regulatory clocks that start the moment you become aware of the incident. It is not a replacement for a retainer relationship with a qualified incident response firm, and if you do not have one in place before the incident, getting one is the first substantive action in this playbook.
The premise of this guide is uncomfortable: the first 72 hours are rarely won by tools alone. They are won by clarity — clear roles, clear priorities, clear communication — especially when information is incomplete and the countdown on the ransom note is ticking on the boardroom wall.
The decisions that lock in the cost of everything that follows
Before the hour-by-hour sequence, there are four decisions that determine whether this incident costs you £80,000 or £8 million. They are made in the first hour, often by the wrong people, and they are almost always made without understanding what is being decided.
The containment-versus-forensics decision. Isolating infected machines as fast as possible limits spread. Powering them off destroys the memory-resident evidence that tells you how the attackers got in and what they took. Most ransomware playbooks on the internet tell you to isolate immediately — this is correct — but many of them also suggest powering off, which is wrong. You disconnect the network; you do not shut down the machine. This preserves volatile evidence that your forensics team needs to understand the scope.
The “when did we become aware” decision. This matters because every regulatory clock starts from the moment of awareness. In the UK, the ICO requires notification of a personal data breach within 72 hours. If you sit on discovery for 48 hours while you try to figure out what is happening, your window to investigate and notify shrinks accordingly. Awareness is typically earlier than people want to admit, and incident timelines are later reviewed by regulators with the benefit of hindsight.
The “who is the incident lead” decision. Not the most senior person. Not the CISO automatically. The person who is actually running the response — making tactical calls, coordinating external parties, deciding what gets restored first. In most incidents this person gets appointed in the first thirty minutes by accident, then becomes unable to sleep for three days. Appoint deliberately. Give them authority to spend money and pull people off other work.
The “do we touch the ransom note” decision. The answer is no. You do not engage. You do not negotiate. You do not indicate willingness to pay. Any engagement with the threat actor should be through a qualified ransomware negotiator, typically engaged through your cyber insurance panel, and only after containment and initial forensics are underway. We cover this decision at length in our guide on ransomware negotiation and when (not) to pay.
Get these four decisions right and every later step becomes easier. Get them wrong and the rest of this playbook is damage limitation.
Hour 0 to Hour 1: Confirm, contain, do not panic
Someone has seen a ransom note, or files have started renaming themselves, or a server is behaving in a way that cannot be explained any other way. Before anyone does anything further, confirm that this is a ransomware incident and not a lookalike event. Simultaneous mass encryption is distinctive; it is also occasionally mimicked by storage failures, destructive wipers without ransom notes (which are a different incident type), or wide-scale backup corruption. The response differs based on what is actually happening.
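For illustration, a crude first-pass triage. Mass encryption usually leaves a distinctive on-disk signature: thousands of files modified within minutes, often with a new appended extension, plus ransom-note files dropped across many directories. A minimal sketch of that check follows; the extension and note-name patterns are hypothetical examples, not a definitive indicator list, and a heuristic like this supplements rather than replaces expert confirmation.

```python
from pathlib import Path
from datetime import datetime, timedelta, timezone

# Hypothetical indicators -- real incidents use whatever extension and
# note name the specific ransomware family actually drops.
SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".xyz123"}
NOTE_PATTERNS = ("readme", "restore", "decrypt", "recover")

def triage_scan(root: str, window_minutes: int = 30) -> None:
    """Count recently modified files with suspect extensions and possible
    ransom notes under `root`. A heuristic, not proof."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    recent_suspect, notes = 0, []
    for path in Path(root).rglob("*"):
        try:
            if not path.is_file():
                continue
            mtime = datetime.fromtimestamp(path.stat().st_mtime, timezone.utc)
            if path.suffix.lower() in SUSPECT_EXTENSIONS and mtime >= cutoff:
                recent_suspect += 1
            if path.suffix.lower() == ".txt" and any(
                p in path.stem.lower() for p in NOTE_PATTERNS
            ):
                notes.append(path)
        except OSError:
            continue  # files may vanish or be locked mid-scan
    print(f"{recent_suspect} recently modified files with suspect extensions")
    print(f"{len(notes)} possible ransom notes, e.g. {notes[:3]}")

# triage_scan(r"\\fileserver\shared")  # run read-only, from a clean machine
```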
Once confirmed:
Isolate, do not power off. Disconnect affected machines from the network — unplug the ethernet cable, disable the wireless adapter, or block at the switch. Leave the machine running if you can. Powering off destroys forensic evidence in memory that your response team will need.
Stop all communication on potentially compromised channels. Assume that your corporate email, your corporate instant messaging, and any device that was on the affected network may be compromised. Move the response team to out-of-band communications — personal phones, personal email, or a separate tenant that was not touched. This is not paranoia. Threat actors routinely monitor the victim’s response communications during the negotiation window.
Assemble a small response core. In the first hour you do not want twenty people. You want four or five: the incident lead, a senior technical decision-maker (your head of IT or MSP lead), a senior business decision-maker (CEO or COO for smaller organisations, the CEO’s direct report for larger ones), and someone to take notes. Everyone else is a distraction until the core has stabilised.
Start a written timeline immediately. From this point onwards, every action, every decision, every communication gets timestamped and recorded. You will need this for the regulator, for your insurer, for any litigation, and for the post-incident review. Use a document that lives outside the affected environment. Notes on paper are fine. Notes in a Google Doc that the threat actor cannot see are better. A minimal log format is sketched below.
Do not restart, do not “try something,” do not let well-meaning staff attempt to fix it. The urge to do something is strong and almost always wrong in the first hour. Containment and evidence preservation are the only legitimate actions until the response team is properly assembled.
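On the written timeline: the format matters less than the discipline. One workable shape is an append-only log with a UTC timestamp, the actor, and the entry, kept on a machine outside the affected environment. The sketch below is illustrative; the file name and fields are assumptions, not a standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("incident-timeline.jsonl")  # must live OUTSIDE the affected network

def record(actor: str, entry: str) -> None:
    """Append one timestamped line to the incident timeline.
    Append-only JSONL keeps the record easy to export verbatim for the
    regulator, the insurer, and the post-incident review."""
    line = {
        "utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "actor": actor,
        "entry": entry,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(line) + "\n")

# Example entries (hypothetical names and hostnames):
record("J. Smith (incident lead)", "Isolated FILESRV02 from network; left powered on")
record("A. Jones (IT)", "Confirmed ransom note on FINANCE-PC04; photo taken")
```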
Hour 1 to Hour 6: Notification and the resources that will save you
This is the window where the quality of your pre-incident preparation becomes visible. Organisations that have a cyber insurance policy with an incident response panel, a retainer with a qualified IR firm, and a documented escalation procedure move through this phase quickly. Organisations that do not have these spend the next six hours trying to find phone numbers.
Notify your cyber insurer immediately. Most cyber insurance policies require notification within 24 hours of becoming aware of an incident. Late notification is one of the most common reasons for coverage disputes. The insurer will typically route you to a panel incident response firm, panel legal counsel, and panel negotiation firm — you do not want to source these yourself during a crisis. For context on how cyber insurers structure these requirements and what they will demand evidence of, see our cyber insurance requirements guide.
Engage qualified incident response support. If you already have a retainer, activate it. If you do not, your insurer’s panel is the fastest route. The NCSC maintains an assured Cyber Incident Response (CIR) scheme in the UK, and organisations with any complexity should use a CIR-assured provider. Do not try to handle forensics in-house unless you have a genuine in-house forensics capability — most organisations that think they do, do not.
Engage legal counsel early. Not in week two, now. The incident has regulatory, contractual, and potentially litigation consequences, and a great deal of the communication that follows is better handled under legal privilege. Your insurer’s panel counsel or your own external firm’s cyber practice is the right call.
Understand the UK reporting clocks. If the incident involves personal data — and a ransomware incident almost always does, if only through compromised HR files or customer contact databases — the ICO expects notification within 72 hours of becoming aware of the breach. If your organisation is a regulated entity under the NIS regime, separate notification obligations apply to the relevant competent authority. Under the forthcoming Cyber Security and Resilience Bill (expected during 2026), regulated entities will face a 24-hour initial notification requirement followed by a 72-hour fuller report. For NHS trusts, specific notification to the Department of Health and Social Care and the NCSC within 24 hours applies. If you are in a regulated sector, your regulator has its own notification rules layered on top.
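Because several of these clocks run from the same moment of awareness, it is worth computing the deadlines once, early, and pinning them to the incident timeline. A minimal sketch follows; the 24-hour insurer figure is a typical policy term rather than a universal one, so check your own policy wording, and sector regulators may add further clocks.

```python
from datetime import datetime, timedelta, timezone

def notification_deadlines(aware_utc: datetime) -> dict[str, datetime]:
    """Compute headline notification deadlines from the moment of awareness.
    Indicative only -- your specific policy and sector regulators may
    impose different or additional clocks."""
    return {
        "cyber insurer (typical policy term)": aware_utc + timedelta(hours=24),
        "ICO personal data breach (UK GDPR)":  aware_utc + timedelta(hours=72),
    }

# Example awareness timestamp (hypothetical):
aware = datetime(2026, 3, 2, 6, 40, tzinfo=timezone.utc)
for who, deadline in notification_deadlines(aware).items():
    print(f"{who}: {deadline:%a %d %b %Y %H:%M} UTC")
```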
Notify the NCSC. UK organisations can and should report ransomware incidents to the National Cyber Security Centre. This is separate from the ICO notification and serves a different purpose — the NCSC will provide advice and, depending on circumstances, can actively assist with response. Reporting through the NCSC route also feeds national threat intelligence that helps protect other potential victims.
Do not notify customers or the public yet. Public communication in the first six hours is almost always premature. You do not yet know the scope, the data impact, or the recovery timeline, and every inaccurate statement early becomes a legal and reputational problem later. Prepare a holding statement that acknowledges an incident without committing to detail, and have it ready for the moment you do need to communicate.
Confirm backup integrity. Check whether your backups are accessible, whether they have been encrypted or deleted, and when the most recent clean backup was taken. This is often the moment organisations discover that their backups have been failing quietly for months, or that the threat actor reached the backup infrastructure before triggering encryption. If your backups are intact and immutable, your negotiating position is transformed. If they are not, the calculus changes significantly. We cover the specific backup architecture that survives ransomware — and what cyber insurers now require — in our immutable backups guide.
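What "confirm backup integrity" means in practice: can you list recent restore points, do any predate the likely intrusion, and do checksums verify on a sample restore? The sketch below assumes backups are visible as dated files on a path the team can still reach; the repository path and file pattern are hypothetical, so adjust to whatever your backup platform actually exposes.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/mnt/backup-repo")  # hypothetical repository path

def survey_backups(suspected_intrusion: datetime) -> None:
    """List restore points and flag which predate the suspected intrusion.
    Backups taken AFTER the intrusion may contain the attacker's tooling."""
    points = sorted(BACKUP_DIR.glob("*.bak"), key=lambda p: p.stat().st_mtime)
    if not points:
        print("No restore points visible -- treat backups as compromised or lost")
        return
    for p in points:
        taken = datetime.fromtimestamp(p.stat().st_mtime, timezone.utc)
        status = ("pre-intrusion (restore candidate)"
                  if taken < suspected_intrusion else "post-intrusion (suspect)")
        print(f"{p.name}  {taken:%Y-%m-%d %H:%M}  {status}")

def checksum(path: Path) -> str:
    """SHA-256 of a restore point, for comparison against whatever catalogue
    your backup platform recorded at backup time (if it records one)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```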
Hours 6 to 24: Scope, decisions, and the first communication wave
By hour 12 you should have forensics engaged, legal counsel active, insurer notified, and a rough picture of scope forming. The work of this window is turning that rough picture into something decisions can be made on.
Scope the incident properly. Which systems are affected? Which data was accessible to the threat actor? Is there evidence of exfiltration (data being copied out) in addition to encryption? Ransomware groups in 2026 almost uniformly operate on a double-extortion model — they encrypt your files and threaten to publish data they have already stolen. Understanding exfiltration scope is a separate investigation from understanding encryption scope, and both need to happen.
Identify the threat actor and their typical patterns. Your IR firm will do this. Knowing which ransomware group is involved tells you a great deal about what to expect — their typical dwell time before encryption, whether they have reliable decryption tools, whether they have a history of honouring “no publish” agreements after payment, and whether they are subject to OFAC or UK sanctions that make payment legally problematic.
Determine restore priorities. Not “everything at once.” The mature approach is to define a restore sequence based on business continuity priority: identity and access services first (domain controllers, authentication), then core infrastructure (email, file services), then revenue-critical applications, then everything else. Restoring in order of “what’s yelling the loudest” is a classic mistake that creates cascading problems.
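One way to make the restore sequence explicit rather than implicit is to write it down as data the whole response team can see and challenge. A sketch, with hypothetical tiers and system names mirroring the order described above:

```python
# Hypothetical restore sequence -- identity first, then core infrastructure,
# then revenue-critical applications, then everything else.
RESTORE_SEQUENCE = [
    (1, "identity & access",   ["DC01", "DC02", "MFA gateway"]),
    (2, "core infrastructure", ["mail", "file services", "DNS/DHCP"]),
    (3, "revenue-critical",    ["ERP", "e-commerce", "payment gateway"]),
    (4, "everything else",     ["intranet", "print services", "dev/test"]),
]

for tier, label, systems in RESTORE_SEQUENCE:
    print(f"Tier {tier} ({label}): restore {', '.join(systems)}")
    print("  gate: forensics sign-off and credential reset before the next tier")
```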
Prepare to communicate. By hour 18 to 24, if the incident is significant, you will need to communicate. Employees need to know what to do and what not to do. Key customers or partners may need notification, particularly if your services depend on theirs or vice versa. A holding statement that is factual, clear, and committed to regular updates is far better than either silence or detail you cannot yet support.
Good communication in this window looks like this: “We are responding to a cybersecurity incident affecting some of our internal systems. Our security team is actively working with specialist partners to contain and understand the situation. As a precaution, please do not access company systems until further notice. We will provide updates every four hours as the situation develops. If you observe anything unusual, please contact [named person] on [out-of-band phone number]. Please do not discuss this incident externally or on social media until we have issued an official statement.”
Bad communication in this window looks like this: Silence. Speculation. Detail about scope that later turns out to be wrong. Denial that anything is happening when employees can plainly see that email is down. Commitments to full recovery timelines that cannot be met.
Do not engage the threat actor directly. By hour 24 the ransom timer is usually the loudest thing in the room. Ignore it. Any engagement must go through a qualified negotiator, typically via your insurer’s panel. Most negotiators will recommend an initial acknowledgement — a “we are assessing” message that buys time without committing to anything — but even this requires expert judgement on timing and wording.
Hours 24 to 48: Recovery planning and the hard decisions
By day two, the initial chaos has subsided. Forensics has preliminary findings, restore priorities are defined, and the full scope of the incident is clearer. This is when the hardest strategic decisions get made.
Decide on the recovery path. There are typically three: (1) restore from clean backups if they exist and are verifiably uncompromised; (2) rebuild affected systems from scratch, accepting data loss; (3) engage in negotiation to obtain decryption tools, typically as a last resort or where backups are inadequate. These paths are not mutually exclusive — a mature response often involves all three in parallel for different system classes — but the primary path needs to be agreed explicitly. The factors that drive this decision include backup integrity, the age of the most recent clean backup for critically affected data, the business cost of each hour of downtime, and the specific threat actor involved.
If the payment question arises, it is a board-level decision. Not an IT decision. Not a CISO decision alone. The question of whether to pay a ransom engages questions of sanctions law (OFAC in the US, UK financial sanctions), insurance coverage, ethical position, likelihood of recovery, and reputational risk. The NCSC and UK law enforcement do not encourage or endorse ransom payments. The decision framework for when payment is even on the table is complex enough that we treat it separately in our guide on ransomware negotiation — but the headline is that payment is a last resort, decided by the board with legal and insurance advice, after all other options have been exhausted.
Communicate more fully. By day two, the scope is clearer and the holding statement needs replacement. Employees need fuller guidance, customers may need notification (particularly if data is involved), and depending on the organisation’s profile, media enquiries may need a response. A full communications plan — drafted with legal counsel — should be active by hour 36. The NCSC publishes specific guidance for CEOs on incident communication that is worth reading before rather than during a crisis.
Begin segmented restoration. Where clean backups exist and forensics has identified the initial access vector, you can begin restoring systems to a segmented environment — not back into the original network, which is still treated as contaminated. Restored systems come online in a hardened configuration with credentials reset and the initial access vector closed. Rushing systems back into the original environment without this step is how organisations get reinfected within days.
Reset credentials everywhere. Not selectively. Everywhere. Ransomware attacks typically involve credential theft, and you do not know which credentials were compromised until forensics completes. The safer assumption is that every credential is compromised and every credential needs rotation. This is disruptive. Do it anyway.
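"Everywhere" is easy to say and easy to lose track of. A simple inventory that enumerates credential classes and records rotation status, as sketched below, keeps the workstream honest; the classes listed are illustrative assumptions, not an exhaustive set.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CredentialClass:
    name: str
    owner: str                          # who is accountable for rotating it
    rotated_at: datetime | None = None  # None until rotation is confirmed

# Illustrative classes -- a real inventory will be longer, and in Active
# Directory environments should include the krbtgt account (rotated twice).
INVENTORY = [
    CredentialClass("Domain admin accounts", "IT lead"),
    CredentialClass("All user passwords (forced reset)", "IT lead"),
    CredentialClass("Service accounts and API keys", "application owners"),
    CredentialClass("VPN / remote access secrets", "network lead"),
    CredentialClass("Backup platform credentials", "backup owner"),
    CredentialClass("Cloud tenant admin and OAuth app secrets", "cloud owner"),
]

def mark_rotated(name: str) -> None:
    for c in INVENTORY:
        if c.name == name:
            c.rotated_at = datetime.now(timezone.utc)

def outstanding() -> list[str]:
    return [c.name for c in INVENTORY if c.rotated_at is None]

mark_rotated("Domain admin accounts")
print("Still to rotate:", outstanding())
```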
Hours 48 to 72: Toward stabilisation and regulatory compliance
By day three, the organisation should be stabilising even if not fully recovered. This window is about closing the loop with regulators, insurers, and stakeholders, and about moving from firefighting into structured recovery.
Meet the ICO notification window. If personal data is involved and you have not yet notified the ICO, this is the hard deadline. The notification does not need to be complete — you can, and are expected to, update it as investigation progresses — but the initial notification must be made within 72 hours of awareness unless you can demonstrate why it was not feasible. Late notification creates a separate regulatory problem on top of the incident itself.
Complete the regulator stack. Beyond the ICO, sector-specific regulators (FCA for financial services, Ofcom for telecoms, the relevant NIS competent authority) have their own notification requirements. Your legal counsel should be driving this workstream.
Brief the board properly. The board will have been receiving updates throughout, but by day three they need a structured briefing: what happened, current scope assessment, recovery trajectory, regulatory exposure, insurance position, and decision points still outstanding. This briefing is also where decisions about broader external communication — press, major customers, investors — are made or confirmed.
Continue restoration in sequence. Business-critical systems should be coming back online in the hardened restore environment. Lower-priority systems wait their turn. Resist pressure to accelerate this for political reasons — a hasty restore that brings a compromised system back into the production environment undoes the work of the first 48 hours.
Do not declare victory. By hour 72 the instinct is to say “we are through it.” You are not. A significant proportion of ransomware victims are hit again within six months, often by the same group or an affiliate, because the initial access vector was never fully closed or because the organisation’s detection and response capability remains inadequate. Treat the first 72 hours as the containment phase. Full recovery is measured in weeks, and the hardening work that prevents recurrence is measured in months.
What the first 72 hours cannot fix
Three things about ransomware incidents are worth stating honestly, because the playbooks rarely say them.
The first 72 hours are shaped entirely by the preceding 72 weeks. Organisations with tested incident response plans, verified immutable backups, current cyber insurance, a retainer with a qualified IR firm, and segmented network architecture move through a ransomware incident at a fraction of the cost of organisations without those things. None of this preparation can be assembled during the incident. If you are reading this before an incident, the useful action is to close those gaps now.
The ransom payment, if it happens, is usually the smallest cost. Regulatory penalties (ICO fines among them), litigation, customer churn, remediation, hardening, and operational disruption typically total between five and twenty times the ransom demand. Organisations that focus the response entirely on whether to pay and at what price are optimising the wrong variable. The cost function is dominated by downtime and regulatory consequence, not ransom.
Recovery is not a return to the previous state. The organisation that emerges from a ransomware incident in week 12 is not the same organisation that entered it in week 1. Systems have been rebuilt, architectures tightened, policies rewritten, personnel moved or exited, and the board’s view of cybersecurity has shifted permanently. Organisations that try to restore exactly what they had before are missing the point and usually end up back in the same position within a year. Organisations that treat the incident as a forcing function for the overdue hardening work come out materially stronger. The difference is mostly about leadership posture in weeks two through twelve, not about the first 72 hours.
A final note, because it matters. If you are reading this during an active incident, you are doing the right thing by looking for structure. You are also operating on adrenaline and probably no sleep, and the people around you are doing the same. Every ransomware response degrades over the second and third day as people exhaust. Build shifts. Send people home to sleep. Your incident lead in particular needs forced rest by hour 30 or their judgement will be worse than the judgement of someone half their seniority who is rested. This is not a luxury. It is operational necessity, and organisations that ignore it make worse decisions in the critical decision windows on day two and three.
Frequently asked questions
Should we pay the ransom?
Not as a first resort. Not without qualified negotiation. Not without board-level decision-making. Not without legal advice on OFAC and UK sanctions exposure. And not until every alternative recovery path has been genuinely assessed. The NCSC and UK law enforcement do not encourage or endorse payment, and paying increases the probability of being targeted again. The full decision framework — including the scenarios where payment becomes the least-bad option and where it is legally impossible — is covered in our ransomware negotiation guide.
Who do we have to notify and when?
In the UK, if personal data is involved, the ICO expects notification within 72 hours of awareness. Sector-specific regulators have their own clocks. NHS trusts have specific obligations to DHSC and NCSC within 24 hours. Cyber insurers typically require notification within 24 hours of awareness. Customers and partners may have contractual notification obligations that predate the regulatory ones. Your legal counsel should be driving the full notification matrix.
Should we call the police?
For ransomware incidents affecting UK organisations, reporting to Action Fraud (the UK’s national fraud and cybercrime reporting centre) and to the NCSC is appropriate. Serious incidents may involve the National Crime Agency directly. Law enforcement involvement does not slow recovery and sometimes accelerates it — the NCA and NCSC have visibility into threat actors that individual victims lack.
Can we handle this without outside help?
Almost no organisation should try. Ransomware response requires specialised forensic tooling, experience with specific threat actor patterns, and legal expertise that does not sit inside most organisations. Even large enterprises with mature internal security teams typically engage external IR firms for forensic depth. The cost of trying to handle a serious incident in-house almost always exceeds the cost of engaging specialists.
What if we do not have cyber insurance?
You can still respond, but every step is harder. You will be sourcing IR firms, legal counsel, and negotiators from scratch at the worst possible moment. You will be paying out of pocket rather than against coverage. And you will be making decisions about recovery and notification without the advisory support that insurance panels typically provide. Post-incident, obtaining cyber insurance will be significantly more difficult and expensive — insurers treat recent unresolved incidents as a material risk factor.
How do we prevent this happening again?
The honest answer is that the hardening work takes months and involves architectural changes, not just tool purchases. The most common gaps that allow initial access — unpatched perimeter services, weak identity and access management, inadequate email security, missing or unmonitored endpoint detection — are not fixed overnight. Organisations that emerge from a ransomware incident without addressing these structural issues often face a second incident. A managed detection and response service closes the gap on the “unmonitored endpoint” problem for most mid-market organisations — we compare the major providers in our MDR platforms guide. Immutable backup architecture closes the “backup was also encrypted” problem and is now a cyber insurance requirement for most carriers.
What if our backups are encrypted too?
This is common. Ransomware groups actively target backup infrastructure as part of the attack, and backups that were “offsite” but network-reachable are frequently encrypted along with production. If your backups are gone, the recovery options narrow significantly — typically some combination of rebuilding from scratch, recovering what can be extracted from individual endpoints, and, as a last resort and only after formal decision-making, negotiation. The architectural fix for this is immutable backups, which cannot be encrypted or deleted by an attacker even with full administrative access — this is covered in our immutable backups guide.
How long does full recovery take?
For a mid-sized organisation hit by a significant ransomware incident, full operational recovery typically takes four to twelve weeks. The first 72 hours is containment. Weeks one through three are primary restoration. Weeks three through eight are hardening and architectural improvements. The incident continues to consume management attention and cost through month three and often beyond. Regulatory consequences, customer trust rebuilding, and any associated litigation can extend the timeline much further. Organisations that underestimate this timeline usually end up back in crisis because they declared recovery complete before it was.