
Incident Response and Crisis Management


Cyber Risk Is Board-Level Work

Board-level calm doesn't come from optimism, but from clear responsibility and demonstrable follow-up.

The core of incident response and crisis management is governability: every measure has an owner, a standard, a deadline and a regular feedback loop.

That way the topic becomes not a periodic discussion point but a governable part of regular business operations.


Why this matters

The core of Incident Response and Crisis Management is risk reduction in practice. Technical context supports the choice of measures, but implementation and embedding are central.

Incident or crisis -- what is the difference?

Not every security alert is a crisis. Understanding the difference prevents both underestimation and unnecessary panic.

| | Incident | Crisis |
|---|---|---|
| Definition | A security event that can cause damage but is manageable with standard procedures | An incident that escalates into a threat to the continuity, reputation or financial position of the organization |
| Example | Phishing email opened by an employee, malware on a workstation, lost laptop | Ransomware encrypts all servers, data breach with customer data in the news, attacker has access to the entire network |
| Who acts? | The security or IT team, within existing procedures | The crisis team including board, legal, communications and external parties |
| Decision level | Operational | Strategic -- board-level decisions required |
| Time pressure | Hours to days | Minutes to hours |

Rule of thumb: An incident becomes a crisis when it threatens the operational capacity of your organization, when external parties become involved (media, regulators, customers), or when the financial or reputational damage is potentially large. In doubt? Treat it as a crisis. De-escalating is always easier than escalating too late.
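The rule of thumb above can be captured as a simple triage check. This is a minimal sketch, not a real tool: the `SecurityEvent` fields and the `is_crisis` function are illustrative names chosen here, and in practice the assessment is a human judgment call, not a boolean.

```python
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    threatens_operations: bool       # operational capacity at risk?
    external_parties_involved: bool  # media, regulators, customers
    large_potential_damage: bool     # financial or reputational

    in_doubt: bool = False           # "In doubt? Treat it as a crisis."

def is_crisis(event: SecurityEvent) -> bool:
    """Escalate on any of the three triggers -- and always when in doubt."""
    return (event.threatens_operations
            or event.external_parties_involved
            or event.large_potential_damage
            or event.in_doubt)

# A lost laptop handled within existing procedures stays an incident:
print(is_crisis(SecurityEvent(False, False, False)))  # False
# Ransomware across all servers threatens operations -- crisis:
print(is_crisis(SecurityEvent(True, False, True)))    # True
```

Note the asymmetric default: `in_doubt` escalates, reflecting that de-escalating is cheaper than escalating too late.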

The incident response lifecycle

Effective incident response follows six phases. Each phase has specific tasks for management.

1. Preparation

This is the phase you're in now -- before anything has happened. Preparation is by far the most important phase and paradoxically the phase that gets the least attention, because it doesn't feel urgent. Nothing is burning. No journalist is calling. There's always something more pressing.

But preparation determines everything that follows. An organization that is well prepared recovers in days. An organization that isn't struggles for weeks or months.

What the board must arrange:
- An incident response plan that is current, brief and concrete -- not a hundred-page document that nobody reads
- A crisis team with clear roles, including a spokesperson, a legal advisor and a decision-making board member
- Contracts with an incident response firm so you don't have to find one in the middle of the night
- Regular exercises (see the section on tabletop exercises below)

2. Detection and analysis

The sooner you detect an incident, the smaller the damage. Globally, the average time between a breach and its discovery is around 200 days. Two hundred days during which an attacker can move undisturbed through your network, copy data and install backdoors.

What the board must know: Investing in detection (monitoring, logging, a Security Operations Center) is not a luxury but a necessity. Ask your CISO: how quickly do we detect a breach? If the answer is "we don't know," you have a problem.

3. Containment

Putting out the fire -- or at least preventing it from spreading. This is the moment when difficult decisions are needed. Do we take systems offline? That costs revenue. Do we isolate the network? Then employees can't work.

What the board must decide: How much operational damage do we accept to prevent further spreading? This is a business decision, not a technical decision. The IT team can advise, but the board must make the trade-off.

4. Eradication

Completely removing the attacker from your systems. Finding backdoors, closing compromised accounts, patching vulnerabilities.

What the board must know: This takes time. Days to weeks. And it's crucial that it's done thoroughly, because an attacker who isn't completely removed will come back.

5. Recovery

Bringing systems back online. Restoring backups. Resuming services. A well-organized backup and disaster recovery process determines how quickly you're operational again.

What the board must decide: In what order do we recover? What is business-critical? Do we accept temporarily reduced functionality?

6. Evaluation (lessons learned)

The most underestimated phase. After a crisis everyone wants to forget it as quickly as possible and return to normal. But without honest evaluation you'll make the same mistakes again.

What the board must do: Schedule the evaluation before the crisis is over -- otherwise it gets postponed until it's forgotten. Ask the uncomfortable questions: what should we have done differently? Where did our preparation fail? And most importantly: what are we going to concretely change?

Your role as a board member during a crisis

As a board member, you're not the one configuring the firewall or analyzing the log files. Your role is strategic. You make three types of decisions that nobody else can make.

Decisions about business continuity. Do we take systems offline? Do we switch to manual processes? Do we inform customers about delays?

Decisions about communication. What do we tell whom, when? Communicating too early with incomplete information can cause panic. Communicating too late undermines trust and can have legal consequences.

Decisions about resources. Do we bring in an external incident response team? What may that cost? Do we need legal advice? And does a cyber insurance policy apply that covers the costs of external expertise?

Key advice: Your most important task during a crisis is to stay calm and make clear decisions. You don't need to understand all the technical details. You need to understand what the impact is, what the options are, and what the consequences are of each option.

The first 24 hours -- a playbook

The first hours after discovering a serious incident are decisive. This playbook gives you guidance.

| Time | Action | Who |
|---|---|---|
| T+0 min | Report received -- assess severity | IT/security team |
| T+15 min | Activate crisis team, schedule first briefing | Crisis manager |
| T+30 min | Take initial containment measures (isolate systems if needed) | IT/security team |
| T+1 hour | First crisis team meeting: situation overview, initial decisions on containment and communication | Crisis team |
| T+2 hours | Legal assessment: is there a notification obligation? Engage external incident response team if needed | Legal, board |
| T+4 hours | Internal communication to employees: what's going on, what is being done, what's expected of them | Communications, HR |
| T+4-8 hours | Assess whether notification to the Data Protection Authority is needed -- legal deadline is 72 hours | Legal, privacy officer |
| T+8 hours | Second crisis team meeting: progress, strategy adjustment, decisions on external communication | Crisis team |
| T+12 hours | Communication to directly affected customers or partners if needed | Communications, board |
| T+24 hours | Status update to crisis team and board. Establish plan for the next 48-72 hours | Crisis team |
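During a real crisis, relative offsets ("T+4 hours") are easy to lose track of. A sketch like the following turns the playbook into absolute deadlines for the crisis log; the `PLAYBOOK` structure and `schedule` function are illustrative, and the T+4-8 hours window is represented here by its earliest offset.

```python
from datetime import datetime, timedelta

# Offsets in minutes from T+0 (report received), per the playbook table.
PLAYBOOK = [
    (0,    "Report received -- assess severity", "IT/security team"),
    (15,   "Activate crisis team, schedule first briefing", "Crisis manager"),
    (30,   "Take initial containment measures", "IT/security team"),
    (60,   "First crisis team meeting", "Crisis team"),
    (120,  "Legal assessment; engage external IR team if needed", "Legal, board"),
    (240,  "Internal communication to employees", "Communications, HR"),
    (240,  "Assess AP notification need (legal deadline 72 h)", "Legal, privacy officer"),
    (480,  "Second crisis team meeting", "Crisis team"),
    (720,  "Communication to affected customers/partners", "Communications, board"),
    (1440, "Status update; plan the next 48-72 hours", "Crisis team"),
]

def schedule(t0: datetime):
    """Turn relative playbook offsets into absolute deadlines."""
    return [(t0 + timedelta(minutes=m), action, who) for m, action, who in PLAYBOOK]

# Example: a report received on a Friday evening.
for when, action, who in schedule(datetime(2024, 3, 1, 22, 15)):
    print(when.strftime("%a %H:%M"), "-", action, f"({who})")
```

Printing the absolute times makes it obvious that a Friday-evening report puts the first crisis meeting before midnight and customer communication early Saturday morning.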

Notification obligations -- who must you inform when?

After a cyber incident you are in many cases legally obligated to report. Not reporting can lead to substantial fines and reputational damage that is greater than the incident itself.

| Authority | When to report | Deadline | What to report |
|---|---|---|---|
| Data Protection Authority (AP) | In case of a data breach with personal data that poses a risk to data subjects | Within 72 hours of discovery | Nature of the breach, estimated number of affected individuals, possible consequences, measures taken |
| Data subjects (customers, employees) | If the data breach poses a high risk to their rights and freedoms | As soon as possible after reporting to the AP | What happened and what they can do themselves |
| NIS2 supervisor | In case of a significant incident affecting services (for organizations that fall under NIS2) | Initial report within 24 hours, followed by a more detailed report within 72 hours | Nature and severity of the incident, cross-border impact, measures taken |
| Contractual parties | When the incident affects services to customers or partners, or when contractual notification obligations exist | According to contractual agreements | Depending on the contract -- check your data processing agreements and SLAs |
| Sector-specific supervisor | Financial sector (DNB), telecom (Telecom Agency), healthcare (NZa) | Varies by sector | Sector-specific |

Note: The 72-hour deadline with the AP starts running from the moment of discovery, not from the moment you have the full picture. You can make an initial report with preliminary information and supplement it later. Not reporting because you "don't know everything yet" is not a valid reason and can prove costly.
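Because the 72-hour clock starts at discovery, it is worth computing the hard deadline immediately. A minimal sketch (the function names are illustrative, not from any official tooling):

```python
from datetime import datetime, timedelta

# GDPR/AP deadline runs from discovery -- not from having the full picture.
AP_DEADLINE = timedelta(hours=72)

def ap_report_deadline(discovered_at: datetime) -> datetime:
    """Latest moment for the (possibly preliminary) report to the AP."""
    return discovered_at + AP_DEADLINE

def hours_remaining(discovered_at: datetime, now: datetime) -> float:
    """Time left on the clock; negative means the deadline has passed."""
    return (ap_report_deadline(discovered_at) - now).total_seconds() / 3600

discovered = datetime(2024, 6, 3, 9, 30)
print(ap_report_deadline(discovered))                            # 2024-06-06 09:30:00
print(hours_remaining(discovered, datetime(2024, 6, 4, 9, 30)))  # 48.0
```

The point of the negative-value convention in `hours_remaining` is that a missed deadline still shows up in the crisis log, rather than silently disappearing.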

Crisis communication

Communication can make or break a crisis. Poorly communicated incidents often cause more damage than the incident itself.

Internal communication

Employees would rather hear the news from you than through the media. Inform them early, honestly and concretely. Tell them what happened (as far as known), what the organization is doing, and what's expected of them. Are there systems they shouldn't use? Do they need to change their password? Is there a central point where they can ask questions?

Communication with customers and partners

Be honest without unnecessary technical detail. Customers want to know: is my data affected? What should I do? What are you doing to prevent recurrence? Tell them what you know, admit what you don't yet know, and promise an update as soon as you know more.

Media

The media will come on their own. Prepare a standard statement that is factual, brief and human. Designate a spokesperson and ensure that everyone in the organization knows that only the spokesperson speaks to the media. Nothing is as damaging as contradictory messages from different departments.

Communication with regulators

Proactive, factual and cooperative. Regulators appreciate organizations that are transparent. Never try to hide anything -- it always comes out and the sanctions become much heavier.

Golden rule: Communicate early, communicate honestly, communicate regularly. Silence is always filled by speculation, and speculation is always worse than reality.

Tabletop exercises for management

A tabletop exercise is a simulated crisis at the meeting table. No live systems, no real attack -- just a scenario, a group of decision-makers, and the question: what do we do now?

The goal is not to get it "right." The goal is to discover what you don't know, where your plan has gaps, and who bears which responsibility.

How does it work?

A facilitator presents a realistic scenario in phases. After each phase the team discusses: what do we know, what do we decide, who does what? The facilitator adds complications -- a journalist calls, a critical system turns out to be unrecoverable, an employee leaks information on social media.

Three scenarios to start with

| Scenario | Key questions for the board |
|---|---|
| Ransomware encrypts all servers on a Friday evening | Do we pay the ransom? How do we communicate with customers who expect a working system on Monday? When do we report to the AP? |
| Data breach: customer data is on the internet | How big is the damage? Do we inform all customers or only the affected ones? What do we tell the media? |
| Insider threat: an employee has been copying data for months | How do we handle the involved employee? What are the legal implications? How do we prevent this in the future? |

How often to practice?

At least twice per year. Once per year is too infrequent -- by then the previous scenario is already forgotten. Alternate the scenarios. Involve different participants. And evaluate after each exercise: what action items come from this? Who owns them? When will they be resolved?

Practical tip: Invite an external incident response team to facilitate the exercise once. They bring realistic scenarios and ask the questions you don't dare to ask internally.

Further reading

A cyber incident rarely affects only your own organization. In many cases the causes -- or consequences -- lie in the chain of suppliers and service providers you work with. How you map and manage those dependencies is covered in the next chapter: Supply chain and supplier risk.

Do this this month

| Topic | Yes/No | Action needed |
|---|---|---|
| There is a current incident response plan (updated in the last 12 months) | | |
| The crisis team is composed with clear roles and responsibilities | | |
| There is a contract with an external incident response firm | | |
| Legal advice is available within 4 hours of an incident | | |
| There is a communication plan for internal and external crisis communication | | |
| The notification obligations (AP, NIS2, sector-specific, contractual) have been mapped | | |
| At least two tabletop exercises per year have been conducted | | |
| Backups are regularly tested for recoverability -- not just for existence | | |
| Contact details of the crisis team are also available when systems are down | | |

The most important lesson: A cyber crisis is not a question of if, but when. Organizations that prepare, survive. Organizations that don't, pay a price many times higher than the investment in preparation. Start today.
