Security

For CISOs who don’t wannacry next time: how to take your incident response to the next level

Published on 24 May 2017

Wannacry made the headlines for all the wrong reasons. So what can we learn from it?


Earlier this month the Wannacry cyberattack hit some of the world’s most high-profile (and important) organizations. The consequences were bad, but they could easily have been much worse had the attack not been stopped and contained within a few days.

Now, with Wannacry an unpleasant recent memory, it’s time to take stock of what happened and why, asking ourselves what we can do to prevent similar attacks in the future.

(Note: for the purposes of this blog I’m going to focus on the period between 12 and 15 May 2017, the most critical time for any organization responding to the attack.)


Why did Wannacry spread so fast?

Wannacry began in earnest on 12 May 2017 when a previously unknown ransomware hit internal networks and spread fast, without any user interaction. A significant aspect of the malware was its worm component, which exploited an unpatched SMB vulnerability in Windows-based systems (the EternalBlue exploit, leaked by The Shadow Brokers).

Famously (at least now), many of the systems affected by Wannacry were still running unsupported versions of Windows, with the NHS in particular relying heavily on Windows XP devices connected to LANs. These old systems became the initial attack surface for Wannacry, which then spread to newer but unpatched core systems.

This combination of unsupported old tech and unpatched new tech meant that the malware could move quickly through entire networks, causing outages as it went.


How did organizations tackle it?

At Balabit we were constantly receiving information from our customers and colleagues during the Wannacry outbreak, helping organizations minimize risk while analyzing our own exposure.
This gave us a unique insight into how people responded as the attack took place and took hold. Broadly speaking, the fightback consisted of five steps.

 

Step one: Isolation

With infected endpoints needing to be isolated as soon as possible, IT teams all over the world were ripping out power cables as soon as they saw the malware.

Step two: Information gathering

With the problem (relatively) contained, it was time to figure out what it was, how it worked and how it could be managed.

National CERTs released official alerts on the Friday morning, just after the initial attack. But the most efficient channels for information sharing were Twitter and security blogs, along with informal communication between companies.

Step three: Network segmentation

Wannacry spread over SMBv1, so the protocol had to be filtered out of network traffic (in practice, blocking TCP port 445 between segments). It was a risky decision, as no one could know which services would be affected. But it was necessary to prevent the malware from spreading while keeping business processes alive.
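One way to verify that this kind of filtering actually took effect is to probe whether hosts still accept connections on the SMB port from a given segment. The sketch below is illustrative only: the host addresses are hypothetical, and a simple TCP connect test is a rough stand-in for a proper firewall audit.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical internal hosts to audit after applying the SMB filter.
    for host in ["10.0.1.10", "10.0.1.11", "10.0.2.20"]:
        state = "STILL REACHABLE" if smb_port_open(host) else "filtered/closed"
        print(f"{host}:445 -> {state}")
```

Running a check like this from each network segment gives a quick sanity check that the filter rules are really in place where you think they are.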

Step four: Implement countermeasures

Initial IOCs were shared within the security community on the afternoon of 12 May. Meanwhile, anti-virus vendors released their signatures for Wannacry. Even so, it took many organizations several hours to update IDS and firewall rules, AV systems, and Windows clients and servers.
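Once IOCs are in hand, the first countermeasure is usually mechanical: sweep your logs for any line that mentions a known indicator. A minimal sketch of that sweep follows; the indicator values are placeholders, not real Wannacry IOCs.

```python
# Minimal IOC sweep: flag every log line containing a known indicator.
# The indicator values below are placeholders, not real Wannacry IOCs.
IOCS = {
    "deadbeefdeadbeefdeadbeefdeadbeef",  # example file hash
    "malicious.example.com",             # example C2 domain
}

def match_iocs(log_lines, iocs=IOCS):
    """Return (line_number, line, indicator) for each line containing an IOC."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        lowered = line.lower()
        for ioc in iocs:
            if ioc in lowered:
                hits.append((n, line, ioc))
    return hits
```

In practice you would feed this from proxy, DNS and endpoint logs, and treat every hit as a host to isolate and re-image.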

Step five: Go home and hope

When @malwaretechblog, with the help of Darien Huss, found and registered the “kill switch” domain, the spread of Wannacry slowed dramatically. However, security teams still feared what might come next: a new variant without a kill switch, a misconfiguration, or their organization becoming one of the unlucky few to make the headlines.
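The kill switch itself was simple: before spreading, the malware tried to reach a hard-coded, unregistered domain and stopped if it got a response, so registering that domain effectively halted it. A sketch of that logic, simplified here to a DNS lookup with an injectable resolver for testing (the domain in any real check would be the hard-coded one, not shown here):

```python
import socket

def killswitch_triggered(domain: str, resolve=socket.gethostbyname) -> bool:
    """Mimic the Wannacry check: if the domain resolves, stop spreading."""
    try:
        resolve(domain)
        return True   # domain resolves -> kill switch engaged, malware halts
    except OSError:
        return False  # lookup fails -> malware would continue spreading
```

The lesson for defenders was sobering: the outbreak slowed not because of anyone’s patching discipline, but because one researcher noticed this check and registered the domain.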

What can we learn about best practice from the Wannacry incident?

If there’s one key lesson any organization can take from Wannacry, it’s that preparation and review are everything. Many organizations did not experience Wannacry and, as a result, have not yet discovered that their incident management processes break down in a crisis.

The first step for these organizations should be to consolidate and log all the information provided by existing security tools, using a central solution that can find potential IOCs with a single search.
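What “a single search” means in practice is merging the output of every tool into one time-ordered stream and querying it once. The sketch below is a toy version of that idea, assuming each source yields (timestamp, source, message) tuples; real central log solutions obviously do far more.

```python
import re

def consolidate(*sources):
    """Merge (timestamp, source, message) tuples from several security tools
    into a single time-ordered stream."""
    merged = [entry for src in sources for entry in src]
    merged.sort(key=lambda e: e[0])  # ISO-8601 timestamps sort lexically
    return merged

def search(entries, pattern):
    """Run one case-insensitive query across every consolidated log line."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [e for e in entries if rx.search(e[2])]
```

With this shape, a responder can ask one question ("which hosts touched port 445 or tripped an AV signature?") instead of grepping each tool’s logs separately.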

For those with more mature processes (who may have gone through the five steps listed above during Wannacry), it’s more important to use a privileged session management solution to reconstruct who did what in the network over that weekend. By reviewing exactly what happened on administrators’ screens, they can quickly highlight sessions that contained risky activity. This is vital, because a fast reaction to critical issues often carries the risk of human error.

Whatever your personal experience of Wannacry was, there is something to learn from it. With Privileged Access Management solutions and logging tools, it’s far easier to know what did happen, what might happen, and how to prevent unwanted attacks on your network.

If you want to be better prepared against the next Wannacry, read our Faster Incident Response eBook. If you would like to see Balabit in action, register for a demo today.

by Csaba Krasznay

Csaba Krasznay is Balabit's Security Evangelist. He is responsible for the vision and strategy of Balabit's Privileged Access Management solutions. He was named “Most Influential IT Security Expert of the Year” in 2011.


