Here's what I’m reading this week:
#Petya/#NotPetya: The big news this week has of course been the newest worm of the hour – a strange piece of ransomware modeled on the well-known Petya ransomware, but neither quite the same as its source nor clearly designed to be effective as, well, ransomware. There's been plenty of coverage of Petya already (read a useful overview here), and there will be more. Here are the three most useful (and frightening) insights I've gleaned:
Petya, like WannaCry and Adylkuzz, capitalizes on the drive for flatter networks.
Conficker hit the internet in November of 2008. There haven't been many high-profile worms (that is, self-propagating malware) since then. But in the last decade, there has been a constant push to open up our environments and flatten our networks, driven largely by organizations looking to make their compute more dynamic. The further we move into the cloud, virtualization, and containers, the further we get from the infrastructure of the network. And the more dynamic the network gets, the harder it is for traditional network segmentation (a network-security best practice for decades) to keep up.
Less segmentation means that modern networks are especially fertile targets for worms – perhaps more fertile than they have ever been. WannaCry and Adylkuzz capitalized on this, and Petya goes one better by adding PSExec- and WMI-based techniques to its catalog of lateral-movement options.
Everyone is still saying that patching is the answer, and patching is part of the solution. But Petya spread in part via patched hosts. What these new worms are really exploiting is the flat nature of modern networks. Until we figure out how to make segmentation and network controls keep up with the dynamic data center, we’re going to see more and more of these self-propagating threats.
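To make the segmentation point concrete, here's a minimal sketch of why flat networks favor worms. The subnets, hosts, and policy format are entirely hypothetical (invented for illustration, not drawn from any real environment); the model just counts which host pairs can reach TCP 445, the SMB port that EternalBlue-style exploits target:

```python
# Hypothetical segmentation model: count host pairs exposed on TCP 445 (SMB),
# the port targeted by the EternalBlue-style exploits these worms use.

SMB_PORT = 445

def reachable_pairs(hosts, allowed_flows):
    """Return (src, dst) pairs where src can reach dst on SMB_PORT.

    hosts: dict mapping host name -> subnet name
    allowed_flows: set of (src_subnet, dst_subnet, port) tuples
    """
    pairs = []
    for src, src_net in hosts.items():
        for dst, dst_net in hosts.items():
            if src == dst:
                continue
            if (src_net, dst_net, SMB_PORT) in allowed_flows:
                pairs.append((src, dst))
    return pairs

hosts = {
    "laptop-1": "office",
    "laptop-2": "office",
    "db-1": "datacenter",
    "db-2": "datacenter",
}

# Flat network: SMB is open between every pair of subnets.
flat = {(a, b, SMB_PORT)
        for a in ("office", "datacenter")
        for b in ("office", "datacenter")}

# Segmented network: SMB allowed only within the datacenter.
segmented = {("datacenter", "datacenter", SMB_PORT)}

print(len(reachable_pairs(hosts, flat)))       # 12 exposed pairs
print(len(reachable_pairs(hosts, segmented)))  # 2 exposed pairs
```

Even in this toy four-host network, going from segmented to flat multiplies the worm's attack surface sixfold; on a real network with thousands of hosts, the difference is what turns one compromised laptop into an enterprise-wide outbreak.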
Petya isn't very good ransomware, it seems oddly targeted at Ukraine, and its developers had access to the NSA exploits before the public ShadowBrokers dump.
Put these details together, and Petya starts to look less like a criminal act and more like another example of Russia targeting its opponents. We certainly don't know for sure, but if we've learned one thing about Russian cyber operations, it's that while it's often hard to be 100% certain who is behind an attack, there is plenty of evidence if you know where to look.
Here’s a good walkthrough of the analysis we have so far and what it might mean: "Petya: 'I Want To Believe'."
Petya used an automatic update as one of its initial attack vectors, and that’s really bad.
In many ways, automatic updates are a weak point in our security. We blithely download and execute new code – often without even realizing it – to keep our systems patched and up-to-date. We do this because we know that patched systems are more secure, and because we trust the companies that develop them.
Except that many people don't regularly update, because they don't notice their systems are out of date or because they don't trust the patches themselves. For years we have pushed people to update, while worrying that someone might one day use updates as an attack vector and undermine the entire "you must patch" message.
And this week it happened. Petya raises lots of questions, but perhaps its most troubling effect will be if it discourages users from patching, because every user that doesn't patch is more vulnerable, and lots of vulnerable users make all of us vulnerable.
So if you're reading this, yes: Petya was distributed in part through tainted updates, and no: that doesn’t mean that updates are a bad idea. Please, please still update. Thank you.
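One partial defense against a poisoned update channel is to verify each package against a digest published through a separate, trusted channel before executing it. A minimal sketch – the function name and the idea of an out-of-band digest are assumptions for illustration, not any vendor's actual update mechanism:

```python
import hashlib
import hmac

def verify_update(package_bytes, expected_sha256_hex):
    """Return True only if the package matches the published digest.

    expected_sha256_hex should come from a trusted, out-of-band source
    (e.g. a signed manifest), not from the same server as the package:
    an attacker who controls the update server could replace both.
    """
    actual = hashlib.sha256(package_bytes).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(actual, expected_sha256_hex)

package = b"...update payload..."
good_digest = hashlib.sha256(package).hexdigest()

print(verify_update(package, good_digest))                # True
print(verify_update(package + b"tampered", good_digest))  # False
```

A plain hash check like this only helps if the digest really is distributed independently of the package; signed updates (where the vendor's private key signs the manifest) are the stronger version of the same idea.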
McKinsey/Cybersecurity Strategy: Earlier this week McKinsey posted a podcast with Sam Palmisano (former CEO of IBM, now chairman of the Center for Global Enterprise) and me. In it, we talk about the impact of President Obama's Cybersecurity Commission, how to build a strategic model for thinking about cybersecurity, and how to differentiate between buzzwords and real innovation.
I'm reading: "Finding a strategic cybersecurity model."
- A recent study from Trustwave provides a wealth of data about environmental control and lateral movement. Perhaps most interesting: the median time from compromise to remediation sits at 62 days. This is a substantial improvement over earlier data sets (some of which put the figure as high as 200+ days), though it's hard to compare across research. Even so, intruders can still regularly spend two months (!) inside compromised networks before they are caught. If this is progress, we still have plenty of work to do.