The SolarWinds compromise and its ongoing fallout have brought into sharp focus how difficult it is to control and validate every touchpoint an enterprise has with its external dependencies (be that a vendor, customer, or partner), and they further underline the old adage that “a chain is only as strong as its weakest link.”
The Target breach of the early 2010s and the more recent so-called ‘Cloud Hopper’ attacks had already highlighted the risks associated with supply chains, particularly when members of that chain have some form of direct access to your network (i.e., third-party relationships that require network access). Acknowledging this risk has led to greater focus on the security controls in place at these network touchpoints. Where implemented properly, these controls provide a clear understanding of where the touchpoints exist, what they allow access to, the level of access, and how that access is monitored. From a detection and response perspective, this ‘a priori’ information may also yield a more obvious set of indicators to watch for.
First and foremost, the SolarWinds compromise is unnerving because it attacked a point in our technology supply chains that we least expected (i.e., a signed update from a software vendor we trust). And yet, it is one we would all acknowledge as being ‘perfect’ in the opportunity it offered the backdoor attackers.
We can argue that the probability of an attack’s success is a function of three factors: available credentials, usable network access, and exploitable vulnerabilities. By that measure, a system with extensive network reach and highly privileged access to systems and directory services would sit near the top of any malicious actor’s wish list. The Orion platform provided exactly this opportunity.
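To make the point concrete, the three factors can be sketched as a toy composite score. The scoring model, weights, and example ratings below are invented purely for illustration; they are not drawn from any real risk framework.

```python
# Hypothetical illustration: attack-success likelihood as a function of
# available credentials, usable network access, and exploitable
# vulnerabilities. The 0-10 ratings and the formula are assumptions.

def attack_surface_score(credentials: int, network_reach: int, vulns: int) -> float:
    """Toy composite score on a 0-1 scale; each factor is rated 0-10."""
    # Treat the factors as multiplicative: a successful attack needs all
    # three to some degree, so a near-zero factor suppresses the score.
    return (credentials * network_reach * vulns) / 1000.0

# A monitoring platform in the Orion mould: broad network reach, privileged
# credentials to systems and directory services, plus an implanted backdoor.
orion_like = attack_surface_score(credentials=9, network_reach=10, vulns=8)

# Contrast with a low-privilege, network-isolated host.
isolated_host = attack_surface_score(credentials=2, network_reach=1, vulns=5)
```

Under this (deliberately simple) model, the monitoring platform dwarfs the isolated host, which is precisely why it is such an attractive beachhead.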
How the attackers managed to compromise SolarWinds’ code validation and build process is still under investigation, and no doubt all organisations that do their own development will be hugely interested in how they can mitigate the same risks.
What is top of mind now is the latent exposure, which is still being revealed. The level and range of access available make it difficult to truly gauge where the attackers have been able to go and hide. We know that the Orion update was used purely to establish a (highly prominent) beachhead in the target organisations; it was neither the end trophy nor necessary for maintaining persistence of presence or access.
Given this, the focus is very much on continued detection and response, and here all of us have a critical role to play. The discovery of this large-scale attack was possible because a seasoned incident response and cybersecurity vendor (FireEye / Mandiant) was itself compromised. They were, however, the first of the victims to detect the data theft, to disclose it publicly, and then to provide initial countermeasures. Perhaps this is one of the silver linings in this ongoing episode.
Security vendors need to determine, as a matter of urgency, how the solutions we provide can be best used to detect and then limit (or at least delay) the propagation of the attack.
- Do we have telemetry that Blue Teams can effectively utilise to build a more accurate picture of activity within an organisation, providing clearer indicators of compromise?
- Are there policies that can be quickly applied that will restrict lateral movement?
- Can our technology solutions expose – and potentially even mitigate – the risk that an application creates for an organisation, be it through the amount of connectivity, the use of privileged access, or the number of vulnerabilities on the workloads?
- Can we quickly identify when compromised credentials are being used and prevent further access?
- Are there easy correlations that can be drawn from multiple security solutions that would identify known and new TTPs?
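The correlation question above can be sketched in code. The example below flags a credential that authenticates from several distinct hosts within a short window, a common lateral-movement indicator. The event shape, window, and threshold are illustrative assumptions, not any particular vendor's telemetry format.

```python
# Hypothetical cross-source correlation: flag a credential seen
# authenticating from many distinct source hosts in a short window.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumed sliding window
HOST_THRESHOLD = 3               # assumed distinct-host threshold

def flag_lateral_movement(events):
    """events: iterable of (timestamp, credential, source_host), time-sorted.

    Returns the set of credentials whose recent authentications span
    HOST_THRESHOLD or more distinct hosts within WINDOW.
    """
    recent = defaultdict(list)   # credential -> [(timestamp, host), ...]
    flagged = set()
    for ts, cred, host in events:
        recent[cred].append((ts, host))
        # Keep only entries inside the sliding window.
        recent[cred] = [(t, h) for t, h in recent[cred] if ts - t <= WINDOW]
        if len({h for _, h in recent[cred]}) >= HOST_THRESHOLD:
            flagged.add(cred)
    return flagged
```

In practice such a rule would be one of many, fed by authentication logs from multiple sensors (endpoint, network, identity provider) rather than a single in-memory list.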
And what about the potential target organisations: what do they need to do? Now, more than ever, understanding the cyber risk associated with each asset is paramount.
- Can they quickly identify their most critical resources?
- How well do they understand the access model around these resources?
- Can they control communications to and from these servers/workloads? Should outbound communications even exist to begin with?
- Is there adequate monitoring at each layer (e.g., end-user device, network, identity and access management, application, etc.) and can it be improved?
- Can access policies be tightened to get closer to Zero Trust/least privilege?
- Will secure software development practices be revisited to ensure they are as strong as possible?
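The outbound-communication and least-privilege questions above boil down to a deny-by-default policy. The sketch below shows the idea for a couple of critical workloads; the asset names, destinations, and policy shape are hypothetical, not a real product's configuration.

```python
# Illustrative deny-by-default outbound policy for critical workloads.
# Asset names and allowed destinations are hypothetical examples.
ALLOWED_OUTBOUND = {
    "build-server": {("artifact-repo.internal", 443)},
    "domain-controller": set(),  # no outbound connections expected at all
}

def outbound_permitted(asset: str, dest: str, port: int) -> bool:
    """Permit a connection only if it is explicitly allowlisted.

    Anything not listed - including any asset missing from the policy -
    is denied, reflecting the Zero Trust / least-privilege default.
    """
    return (dest, port) in ALLOWED_OUTBOUND.get(asset, set())
```

The notable property is the empty allowlist for the domain controller: if outbound communications should not exist to begin with, the policy says so explicitly, and any attempt becomes a high-fidelity indicator.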
The infamous ‘Shock Doctrine’ paradigm suggests that in order to force a transformative change in a system, the entire system needs to be shocked into realising that such a change is necessary. And while the focus will rightly be on the security of and risk associated with supply chains, I believe the real metamorphosis needs to be as follows:
- Technology vendors MUST be able to educate their customers on how the existing solutions that they provide can be used to support detection and response functions.
- Customers (i.e., organisations) MUST focus on:
- Identifying their most critical assets.
- Actively monitoring these assets using a variety of sensors to identify indicators of malicious TTPs.
- Understanding the access model for these assets and enforcing least privilege policies to protect them.
In many ways, the opportunity presented here is really the starting paradigm of Zero Trust: “Assume breach – and make it really hard to get owned.”