“We had security in place, so how did the ransomware get through?” – a pointed question that has opened many boardroom crisis meetings in the aftermath of a major cyber breach. At such a moment, a great deal is at stake for the business – brand reputation, compliance fines, loss of investor confidence, stock price fluctuations, and even ransom payment considerations.
And in reality, the question is far from out of place. Endpoint and perimeter security solutions are ubiquitous across business networks, which raises two further questions: How is ransomware still getting through? And, more importantly, why is it still able to spread so quickly, wreaking havoc at such an alarming rate?
In this article, we will examine these questions. We will also rethink the traditional approaches that have dominated the endpoint security story up until now.
The story: Same script, different actors
In many cases, the approach by threat actors makes for a familiar plot with an eerily predictable ending, and it typically goes something like this:
Target low-hanging fruit such as an end user’s endpoint (credentials) or web front-end server
Use a combination of direct and indirect probing and social engineering for initial compromise
Discover what else can be accessed and pivot from the compromised machine
Escalate privileges to be able to move to other machines
Continue spreading to the high-value systems and then complete malicious objectives
Rinse and repeat
In this story, everyone is a target.
Take, for example, the contract developer working remotely to deliver a critical piece of business software for a client. They typically work to very tight deadlines and are sometimes under pressure to cut corners to finish on time. This type of endpoint user is an ideal candidate for a supply chain attack because, if compromised, their endpoint can be used to infiltrate the source code control (CI/CD) pipeline. Another example is sales and marketing executives, who are most often on the road participating in meetings and events. They, too, are at risk of social engineering and phishing attacks because they are more likely to be accessing public, unprotected networks.
Such users and their endpoints, once compromised, give the threat actor a pivot point from which to continue the attack. They will then try to get into higher-privilege accounts and move to other machines before eventually reaching important systems such as database systems, document management systems, or customer relationship management systems. These are often the high-value assets that hold the organization’s most valuable business and personal information.
The status quo: Cyber defense fatigue
Threat actors have no shortage of techniques on their side – from fileless malware leveraging sophisticated code injection and highly evasive ransomware payloads to age-old vulnerabilities still effective due to organizations’ reliance on legacy systems hosting very old, but important, business code which may not be very easy to replace. It sometimes appears as though attackers have the advantage.
And they may well have it: defenders face a barrage of threat actors with an ever-growing arsenal of malicious capabilities. As the saying goes, defenders have to get it right all the time, while attackers only have to get it right once. The pressure, therefore, sits squarely with the defenders.
Not to belabor the point further, but here is an example of the Brute Ratel payload and associated MITRE TTP map to show what defenders are up against from just one payload. And there are many different variants of malware with such capability.
In the image, the little dot on the far left-hand side is the Brute Ratel payload. To the right are the many different tactics and techniques that this single payload can employ to infect a system, evade detection, and carry on its malicious objectives.
This demonstrates part of the reason why breaches and ransomware cases continue to increase even through an impressive evolution of endpoint security tools – Antivirus (AV), Next Generation AV (NGAV), Endpoint Protection Platforms (EPP), Endpoint Detection and Response (EDR), and so on. These tools, combined with an even longer list of protection features – Signature Analysis, Application/Process Control, Heuristics, Behavioural Analysis, Exploit Prevention, Sandboxing, and the list goes on – were supposed to solve the problem, yet the problem appears only to have been exacerbated. The proliferation of ransomware over the last few years underscores how serious it remains.
So why is malware still able to get through?
There may be a number of reasons for this. In some cases, the existing, traditionally detection-first security systems missed the threat altogether. This could be because of a zero-day vulnerability or highly evasive techniques. It could also be that a necessary security module was not configured properly, or that the right module was never implemented at all – whether due to budget constraints or because false positives were blocking day-to-day business use cases.
Against a backdrop of an almost endless list of attack capabilities – end-user and vendor-side vulnerabilities, plus the ever-increasing complexity of modern hybrid networks – something is bound to go wrong. And it usually does.
Cyber resilience: Zero Trust endpoint security
So what next? Enter the cooperative security paradigm: traditional detection-based endpoint security married to the newer Zero Trust endpoint security approach.
It is a security paradigm based on proactive security and Zero Trust principles, working alongside existing security tools without requiring any network or system changes. It uses multi-platform endpoint data, centrally analyzed by an intelligent, scalable central brain.
It starts with non-disruptive deployment that takes minutes. It then provides insight into east-west communications in addition to the typically monitored north-south traffic. This is necessary because organizations must be able to see and track communications across different operating systems, platforms, and locations simultaneously – all without additional system or network changes. Here, we see an example of different endpoints and server workloads, including public cloud assets, logically grouped (without network changes) by their designated location names.
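As a rough illustration of what this kind of logical grouping looks like, the sketch below groups endpoint inventory records by a location label without touching the network itself. The record fields, hostnames, and label names are all hypothetical, not any real product’s schema:

```python
from collections import defaultdict

# Hypothetical inventory records reported by endpoint agents.
endpoints = [
    {"host": "dev-laptop-01", "os": "Windows", "location": "Remote"},
    {"host": "crm-db-01", "os": "Linux", "location": "Data Center"},
    {"host": "web-fe-02", "os": "Linux", "location": "AWS"},
    {"host": "sales-mac-07", "os": "Mac", "location": "Remote"},
]

# Group purely by label -- no VLANs, subnets, or network changes involved.
by_location = defaultdict(list)
for ep in endpoints:
    by_location[ep["location"]].append(ep["host"])

for location, hosts in sorted(by_location.items()):
    print(f"{location}: {', '.join(sorted(hosts))}")
```

The point of the sketch is that the grouping lives entirely in metadata: moving a workload between groups means changing a label, not re-addressing the network.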
Endpoints do not exist in a vacuum; they communicate with many different systems. They may even try to communicate with other endpoints, although in many cases that communication is neither required nor desirable, given the high risk of lateral movement. This makes visibility into endpoint communication a critical capability.
Endpoints also communicate with servers and workloads. Some of these may be under the organization’s control in its data centers, while others may be third-party cloud SaaS services. Visibility into this communication is necessary to understand the risks on both the endpoints and the servers they connect to.
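One simple way to picture this visibility is a per-endpoint summary of observed flows. The sketch below tallies which peers each endpoint talks to from a list of hypothetical flow records, and flags endpoint-to-endpoint traffic as a lateral-movement risk; the hostnames, ports, and record format are invented for illustration:

```python
from collections import defaultdict

# Hypothetical flow records (source, destination, port) from endpoint telemetry.
flows = [
    ("dev-laptop-01", "git.internal", 443),
    ("dev-laptop-01", "crm-db-01", 5432),            # endpoint reaching a database
    ("sales-mac-07", "saas-crm.example.com", 443),   # third-party SaaS traffic
    ("sales-mac-07", "dev-laptop-01", 445),          # endpoint-to-endpoint traffic
]

# Build a map of every peer each source endpoint has talked to.
peers = defaultdict(set)
for src, dst, port in flows:
    peers[src].add((dst, port))

# Flag peer-to-peer endpoint traffic, which is rarely required.
endpoint_names = {"dev-laptop-01", "sales-mac-07"}
for src, dests in sorted(peers.items()):
    for dst, port in sorted(dests):
        risky = "  <-- endpoint-to-endpoint" if dst in endpoint_names else ""
        print(f"{src} -> {dst}:{port}{risky}")
```

Even a toy summary like this makes the east-west traffic visible alongside the expected north-south connections, which is the precondition for deciding what should be allowed.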
Visibility, as explained above, is a necessary precursor to always-on enforcement, which provides containment when detection by traditional endpoint security and EDR fails. Enforcement capabilities include identity- and domain-based policies to kill off malware call-back communications, process-based rules for nanosegmentation, and communication control with firewall and vulnerability-based threat policies. Whether the policies are Zero Trust allow-lists or deny-list boundaries, security policy is defined and provisioned based on traffic intelligence: what systems we have, what they do, and, based on that, how to protect them adequately. Location awareness adds policy flexibility both on and off the corporate network. All of this should be applied uniformly across platforms – Windows, Linux, Mac, Unix, cloud, and containers.
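To make the default-deny enforcement idea concrete, here is a minimal, hypothetical sketch of a policy check that combines process-based and destination-based allow rules. The rule format, process names, and hosts are invented for illustration and are not any vendor’s actual policy language:

```python
# Hypothetical allow rules: (process, destination, port); "*" is a wildcard.
# Anything not explicitly matched is denied -- a default-deny (Zero Trust) stance.
ALLOW_RULES = [
    ("chrome.exe", "*", 443),                 # browsers may reach the web over TLS
    ("backup-agent", "backup.internal", 8443) # backup agent to its own server only
]

def is_allowed(process: str, dest: str, port: int) -> bool:
    """Return True only if some rule explicitly allows this connection."""
    for rule_proc, rule_dest, rule_port in ALLOW_RULES:
        proc_ok = rule_proc in ("*", process)
        dest_ok = rule_dest in ("*", dest)
        port_ok = rule_port in ("*", port)
        if proc_ok and dest_ok and port_ok:
            return True
    return False

# Ordinary browsing matches a rule; a call-back from an unexpected
# process is denied by default, containing it even if detection missed it.
print(is_allowed("chrome.exe", "cdn.example.com", 443))    # True
print(is_allowed("powershell.exe", "198.51.100.7", 8080))  # False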
Achieving these objectives from a defender’s point of view requires teams and tools to collaborate effectively. An organization that assumes breach is likely to be focusing on cyber resilience rather than detection alone. They put as much importance on containment strategies as detection.
This type of containment approach was recently put to the test by Bishop Fox. Through multiple attack emulations, they found that breaches were detected up to 4 times faster with Illumio Zero Trust Segmentation because attackers had to circumvent these containment methods, creating more noise for detection tools to pick up on.
Overall, segmentation on endpoints leads to a positive security posture that, in collaboration with existing security investments, makes for a great endpoint story!
Want to learn more about Illumio Endpoint? Contact us today for a free consultation and demo.