June 3, 2020

Can you measure the efficacy of micro-segmentation?

Raghu Nandakumara, Field CTO

“If you cannot measure it, you cannot control it” – Lord Kelvin 

Quantitative measurements inform everything we do, whether we're comparing products, judging the success of a project, or tracking the development of a sports team. They let us make real, objective, like-for-like comparisons rather than relying solely on subjective opinion. Yet when it comes to enterprise security products, we seem happy with claims that a product will make us more compliant, improve our security, or detect threats better. More, improve, and better are all qualitative measures, and accepting them is a curious approach. Increasingly, however, the savvy security buyer is asking for numbers to back up vendor claims, with questions like: "How will the success of this product or solution be measured?"

The outpouring of articles in the security media over the last few years makes it clear that micro-segmentation is now an essential security control for organisations: "table stakes", if you will, in any security strategy. Its central role in any Zero Trust strategy is unsurprising, since it limits lateral movement and impedes an attacker's ability to navigate the network towards the intended target. Micro-segmentation is the quintessential example of least privilege: only things that should be allowed to communicate may communicate, nothing more, nothing less. However, those implementing (or contemplating implementing) micro-segmentation have historically lacked quantitative measures to demonstrate its efficacy.

At Illumio we felt it was important to quantitatively demonstrate the benefits of micro-segmentation, show how the impact changes as the size of the environment increases, and document a clear testing methodology that any organisation could repeat to validate these results in its own environment. To achieve this, we partnered with red team specialists Bishop Fox to conduct and document an industry-first blueprint for measuring the efficacy of micro-segmentation, based on the main components of the MITRE ATT&CK® framework.

The Bishop Fox team were tasked with finding a pair of ‘crown jewel’ assets in a test environment over a series of rounds. Think of this like a ‘capture the flag’ exercise, but with no blue team to actively defend the environment. In total, there were four rounds of testing including the control test. With each test, the micro-segmentation policy became tighter and tighter:

  • Control Test – No micro-segmentation controls in place (essentially a flat network)
  • Use Case 1 – Environmental Separation (i.e. the micro-segmentation is pretty coarse-grained and ensures that workloads in different environments – production, testing, development – can only connect to other workloads in the same environment)
  • Use Case 2 – Application Ringfencing (i.e. the next level of micro-segmentation granularity, where only workloads associated with a specific application (e.g. a payments processing application or an HRM application) in a specific environment can talk to each other. Think of this as really tightening the noose)
  • Use Case 3 – Tier Segmentation (i.e. one of the most fine-grained forms of micro-segmentation policy, ensuring that only workloads associated with a specific tier (e.g. web tier, DB tier, etc.) within a specific application in a specific environment can talk to each other)

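To make the three granularity levels concrete, here is a minimal sketch (the class, field names, and use-case labels are illustrative assumptions, not Illumio's data model) of which workload labels must match for two workloads to be allowed to talk under each use case:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    env: str   # e.g. "production", "testing", "development"
    app: str   # e.g. "payments", "hrm"
    tier: str  # e.g. "web", "db"

# Label fields that must match for two workloads to communicate,
# per use case. Names here are hypothetical, for illustration only.
USE_CASE_SCOPE = {
    "environmental_separation": ("env",),
    "application_ringfencing":  ("env", "app"),
    "tier_segmentation":        ("env", "app", "tier"),
}

def may_communicate(a: Workload, b: Workload, use_case: str) -> bool:
    """True iff a and b share every label the use case's policy scopes on."""
    return all(getattr(a, f) == getattr(b, f) for f in USE_CASE_SCOPE[use_case])
```

For example, a production payments web server and a production payments database share env and app, so they may talk under Application Ringfencing, but not under Tier Segmentation (different tiers) unless an explicit rule permits it.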
The Bishop Fox team had no prior knowledge of the test environment, and the entire environment was destroyed and rebuilt for each test, meaning nothing carried over between rounds; in particular, topology and IP addresses were blown away. All the micro-segmentation policies were defined using a whitelist (default-deny) approach: rules were written to explicitly allow authorised traffic, so anything without a rule was not permitted and therefore blocked. The initial set of tests was done on a 100-workload environment (considerably smaller than the deployment size of most medium-sized organisations), with repeats of Use Case 2 at 500 and 1000 workloads.
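A default-deny policy can be sketched in a few lines. This is a toy model under assumed rule fields (src, dst, port); it is not the syntax of any real product, but it shows the key property: with no matching allow rule, the answer is always "deny".

```python
# A minimal allowlist firewall model: rules explicitly permit traffic;
# any connection without a matching rule is denied by default.
ALLOW_RULES = [
    {"src": "web", "dst": "app", "port": 8080},  # web tier -> app tier
    {"src": "app", "dst": "db",  "port": 5432},  # app tier -> database
]

def evaluate(src: str, dst: str, port: int) -> str:
    """Return 'allow' only if an explicit rule matches; otherwise 'deny'."""
    for rule in ALLOW_RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return "allow"
    return "deny"  # default deny: no rule, no connection
```

Under this model, a web-tier workload connecting straight to the database on port 5432 is blocked, because no rule permits that pathway, and each such blocked attempt is a signal a defender can alert on.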

Here were the observations for tests involving 100 workloads:

[Table 1: Bishop Fox test results at 100 workloads]

This data shows that even a very simple environmental separation policy (Use Case 1) makes it at least 300% harder for an attacker to enumerate and reach the target, and the relatively low incremental effort of applying Application Ringfencing policies (Use Case 2) results in a 450% increase in difficulty. But it's not just the raw increase in attacker effort that makes micro-segmentation a compelling security control. Look at the rise in the number of blocked connections: the potential to detect an attacker attempting unauthorised connections makes the incremental effort an appealing investment for the defender. The same pattern shows in the volume of traffic generated across the Control, Use Case 1 and Use Case 2 tests; unless spread over a very long period of time, these spikes in connection volumes should trigger alerts in the SOC, leading to investigations.

Also interesting is the drop-off in connections attempted in Use Case 3, where tier segmentation was applied. The environment was so tightly segmented (think Boa Constrictor at lunchtime) that the adversary was forced to change tactics compared with Use Case 2. That change didn't ultimately yield a more efficient win for the attacker: despite the comparatively low effort required of the defender to tighten the micro-segmentation policy, the total time to success still increased, finishing 950% above the control experiment. In other words, tighter restrictions forced a material change in the attacker's approach, and even that change left them significantly worse off.

The headline from this first round of tests: wouldn't you like to make the adversary's job anywhere between 3x and 10x more difficult? If so, implement micro-segmentation.

Here is the data for Use Case 2 (Application Ringfencing) at 100, 500 and 1000 workloads:

[Table 2: Use Case 2 (Application Ringfencing) results at 100, 500 and 1000 workloads]

The key takeaway here is that as the size of the protected estate increases, the attacker's job gets measurably more difficult (between 4.5x and 22x), even with no change to the nature of the segmentation policy. That is a strong justification for looking beyond a highly tactical segmentation deployment and extending this capability across the entire estate: the benefits are real.

So, what does this all mean? Here are my final thoughts:

  1. Micro-segmentation needs to take a whitelist approach. Only by eliminating everything but approved pathways can you truly measure the improvement in security posture.
  2. Micro-segmentation, even with a very simple environmental separation policy, makes it at least 3 times more difficult for an attacker to achieve their goal. That alone is strong justification for investing in this capability.
  3. Increasing the deployment size of micro-segmentation, without changing the policy definition, yields additional security benefit in itself. Organisations should aim to extend segmentation across their entire estate, not just a small tactical subset.
  4. Increasing the sophistication of a micro-segmentation policy forces the attacker to change approach, usually at a cost in time and with a heightened chance of detection.

Check out this video for a summary of the findings:


For more information, download a copy of the report today and join us for a live webinar on Tuesday, June 16.
