
Continuously Testing the Effectiveness of your Zero Trust Controls

When we hear security practitioners, vendors and their customers talk about the Zero Trust framework, we see a lot of love given to five of the core pillars: Devices, Data, Workloads, Network and People – all very tangible ‘assets’ that need protecting, and for which a wide variety of capabilities exist to help achieve this protection.

[Image: Zero Trust diagram]

Zero Trust’s ‘belt and braces’

A holistic Zero Trust strategy should consider and provide coverage for each of these five pillars. But your strategy is not complete, and might not even get off the ground, if you don’t have a story around Automation & Orchestration and Visibility & Analytics – these are figuratively (and literally, if you look at the diagram above!) the ‘belt and braces’ that hold the five Zero Trust pillars together. Sadly, they often tend to be the most neglected areas in real-world Zero Trust journeys.

Why? Automation and Visibility can be the most costly and complex areas for vendors to deliver in their security offerings, and customers often lack the expertise to properly automate or analyse.

You can’t segment what you can’t see

At Illumio, we think of these two areas (Automation and Visibility) as core pillars in their own right rather than an afterthought. The journey we’re privileged to help our customers take as they set out to achieve their micro-segmentation outcomes starts with “Visibility and Analytics”. We build a detailed application dependency map, leveraging telemetry from workloads and metadata from a CMDB, to provide actionable traffic reports from which customers can start building their segmentation policies to establish microperimeters around their applications. In this case, Visibility isn’t the icing. It’s the cake.
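To make that concrete, here is a minimal sketch (not Illumio’s implementation) of how workload flow telemetry might be combined with CMDB labels to produce a simple application dependency map. The record fields, labels and addresses are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative flow telemetry: source/destination IP, destination port, protocol.
flows = [
    {"src_ip": "10.0.1.10", "dst_ip": "10.0.2.20", "dst_port": 5432, "proto": "tcp"},
    {"src_ip": "10.0.3.30", "dst_ip": "10.0.1.10", "dst_port": 443,  "proto": "tcp"},
]

# Illustrative CMDB metadata: IP address -> application label.
cmdb = {
    "10.0.1.10": {"app": "payments"},
    "10.0.2.20": {"app": "payments-db"},
    "10.0.3.30": {"app": "web-frontend"},
}

def build_dependency_map(flows, cmdb):
    """Aggregate raw flows into app-to-app edges with the observed services."""
    edges = defaultdict(set)
    for f in flows:
        src = cmdb.get(f["src_ip"], {}).get("app", "unlabelled")
        dst = cmdb.get(f["dst_ip"], {}).get("app", "unlabelled")
        edges[(src, dst)].add((f["proto"], f["dst_port"]))
    return edges

for (src, dst), services in build_dependency_map(flows, cmdb).items():
    print(f"{src} -> {dst}: {sorted(services)}")
```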

No single vendor can “Zero Trust-ify” you

Because all enterprises, even the seemingly simplest, are complex organisms with an equally complex and diverse technology stack, “Automation and Orchestration” has been a core, must-have part of our product from the outset. Our product is designed to be integrated with other systems and accessed programmatically via our open, documented APIs. In fact, the product UI is a skin on top of our REST APIs. We would go so far as to argue that there is no Zero Trust without Automation and Orchestration.
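As a rough illustration of what “programmatic access” looks like in practice, the sketch below queries workloads over a REST API using Python’s requests library. The base URL, resource path, query parameters and credentials are placeholders invented for this example, not Illumio’s actual API; the real calls are defined in the product’s API documentation.

```python
import requests

# Hypothetical base URL, credentials and resource path for illustration only;
# consult the vendor's API documentation for real resource names and auth.
BASE_URL = "https://pce.example.com/api/v2"
AUTH = ("api_user", "api_secret")

def list_workloads(label_filter):
    """Fetch workloads matching a label filter via the REST API."""
    resp = requests.get(
        f"{BASE_URL}/workloads",
        params=label_filter,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: pull every workload labelled as part of the 'payments' application.
for workload in list_workloads({"app": "payments"}):
    print(workload.get("hostname"))
```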

How do I know if this stuff works?

Our typical customer journey follows these steps:

  1. Get telemetry and metadata to build a map
  2. Use the map to build micro-segmentation policy
  3. Test policy before enforcing
  4. Enforce policy

And with monitoring in place throughout, we know when there is a violation of a defined policy, and users can take the necessary remediation actions.
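Step 3 is worth dwelling on: testing a policy means comparing what you observe against what you intend to allow, before anything is blocked. The following minimal sketch, using made-up flows and rules, shows the idea of reporting what a draft allow-list would block if it were enforced.

```python
# Draft allow-list rules: (source app, destination app, protocol, port).
allow_rules = {
    ("web-frontend", "payments", "tcp", 443),
    ("payments", "payments-db", "tcp", 5432),
}

# Flows observed while the policy is in test (non-enforcing) mode.
observed_flows = [
    ("web-frontend", "payments", "tcp", 443),
    ("web-frontend", "payments-db", "tcp", 5432),  # not in the draft policy
]

def test_policy(flows, rules):
    """Return the flows that the draft policy would block if enforced."""
    return [f for f in flows if f not in rules]

for src, dst, proto, port in test_policy(observed_flows, allow_rules):
    print(f"Would block: {src} -> {dst} {proto}/{port}")
```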

So, what’s the point of all this? Being able to understand how a specific Zero Trust control functions is hugely valuable (e.g., micro-segmentation policy matches/violations), but what about the effectiveness of the control in the greater context of an organisation’s overall Zero Trust strategy?

In this era of "assume breach," should there be a security incident, how quickly can your organisation answer the what, when, who, how and why questions? And, most importantly, which systems in your current arsenal can work in unison to help you get to those answers automatically and accurately?

MITRE I take a moment to digress?

Let’s take a quick aside and talk about the MITRE ATT&CK framework for a second.

The MITRE ATT&CK framework maps out the adversarial tactics, techniques and procedures (TTPs) that bad actors leverage to mount an attack – for example, an advanced persistent threat (APT)-based attack on a target. Using this information, and the shared common knowledge of an attacker’s behaviour as they leverage these TTPs, an organisation can develop defensive strategies to limit (and ideally prevent) the negative impact of these malicious activities. Further, the framework starts from a position of Assume Breach and is therefore focused entirely on post-compromise defence – ‘assume you’ll get breached, so focus on making it really hard to get pwned’.

From a Blue Team’s perspective, the ATT&CK framework, with its emphasis on having access to as much event data from relevant sources as possible, informs the process by which this data can be aggregated and correlated to properly identify malicious behaviours and, in turn, drive the necessary responses. MITRE’s own ATT&CK 101 blog post is an excellent starting point for all things ATT&CK.
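As a small, hedged illustration of that aggregate-and-correlate idea, the sketch below tags raw detection events with ATT&CK technique IDs so they can be compared across sources. The signals, event fields and mappings are assumptions made up for this example, not a production analytic.

```python
# Illustrative mapping from internal detection signals to ATT&CK technique IDs.
TECHNIQUE_TAGS = {
    "port_scan_detected": "T1046",  # Network Service Discovery
    "new_smb_session":    "T1021",  # Remote Services
}

# Illustrative events collected from different telemetry sources.
events = [
    {"host": "web-01", "signal": "port_scan_detected", "ts": "2020-06-01T10:02:11Z"},
    {"host": "db-01",  "signal": "new_smb_session",    "ts": "2020-06-01T10:04:52Z"},
]

def tag_events(events):
    """Attach an ATT&CK technique ID to each event where a mapping exists."""
    for event in events:
        event["technique"] = TECHNIQUE_TAGS.get(event["signal"], "unmapped")
    return events

for event in tag_events(events):
    print(event["ts"], event["host"], event["technique"])
```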

Measuring the efficacy of micro-segmentation

During the recent work on Testing the Efficacy of Micro-Segmentation, red team specialists Bishop Fox began by mapping the relevant parts of the MITRE ATT&CK framework to the techniques they would look to leverage in their attempt to ‘capture the flags’.

[Image: Bishop Fox mapping of MITRE ATT&CK techniques]

This identification of adversarial techniques then allowed them to determine how effective the Illumio Adaptive Security Platform was in helping to detect and defeat these attacks. MITRE has a great write-up on how the ATT&CK framework can be used to find cyber threats effectively.

So, with a security toolset that provides high-fidelity visibility and full access via its API, combined with a modelling framework such as MITRE ATT&CK, organisations are able to build tooling that can monitor Zero Trust controls, analyse telemetry, and respond automatically with the appropriate action. But how do you monitor the effectiveness of this tooling?
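A monitor-analyse-respond loop built on those ingredients might look something like the sketch below. Every function here (poll_violations, lookup_technique, quarantine_workload) is a hypothetical placeholder standing in for a real API integration; it shows the shape of the tooling, not any specific product’s interface.

```python
def poll_violations():
    """Placeholder: would pull policy-violation events from the platform API."""
    return [{"workload": "db-01", "signal": "new_smb_session"}]

def lookup_technique(signal):
    """Placeholder: would map a detection signal to an ATT&CK technique ID."""
    return {"new_smb_session": "T1021"}.get(signal, "unmapped")

def quarantine_workload(workload):
    """Placeholder: would apply a restrictive policy to the workload via API."""
    print(f"Quarantining {workload}")

def run_once():
    """One pass of the loop: monitor, analyse, then respond automatically."""
    for violation in poll_violations():
        technique = lookup_technique(violation["signal"])
        print(f"Violation on {violation['workload']} mapped to {technique}")
        if technique != "unmapped":
            quarantine_workload(violation["workload"])

if __name__ == "__main__":
    run_once()  # in practice this would run on a schedule or be event-driven
```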

Making continuous testing part of your Zero Trust DNA

One option, of course, is to hire an independent red team specialist to perform the role of an attacker while the organisation’s blue team leverages its carefully built analytics and security controls to monitor and respond. This is hugely valuable and recommended periodically. But what if there were a way to automate both the red team activity and the blue team response? Organisations could continuously test the effectiveness of their modelling and controls and take an approach of constant improvement. This is exactly what vendors like AttackIQ are now making possible. Through their technology, customers can both validate the effectiveness of a specific security control and, perhaps more interestingly, determine how their defences line up against sophisticated adversaries.
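To give a flavour of what “continuous” can mean in code, the sketch below checks whether a connection that policy should block is in fact blocked and reports pass or fail. The target host and port are placeholders, and real breach-and-attack-simulation platforms such as AttackIQ run far richer adversary emulations than this single probe.

```python
import socket

# Connections that segmentation policy should block from this vantage point.
# The hostname and port are placeholders for illustration only.
BLOCKED_TESTS = [
    ("payments-db.internal.example", 5432),
]

def control_effective(host, port, timeout=3):
    """Return True if the segmentation control blocked the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: the control failed
    except OSError:
        return True       # refused or timed out: the control held

for host, port in BLOCKED_TESTS:
    status = "PASS" if control_effective(host, port) else "FAIL"
    print(f"{status}: {host}:{port}")
```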

At Illumio, we are excited to partner with AttackIQ in the launch of their Preactive Security Exchange program because we understand that customers need to be able to measure and see value in their Zero Trust investments. The highly configurable, automated, repeatable testing platform that AttackIQ provides makes measuring the efficacy of Zero Trust controls an achievable goal for organisations. And as we know, once you can measure something, you can start improving it.

Check out our Zero Trust security page to learn more about how Illumio can help you on your Zero Trust journey.

