Adaptive Segmentation / Micro-Segmentation · July 15, 2020

Operationalizing Zero Trust – Step 4: Prescribe What Data is Needed

Raghu Nandakumara, Field CTO

This blog series expands on ideas introduced in my March post, “Zero Trust is not hard … If you’re pragmatic.”

In that post, I outlined six steps to achieve Zero Trust. Here, I'd like to expand on one of those steps – prescribing what data is needed – and show how it supports a solid framework that any micro-segmentation practitioner can use to make their projects more successful, irrespective of the organisation's size.

Before I begin, here’s a refresher on the six steps:


Step 4: Prescribe what data is needed

In the last post from this series, I looked at “Determine which Zero Trust pillar to focus on” and “Specify the exact control.” Evaluating those steps resulted in the following inputs to help you move forward in operationalising Zero Trust:

  • Determine which Zero Trust pillar to focus on: Workload security and visibility because the Zero Trust Maturity Assessment you performed identified these as the pillars with the most significant gaps.
  • Specify the exact control: Since the assessment identified excessive network access as the most significant security gap, the control you will focus on is micro-segmentation.

Once you have zeroed in on exactly what you want to improve protection for and what controls you want to leverage, you can begin piecing together the information needed in order to implement such controls effectively.

Let’s start with the desired end state:

  • You want to build a micro-segmentation policy to protect your workloads.
  • You want this policy to follow the principles of Zero Trust.
  • Hence, the rules you construct must allow only the access into and out of your workloads that is needed to perform their business function.

So, what do you need for this? The answer differs slightly depending on whether you already know which flows are necessary, or whether you are starting from scratch in a brownfield environment that has been operating for many years.

  • If you have pre-existing knowledge: Specify a segmentation rule based on Source IP, Destination IP, Port, Protocol
  • If you are in a brownfield environment: Get traffic logs to help you identify flows that may be relevant
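If you do have pre-existing knowledge, the rule itself is simple to express. Here is a minimal, hypothetical sketch (the class and function names are mine, not from any specific product) of a segmentation rule as the 4-tuple above, evaluated with the Zero Trust default-deny posture: a flow is allowed only if an explicit rule matches.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentationRule:
    source: str       # source IP (or CIDR/label in a fuller implementation)
    destination: str  # destination IP
    port: int
    protocol: str     # "tcp" or "udp"

def flow_allowed(rules, source, destination, port, protocol):
    """Default-deny: a flow is permitted only if an explicit rule matches it."""
    return any(
        r.source == source
        and r.destination == destination
        and r.port == port
        and r.protocol == protocol.lower()
        for r in rules
    )

# A single rule permitting the web server to query the DNS responder on 53/udp.
rules = [SegmentationRule("10.0.1.10", "10.0.2.53", 53, "udp")]

print(flow_allowed(rules, "10.0.1.10", "10.0.2.53", 53, "UDP"))  # True: explicitly allowed
print(flow_allowed(rules, "10.0.1.10", "10.0.9.9", 443, "TCP"))  # False: no rule, so denied
```

In practice the matching would be richer (CIDR ranges, label sets), but the principle is the same: anything not explicitly allowed is denied.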

Have you ever spent hours, even days, poring over firewall traffic logs trying to figure out what a particular connection is doing? Have you been forced to hunt for information, or for people who can provide the context needed to understand a flow's purpose? Have you then repeated this for the next line in the logs, and the next, and the next…? Now picture doing this for every application in scope for segmentation – not my idea of fun. It's like playing 'find the needle in the haystack' on repeat.

Now, imagine an alternate universe where all this traffic data provides more than the standard 5 tuples of information. What if you could see the context of a connection straight away, without any hunting, and understand a traffic event just by looking at it? It's like going from a black-and-white movie with no audio to 4K with Dolby Atmos sound.

To put this into context, let’s use an example.

Usual Traffic Log:

  • Source:
  • Destination:
  • Port: 53
  • Protocol: UDP
  • Action: Allow

Traffic Log with Context:

  • Source:
  • Source Context: Web Server, Payments Application, Production, UK
  • Destination:
  • Destination Context: DNS Responder, DNS Infrastructure, Production, UK
  • Destination Process: named
  • Port: 53
  • Protocol: UDP
  • Action: Allow

As an application owner or security operations team member, one version of the event is clearly superior. The version with context provides a complete picture of the flow: you can see that the Production Payments Web Server has a dependency on the Production DNS Responder, which has the named process receiving connections on 53/udp. As a reviewer, you can quickly decide if it's a flow you are interested in, if it's normal traffic or whether it warrants further investigation. You can classify it easily (or even build some tooling to classify it automatically), and you can only do this because of the additional context you have.
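The automatic classification mentioned above can be sketched in a few lines. This is a hypothetical illustration, not any product's API: the event shape and label values mirror the contextual log example, and the rules (known infrastructure roles, environment mismatch) are assumptions for demonstration.

```python
def classify_flow(event):
    """Bucket an enriched flow event using its source/destination context."""
    dest_role = event.get("destination_context", {}).get("role", "")
    src_env = event.get("source_context", {}).get("environment", "")
    dst_env = event.get("destination_context", {}).get("environment", "")

    if dest_role in {"DNS Responder", "NTP Server", "Log Collector"}:
        return "core-service"        # expected infrastructure dependency
    if src_env != dst_env:
        return "needs-review"        # e.g. Development talking to Production
    return "application-traffic"     # ordinary same-environment app flow

# The contextual log example from above, as a structured event.
event = {
    "source_context": {"role": "Web Server", "application": "Payments",
                       "environment": "Production", "location": "UK"},
    "destination_context": {"role": "DNS Responder", "application": "DNS Infrastructure",
                            "environment": "Production", "location": "UK"},
    "port": 53, "protocol": "udp", "action": "allow",
}
print(classify_flow(event))  # core-service
```

Without the context fields, none of these decisions are possible from the 5-tuple alone, which is precisely the point.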

One of the most important aspects of Zero Trust – and it doesn't get nearly as much coverage as it should – is that effective implementation relies on access to context information, or metadata, to help formulate policy. So, when talking about micro-segmentation to protect workloads, the minimum metadata you need beyond a standard traffic record is the information that describes workloads in the context of your data centre's applications and environments.

The Illumio Adaptive Security Platform uses this metadata, harvested from an organisation's CMDB or other golden (authoritative) source, to populate the labels associated with a workload. These labels associate a Role, Application, Environment and Location with each workload and help us build a rich application dependency map that clearly identifies upstream and downstream dependencies for each application. And this puts us in a great position to review flows and design policy.
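To make the dependency-map idea concrete, here is a minimal sketch assuming flows whose endpoints already carry labels like the ones described above. The data shape and function name are illustrative, not Illumio's actual API.

```python
from collections import defaultdict

def build_dependency_map(flows):
    """Map each application to the set of downstream applications it connects to."""
    deps = defaultdict(set)
    for f in flows:
        src_app = f["source_labels"]["application"]
        dst_app = f["destination_labels"]["application"]
        if src_app != dst_app:          # intra-application flows are not dependencies
            deps[src_app].add(dst_app)
    return deps

flows = [
    {"source_labels": {"application": "Payments"},
     "destination_labels": {"application": "DNS Infrastructure"}},
    {"source_labels": {"application": "Payments"},
     "destination_labels": {"application": "Payments"}},  # intra-app, ignored
]
print(build_dependency_map(flows))  # {'Payments': {'DNS Infrastructure'}}
```

Each key's value set is exactly the list of downstream dependencies a policy author needs to review when writing outbound rules for that application.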

In my next post, I’ll discuss how to design a Zero Trust policy.

Ready to take the next step on your Zero Trust journey? Visit our page on how to operationalize your Zero Trust strategy with micro-segmentation to get the inside scoop.
