Adaptive Segmentation / Micro-segmentation · August 19, 2020

Operationalizing Zero Trust – Step 5: Design the Policy

Raghu Nandakumara, Field CTO

This blog series expands on ideas introduced in my March post, “Zero Trust is not hard … If you’re pragmatic.”

In that post, I outlined six steps to achieving Zero Trust. Here I'd like to expand on one of them: designing the policy. I'll show how this step supports a solid framework that any micro-segmentation practitioner can use to make their projects more successful, regardless of the organization's size.

Before I begin, here’s a refresher on the six steps:

[Diagram: the six steps to Zero Trust]

Step 5: Design the Policy

In the last post in this series, I looked at “prescribing what data is needed.” In that article, I made the following point:

“One of the most important aspects of Zero Trust – and it doesn’t get nearly as much coverage as it should – is that effective implementation of Zero Trust relies on access to context information, or metadata, to help formulate policy. So, when talking about micro-segmentation in the context of protecting workloads, the minimum metadata outside a standard traffic report you need describes workloads in the context of your data centre applications and environments.”

Based on this statement, the three key bits of data we need are:

  1. Real-time traffic events for the workloads we want to protect.
  2. Context data for each workload and connection – this includes metadata associated with the workload that would come from a system of record, such as a CMDB, as well as information sourced directly from the workload, such as details of the communicating process.
  3. An application dependency map (derived from items 1 and 2) that allows an application owner or segmentation practitioner to quickly visualise a specific application’s upstream and downstream dependencies.
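As a rough sketch of what these inputs might look like in practice (the class and field names here are hypothetical, not any vendor's API), the workload context and traffic events could be modeled like this:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class WorkloadContext:
    """Metadata from a system of record, such as a CMDB."""
    role: str      # e.g. "Web Server"
    app: str       # e.g. "Payments Application"
    env: str       # e.g. "Production"
    location: str  # e.g. "UK"


@dataclass(frozen=True)
class TrafficEvent:
    """A single observed connection between two workloads."""
    src_ip: str
    src_context: WorkloadContext
    dst_ip: str
    dst_context: WorkloadContext
    dst_process: str  # process listening on the destination, e.g. "named"
    port: int
    protocol: str     # "tcp" or "udp"
    action: str       # "Allow" or "Block"
```

The dependency map (item 3) is then just an aggregation over a stream of these events, grouped by context rather than by IP address.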
Putting it all together

So now you’re almost ready to build that policy, but let me remind you of the objectives:

  • You want to build a micro-segmentation policy to protect your workloads.
  • You want this policy to follow the principles of Zero Trust.
  • Hence, the rules you construct must allow only the access into and out of the workloads that are needed to perform their business function.

Following on from the data I said was necessary, here are a few example traffic log entries that can be used to build a policy:

Traffic Log Connection 1:

  • Source: 10.0.0.1,10.0.0.2
  • Source Context: Web Server, Payments Application, Production, UK
  • Destination: 192.168.0.1
  • Destination Context: DNS Responder, DNS Infrastructure, Production, UK
  • Destination Process: named
  • Port: 53
  • Protocol: UDP
  • Action: Allow

Traffic Log Connection 2:

  • Source: 10.0.0.1,10.0.0.2
  • Source Context: Web Server, Payments Application, Production, UK
  • Destination: 10.0.1.5,10.0.1.6,10.0.1.7
  • Destination Context: App Server, Payments Application, Production, UK
  • Destination Process: tomcat
  • Port: 8080
  • Protocol: TCP
  • Action: Allow

Traffic Log Connection 3:

  • Source: 10.0.1.5,10.0.1.6,10.0.1.7
  • Source Context: App Server, Payments Application, Production, UK
  • Destination: 192.168.0.1
  • Destination Context: DNS Responder, DNS Infrastructure, Production, UK
  • Destination Process: named
  • Port: 53
  • Protocol: UDP
  • Action: Allow

Traffic Log Connection 4:

  • Source: 10.1.0.1,10.1.0.2
  • Source Context: Web Server, Payments Application, Production, Germany
  • Destination: 10.0.1.5,10.0.1.6,10.0.1.7
  • Destination Context: App Server, Payments Application, Production, UK
  • Destination Process: httpd
  • Port: 80
  • Protocol: TCP
  • Action: Allow

Traffic Log Connection 5:

  • Source: 10.1.2.1,10.1.2.2
  • Source Context: Database Server, Payments Application, Production, Germany
  • Destination: 10.0.1.5,10.0.1.6,10.0.1.7
  • Destination Context: App Server, Payments Application, Production, UK
  • Destination Process: httpd
  • Port: 80
  • Protocol: TCP
  • Action: Allow

Using this data, you can quickly derive the application dependency map.

 

[Image: application dependency map showing all observed flows]

So far, so good.
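The derivation itself is conceptually simple: collapse the IP-level log entries into context-level edges. A minimal sketch (contexts abbreviated to role/app/env/country tuples; this is illustrative Python, not a product feature):

```python
from collections import defaultdict


def dependency_map(events):
    """Collapse raw traffic events into context-level dependencies.

    Returns {(src_context, dst_context): set of (port, protocol, process)}.
    """
    deps = defaultdict(set)
    for e in events:
        deps[(e["src_context"], e["dst_context"])].add(
            (e["port"], e["protocol"], e["dst_process"])
        )
    return dict(deps)


# The five example connections, with contexts abbreviated.
events = [
    {"src_context": ("Web", "Payments", "Prod", "UK"),
     "dst_context": ("DNS", "DNS Infra", "Prod", "UK"),
     "dst_process": "named", "port": 53, "protocol": "udp"},
    {"src_context": ("Web", "Payments", "Prod", "UK"),
     "dst_context": ("App", "Payments", "Prod", "UK"),
     "dst_process": "tomcat", "port": 8080, "protocol": "tcp"},
    {"src_context": ("App", "Payments", "Prod", "UK"),
     "dst_context": ("DNS", "DNS Infra", "Prod", "UK"),
     "dst_process": "named", "port": 53, "protocol": "udp"},
    {"src_context": ("Web", "Payments", "Prod", "Germany"),
     "dst_context": ("App", "Payments", "Prod", "UK"),
     "dst_process": "httpd", "port": 80, "protocol": "tcp"},
    {"src_context": ("DB", "Payments", "Prod", "Germany"),
     "dst_context": ("App", "Payments", "Prod", "UK"),
     "dst_process": "httpd", "port": 80, "protocol": "tcp"},
]

deps = dependency_map(events)
# Five distinct context-level edges, regardless of how many IPs were involved.
```

Note that the map has one edge per pair of contexts, not per pair of IP addresses – this is exactly why the context data matters.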

Now, you can look at your application dependency map to determine which flows you actually want to permit. Based on knowledge of your application, you know that the following are required flows, for example:

  1. Web Server, Payments, Production, UK -> DNS Responder, DNS Infrastructure, Production, UK on 53/udp
  2. App Server, Payments, Production, UK -> DNS Responder, DNS Infrastructure, Production, UK on 53/udp
  3. Web Server, Payments, Production, UK -> App Server, Payments, Production, UK on 8080/tcp

You also know that the following two flows don’t look right and, therefore, shouldn’t be included in your initial rules:

  1. Web Server, Payments, Production, Germany -> App Server, Payments, Production, UK on 80/tcp
  2. DB Server, Payments, Production, Germany -> App Server, Payments, Production, UK on 80/tcp

The application dependency map that you will use to build rules will end up looking like this:

[Image: application dependency map with only the required flows]

Now, how do you actually express these rules? With traditional firewalls, you would be forced to define these using source and destination IP addresses. But going about it this way completely removes all the rich context information you’ve benefited from when discovering these flows and, worse still, means that this context must be re-inserted when the rule comes under review. Also, what happens when you add an additional DNS Responder or a new App Server or Web Server for the Payments App?

Let’s keep in mind that you are trying to build a policy that adheres to Zero Trust principles, namely ensuring that it is always least privilege. A context-based approach, with an adaptive security engine working its magic in the background, facilitates exactly this. Just as your policy expands to incorporate a new server with existing context, you’ll also want it to shrink when you decommission a server. If you retire one of your DNS Responders, for example, you’ll want all rules that previously allowed access to or from it to be updated so that this access is no longer possible.

This is exactly what Illumio’s Policy Compute Engine (PCE) is designed to do: micro-segmentation policy is defined using metadata, and the PCE determines which workloads match that metadata at any given time, then computes the actual rules that need to be enforced at each workload to maintain its Zero Trust security posture. Every time there is a change in context, the PCE adapts the policy and notifies workloads of the updates.
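To make the idea of metadata-driven policy concrete, here is a toy sketch (not Illumio's PCE – just the general technique): rules are written against context, and concrete IP-level rules are recomputed from the current workload inventory whenever it changes.

```python
# Workload inventory: IP -> context tuple (role, app, env, country).
workloads = {
    "10.0.0.1": ("Web", "Payments", "Prod", "UK"),
    "10.0.0.2": ("Web", "Payments", "Prod", "UK"),
    "192.168.0.1": ("DNS", "DNS Infra", "Prod", "UK"),
}

# Policy is expressed against metadata, never against IPs.
rules = [
    # (source context, destination context, port, protocol)
    (("Web", "Payments", "Prod", "UK"), ("DNS", "DNS Infra", "Prod", "UK"), 53, "udp"),
]


def compute_enforced_rules(workloads, rules):
    """Expand metadata rules into concrete IP-level rules for the current inventory."""
    concrete = []
    for src_ctx, dst_ctx, port, proto in rules:
        srcs = [ip for ip, ctx in workloads.items() if ctx == src_ctx]
        dsts = [ip for ip, ctx in workloads.items() if ctx == dst_ctx]
        concrete += [(s, d, port, proto) for s in srcs for d in dsts]
    return concrete


before = compute_enforced_rules(workloads, rules)  # 2 web x 1 DNS = 2 IP rules
workloads["192.168.0.2"] = ("DNS", "DNS Infra", "Prod", "UK")  # add a DNS Responder
after = compute_enforced_rules(workloads, rules)   # policy expands: 2 x 2 = 4 rules
del workloads["192.168.0.1"]                       # decommission a DNS Responder
shrunk = compute_enforced_rules(workloads, rules)  # policy shrinks: back to 2 rules
```

The metadata rule never changes; only the computed enforcement does, which is what keeps the policy least-privilege as the environment churns.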

With this in mind, your Zero Trust policy boils down to the following rules:

Rule 1:

  • Source: Web Server, Payments, Production, UK
  • Destination: DNS Responder, DNS Infrastructure, Production, UK
  • Destination Service: 53/udp
  • Destination Process: named

Rule 2:

  • Source: App Server, Payments, Production, UK
  • Destination: DNS Responder, DNS Infrastructure, Production, UK
  • Destination Service: 53/udp
  • Destination Process: named

Rule 3:

  • Source: Web Server, Payments, Production, UK
  • Destination: App Server, Payments, Production, UK
  • Destination Service: 8080/tcp
  • Destination Process: tomcat

And that’s it.
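As a final check, the three rules can be written down as data and evaluated default-deny against the observed flows – the two suspicious Germany-to-UK flows are blocked simply by not being listed. (A sketch with contexts abbreviated; plain Python, not any vendor's rule syntax.)

```python
# The three Zero Trust rules as data: (src context, dst context, port, protocol, process).
ALLOW_RULES = {
    (("Web", "Payments", "Prod", "UK"), ("DNS", "DNS Infra", "Prod", "UK"), 53, "udp", "named"),
    (("App", "Payments", "Prod", "UK"), ("DNS", "DNS Infra", "Prod", "UK"), 53, "udp", "named"),
    (("Web", "Payments", "Prod", "UK"), ("App", "Payments", "Prod", "UK"), 8080, "tcp", "tomcat"),
}


def is_allowed(src_ctx, dst_ctx, port, proto, process):
    """Default-deny: a flow passes only if it matches an explicit rule."""
    return (src_ctx, dst_ctx, port, proto, process) in ALLOW_RULES


# Required flow: UK web -> UK app on 8080/tcp to tomcat is allowed.
assert is_allowed(("Web", "Payments", "Prod", "UK"),
                  ("App", "Payments", "Prod", "UK"), 8080, "tcp", "tomcat")
# Suspicious flow: German web -> UK app on 80/tcp is implicitly denied.
assert not is_allowed(("Web", "Payments", "Prod", "Germany"),
                      ("App", "Payments", "Prod", "UK"), 80, "tcp", "httpd")
```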

Ready to take the next step on your Zero Trust journey? Visit our page on how to operationalize your Zero Trust strategy with micro-segmentation to get the inside scoop.
