
How to Design and Implement an Effective Container Microsegmentation Strategy with Kubernetes

Microsegmentation is often viewed as challenging to implement at scale. If your goal is to create a segment – a trust boundary – around every single workload in your entire cloud fabric, several factors must be considered during the architecture phase. Hosts deployed as bare-metal or VMs are familiar entities, and their behavior is well understood from both a networking and security perspective. But when you include container environments in the overall architecture, you introduce considerations that don’t normally arise in traditional network and security architectures.

When you deploy containers into your overall hybrid cloud, several questions will eventually emerge around security:

  • How do I automate the deployment and management of microsegmentation across all container workloads?
  • How do I include container segmentation policy and automation into the existing security tools used to manage bare-metal and VM hosts?
  • Will I need to manage two distinct microsegmentation solutions: one for containers and another for everything else?

Containers can behave strangely from a network and security perspective. For example, pods can suddenly die and later be spun back up automatically, but with a different IP address. Services, meanwhile, are deployed in front of pods and act like load balancers. So which of these entities should I define a segment for? A namespace can span across these entities, so how do I segment that? And how many workloads will I end up creating when everything is fully deployed?

Containers can be a difficult topic to understand on their own, and trying to solve the microsegmentation “problem” with containers can easily complicate the matter even more.

How can you solve the microsegmentation challenge so that you can introduce containers into your existing environment without breaking your current security strategy or hitting unexpected roadblocks as the architecture evolves?

Luckily, this is a solvable issue. Let me explain.

Considerations when adding containers to an existing microsegmentation strategy

A good place to start the conversation around containers and microsegmentation is by addressing scale. When designing a segmentation strategy for all of your workloads across your entire hybrid cloud, scale is always an important consideration. How big will the overall environment grow?

Generally, the answer to this question is to add up all of your hosts – bare-metal and VMs – and then perhaps double or triple that number to accommodate expected future growth. This number will be a bit fuzzy since some applications run on a cluster of hosts or VMs; one host doesn’t always equal one workload. But equating a workload with a host is a useful benchmark to estimate scaling numbers against. That final total number is then compared with the upper limits of managed workloads that a specific microsegmentation vendor can support.

Bare-metal hosts don’t migrate often, so they are pretty static entities to define segments around. VMs, on the other hand, can be a bit unpredictable. For example, they can be dynamically spun up and down, migrated across network segments, and assigned multiple IP addresses across their lifecycles. So the total number of hosts will be a bit fluid. That said, you can usually estimate how many VMs are expected to be active in your cloud in order to reach the total number of workloads that need to be managed and segmented. Often this final number will be in the hundreds or perhaps low thousands.  

Therefore, when considering the upper scale limits that different microsegmentation vendors can support, these maximum numbers will often seem “good enough.” For example, if a cloud has 1,000 workloads running today and this number may double or even triple over the next few years, there should be little concern about hitting a specific vendor’s upper limit of 20,000 managed workloads anytime soon. Big numbers are seen as a remote concern.

But what happens when you add containers to the picture? A containerized workload is a compute instance that behaves differently from VMs and bare-metal hosts.

For example, Kubernetes calls the underlying host running containers, either VM or bare-metal, a “node.” On each node, one or more “pods” are created, and it is within each pod that the actual container runtime instances run. Kubernetes recommends that no more than 110 pods be deployed on a given node.
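
That per-node ceiling is a kubelet setting. As a minimal sketch, a standard KubeletConfiguration can pin a node to the recommended limit (maxPods is a real field in the kubelet.config.k8s.io/v1beta1 schema; 110 is also its default):

```yaml
# KubeletConfiguration sketch: caps a node at the recommended 110 pods.
# maxPods defaults to 110; shown explicitly here for illustration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
```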

Therefore, if you have 100 nodes in your cloud running Kubernetes, and each node is running 110 pods, you can end up with 11,000 possible compute instances that need to somehow be defined as distinct segments. If you have 200 nodes, you can end up with 22,000 possible compute instances. That bears repeating: only 200 nodes in your container environment can result in 22,000 possible workload segments.

And this is just in your container environment. You will need to add all of the non-containerized workloads across your entire hybrid cloud in order to estimate the expected number of managed workloads and possible segments. The lesson: the upper limits on managed workloads that microsegmentation vendors can support no longer seem so remote.

One solution for both containers and non-containers

When considering how to segment a container environment, several vendors enable microsegmentation within and between clusters in either Kubernetes or OpenShift. However, most of these solutions focus specifically on container environments and not on the non-containerized workloads across your hybrid cloud. And the reality is, most networks that have container workloads also have non-containerized workloads, bare-metal and VMs, all co-existing in the same cloud fabric.

If you choose to deploy one segmentation solution for containers and a different segmentation solution for bare-metal and VMs, the result will be two distinct toolsets that don’t automate or correlate events between them. This approach may work at small scale but becomes difficult to operationalize and manage as the deployment grows. You should avoid this siloed approach to workload segmentation. Containerized workloads need to be managed in the same way as every other workload across the entire compute fabric in order to create a unified solution for deploying and managing all workload segmentation.

Illumio, for example, works across all workloads, from bare-metal to VMs to containers. There is no feature disparity between containerized workloads and non-containerized workloads, so you get microsegmentation with visualization, automation, and policy management for all workloads.

Namespaces, pods, or services?

Kubernetes defines three main container entities in which egress and ingress network traffic can be controlled: a pod, a service, or a namespace. (Note: nodes are not treated as segmentation targets among these entities, and a cluster is defined as the broadest boundary around a collection of nodes.) In addition, there is often a load balancer deployed at the cluster perimeter, resulting in four possible entities that can be segmented. When defining your microsegmentation architecture, which of these entities should be classified as a segment? Some of them or all of them?
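
For orientation, that cluster-perimeter load balancer is typically realized as an Ingress (or a LoadBalancer-type Service). Here is a minimal sketch of a standard Kubernetes Ingress; the hostname, namespace, service name, and port are hypothetical:

```yaml
# Ingress sketch: the entry point at the cluster perimeter.
# External traffic for shop.example.com is routed to a Service inside
# the cluster; this boundary is itself a candidate for segmentation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
  namespace: shop                  # hypothetical namespace
spec:
  rules:
    - host: shop.example.com       # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc # hypothetical Service
                port:
                  number: 443
```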

A pod is the smallest entity that can be assigned an IP address by Kubernetes. Container runtime instances run in one or more pods, and often these pods need to communicate with each other. Each pod can be defined as a segment, but the challenge is that Kubernetes can spin pods down and later spin them back up, which, from a networking perspective, means that source or destination IP addresses can suddenly disappear. Network and security teams don’t like entities that suddenly vanish from the overall fabric; it complicates route convergence and confuses security tools that key on IP addresses.
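
If you do define segments at the pod level, the practical approach is to select pods by label rather than by IP address, since labels survive pod churn while IPs do not. A minimal sketch using a standard Kubernetes NetworkPolicy; the namespace, labels, and port are hypothetical:

```yaml
# NetworkPolicy sketch: a pod-level segment defined by labels, not IPs.
# Pods labeled app: payments accept ingress only from pods labeled
# app: checkout, no matter how often either set is respawned.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-segment
  namespace: shop            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments          # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: checkout  # hypothetical label
      ports:
        - protocol: TCP
          port: 8443         # hypothetical port
```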

Kubernetes can also deploy a service in front of a given set of pods, acting almost like a load balancer for the pods behind it. Services are much more stable: while Kubernetes dynamically spins pods up and down, it rarely does so with services. Therefore, it is best practice to define a service as a segment rather than individual pods.
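
A service gives the pods behind it a stable virtual IP and DNS name, which is what makes it a durable segment boundary. A minimal sketch; the names and ports are hypothetical, and the selector matches the pod label from the previous example:

```yaml
# Service sketch: a stable front for an ephemeral set of pods.
# Traffic to payments-svc:443 is distributed across whichever pods
# currently carry the app: payments label.
apiVersion: v1
kind: Service
metadata:
  name: payments-svc     # hypothetical name
  namespace: shop        # hypothetical namespace
spec:
  selector:
    app: payments        # hypothetical pod label
  ports:
    - protocol: TCP
      port: 443          # stable port clients connect to
      targetPort: 8443   # hypothetical container port
```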

It’s important to ask your microsegmentation vendor whether it can define either a pod or a service as a segment, leaving the choice to your security administrator.

Applications deployed in containers will generally be deployed into a namespace, with code essentially running in a distributed fashion within one or more pods. A Kubernetes namespace is an abstraction that spans multiple pods and services.
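
Treating an entire namespace as a segment then amounts to matching on the namespace rather than on individual pods or services. A sketch using a standard namespaceSelector; the namespaces are hypothetical, while the kubernetes.io/metadata.name label is set automatically by Kubernetes on every namespace:

```yaml
# NetworkPolicy sketch: namespace-as-segment.
# Every pod in the shop namespace accepts ingress only from pods in
# the frontend namespace, regardless of pod- or service-level detail.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: shop-namespace-segment
  namespace: shop      # hypothetical namespace
spec:
  podSelector: {}      # empty selector = all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend  # hypothetical namespace
```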

Illumio, for example, enables you to define a “profile” against a namespace, and then define this profile as a segment. The result is that Illumio enables the definition of a segment to be either against a pod, or a service, or a namespace. And, unlike microsegmentation tools designed specifically for containerized environments, Illumio can also define segments against the underlying host, the ingress/egress points at the cluster boundary, and the surrounding legacy workloads which need to access resources within containers. Segments don’t only exist within containers – they exist across the entire cloud fabric.

This is why you should ensure your microsegmentation vendor can manage over 100,000 workloads. The more container environments are deployed in a cloud fabric, the faster these high numbers come into focus. And in containers, many of these workloads are ephemeral, coming to life and disappearing dynamically. This means that your microsegmentation solution needs to respond to changes in real time.

By using Illumio’s Kubelink instance deployed into a container cluster, you can dynamically discover workloads as they are deployed and decommissioned, populate our application dependency map, and enforce policy that reacts in real time to any and all changes in the workloads being managed. Automation and orchestration are two important concepts in containers, and Illumio implements both to operationalize microsegmentation management within and outside of container environments.

Deploying containers in your cloud should not mean sacrificing the ability to define segments around workloads, regardless of how they are deployed. Make sure your segmentation solution can continue to scale to the high workload counts that containers make possible, without hitting roadblocks. With Illumio Core, your company can reach Zero Trust around every single workload in your entire cloud fabric – regardless of scale.

Want more? Read how Illumio Core can secure Kubernetes and OpenShift.

Contact us today to learn how to secure your containers with Illumio Zero Trust Segmentation.

