Best Practices for Workload Segmentation: Lean and Streamlined or Heavy and Complex?
‘Cybersecurity’ covers a wide range of topics, including network priorities, host priorities, authentication, identity, automation, and compliance. Micro-segmentation is one piece of this broader discipline, but many still believe it is a challenge to implement at scale.
Most modern workloads include an integrated firewall and port filter, such as iptables on Linux. Configuring this on an individual workload is relatively straightforward, but doing so at large scale is often a challenge, and the ongoing migration from monolithic workloads to microservices makes that challenge even harder to operationalize. As a result, segmentation at the workload layer of the architecture frequently goes undone, and the task is simply left to the underlying network fabric. This creates a conflict in priorities, since network segmentation is often implemented for different reasons than what workload segmentation requires.
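To make the scale problem concrete, here is a minimal sketch of how per-workload firewall rules might be generated from a central inventory. The inventory format, hostnames, and flow definitions are illustrative assumptions, not from any specific product; a real deployment would also have to distribute and verify these rules on every host.

```python
# Hypothetical sketch: generating per-workload iptables rules from a
# central inventory. Hostnames, networks, and ports are illustrative.

# Each workload declares the inbound flows it should accept.
inventory = {
    "web-01": [("0.0.0.0/0", 443)],      # public HTTPS
    "app-01": [("10.0.1.0/24", 8080)],   # reachable only from the web tier
    "db-01":  [("10.0.2.0/24", 5432)],   # reachable only from the app tier
}

def rules_for(host):
    """Render an iptables rule set for one workload: default-deny
    inbound, plus explicit allows for each declared flow."""
    lines = [
        "iptables -P INPUT DROP",
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for src, port in inventory[host]:
        lines.append(
            f"iptables -A INPUT -p tcp -s {src} --dport {port} -j ACCEPT")
    return lines

for line in rules_for("db-01"):
    print(line)
```

Generating the rules is the easy part; the operational burden at scale lies in keeping the inventory accurate as workloads are created, moved, and retired.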
Two key requirements for segmentation at the workload layer in both hybrid cloud and microservices architectures are automation and support for large-scale architectures. Automation means removing the human element as much as possible: the more a human is involved in the operational process, the more likely misconfigurations and errors become. Supporting large-scale architectures means automating segmentation in a way that does not create roadblocks once the overall architecture reaches a certain scale.
These requirements can be implemented in one of two ways: a “heavy” approach or a “lightweight” approach. Let’s compare these two approaches to see which makes the most sense for you and your organization.
A tale of two approaches to micro-segmentation
Let’s first discuss what we consider to be a “heavy” approach. Heavier approaches, like Cisco Tetration for example, are focused on capturing every packet from every workload, performing network analytics across the network fabric, and requiring a significant amount of human intervention in implementing segmentation policy at scale. Operationalizing such tools is quite complex, and, therefore, can be described as a “heavy” approach to implementing micro-segmentation.
In contrast, “lightweight” approaches, like Illumio, were purpose-built for micro-segmentation. They were not born as one product and later modified into a different product. They focus exclusively on segmentation at the workload layer and are deliberately agnostic to how the underlying network is implemented, leaving the task of network analytics to networking tools. With this approach, implementing segmentation policy is streamlined, with an emphasis on automation, requiring little, if any, human intervention.
When will segmentation break my architecture?
It is a truism, but it bears repeating: the more complex the solution, the sooner it will reach an operational upper limit. At some point, a complex solution becomes a roadblock to how far the overall workload segmentation architecture can scale.
Heavier, complex solutions for micro-segmentation will work on smaller use cases, but as those use cases grow, they will eventually drive up operational overhead and hit a hard limit. At that point, additional workloads are often left unprotected, and automation begins to break down. The more tasks the solution performs, the more its complexity becomes an operational burden.
Micro-segmentation vendors frequently publish numbers that reflect their upper limit of managed workloads, and these numbers appear large enough to accommodate expected growth. But as more organizations adopt hybrid cloud architectures, virtualization creates many more workloads than traditional bare-metal hosts do. And microservices architectures add a significant number of IP-addressable entities within each host, causing the total number of workloads to grow quickly. The number of managed workloads should not be dictated by the tools used to operationalize segmentation. To guarantee an uninterrupted workload growth cycle, never assume that a smaller number will be sufficient.
To automate micro-segmentation in any evolving hybrid cloud architecture, the solution must not break down after some upper limit is reached.
Automation ≠ Humans
Automation requires a lean, streamlined architecture. A large percentage of security breaches in any cloud environment are due to honest mistakes by administrators. As workload lifecycles become more dynamic, and where they reside on network segments becomes increasingly ephemeral, it is more critical to automate security processes and remove the significant risk introduced by human intervention in the operational process.
Illumio, as an example of a lightweight approach, uses four-dimensional labels and application ringfencing to simplify and automate applying segmentation to a workload at the moment of its inception. It can suggest segmentation boundaries automatically or let the administrator define them, significantly reducing the risk that human intervention inadvertently introduces an error.
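The four-dimensional labeling concept can be sketched in a few lines. The label names (role, app, environment, location), the policy format, and the matching logic below are an illustrative simplification, not Illumio's actual data model or API; the point is that policy follows labels rather than IP addresses.

```python
# Illustrative sketch of label-based segmentation using four
# dimensions (role, app, env, loc). Labels and rule format are
# hypothetical, not a real product's data model.

workloads = {
    "10.0.1.5": {"role": "web", "app": "store", "env": "prod", "loc": "us-east"},
    "10.0.2.9": {"role": "db",  "app": "store", "env": "prod", "loc": "us-east"},
    "10.0.3.4": {"role": "db",  "app": "store", "env": "dev",  "loc": "us-west"},
}

# A ringfence rule: the prod 'store' web tier may reach the prod 'store' db tier.
rule = {
    "source":      {"role": "web", "app": "store", "env": "prod"},
    "destination": {"role": "db",  "app": "store", "env": "prod"},
}

def matches(labels, selector):
    """A workload matches a selector if every selector label agrees."""
    return all(labels.get(k) == v for k, v in selector.items())

def allowed(src_ip, dst_ip):
    """Policy is evaluated on labels, not addresses: a workload that
    moves to a new IP keeps its labels and therefore its policy."""
    return (matches(workloads[src_ip], rule["source"]) and
            matches(workloads[dst_ip], rule["destination"]))

print(allowed("10.0.1.5", "10.0.2.9"))  # prod web -> prod db: True
print(allowed("10.0.1.5", "10.0.3.4"))  # prod web -> dev db: False
```

Because the rule never mentions an IP address, a newly provisioned workload is covered the moment it is labeled, which is what makes this model amenable to automation.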
Most heavy solutions, like Tetration, include options for applying tags to workloads in order to track them independently of IP addressing. That said, the process is “heavy,” complex, and requires a significant amount of initial human interaction and expertise to operationalize. And as you can guess, the more a process requires human intervention and expertise, the greater the risk for unintentional error.
When planning for automating workload segmentation, keep this rule in mind: the more complex the process, the higher the risk.
Introduce microservices, expect more workloads
The migration of application development from monolithic workloads to microservices significantly increases the number of workloads that need to be managed. With the advent of virtualization, a single bare-metal host could run many VMs, each with its own IP address. Now, with microservices, each of those VMs can host many containerized constructs, resulting in even more IP addresses.
If every entity on a network with an IP address is defined as a workload, a microservices environment can make the number of workloads explode. The sheer number of workloads that need to be monitored requires a solution that can scale to very large numbers.
Visualization is paramount
Monitoring a workload means two basic things: policy enforcement and visualization. But how do you visualize what applications are doing to each other across a large number of workloads? Visualizing workload communication cannot be dependent on network segmentation. In the case of microservices, the need for visualization extends beyond VM-to-VM traffic and needs to include communication between pods, nodes, and services when using Kubernetes or OpenShift to orchestrate container lifecycles.
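A small sketch illustrates why host-level visibility alone is not enough in Kubernetes. The flow records and field names below are hypothetical, but they capture the dimensions (pod, node, service) a visibility tool needs to record:

```python
# Sketch: hypothetical flow records in a Kubernetes environment.
# Field names are illustrative, not from any specific tool.

flows = [
    {"src_pod": "web-7f9c", "dst_pod": "cart-2b1a", "dst_service": "cart",
     "src_node": "node-1", "dst_node": "node-2", "port": 8080},
    {"src_pod": "cart-2b1a", "dst_pod": "db-0", "dst_service": "db",
     "src_node": "node-2", "dst_node": "node-2", "port": 5432},
]

# Flows that never cross a host boundary are invisible to a tool that
# only maps traffic between hosts.
intra_node = [f for f in flows if f["src_node"] == f["dst_node"]]
print(len(intra_node))  # 1: the cart -> db flow stays inside node-2
```

Here the cart-to-database flow would never appear on a host-to-host traffic map, even though it is exactly the kind of dependency a segmentation policy needs to account for.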
Heavy solutions, like Tetration, may enforce policy within a containerized environment, but visualizing application traffic within these constructs is limited. These solutions can often create a visual map of traffic between hosts, but the view stops there, and traffic between containerized constructs within a host is missing. On the other hand, a lightweight solution extends visibility all the way from bare-metal hosts into VMs and all containerized constructs within any host. All workloads, whether monolithic or containerized, are fully visible when Illumio, for example, builds its application dependency map.
Visualizing all application traffic and behavior, regardless of how it is hosted, is essential as your workloads evolve across hybrid cloud and different compute resources, across both on-premises data centers and public cloud fabrics. Visualization becomes more important as these details become more complex and dynamic, in order to create and enforce human-readable, declarative policy.
The bottom line
When deciding how to implement workload segmentation at scale and in an automated fashion, a lean and streamlined approach is the only viable option. Just as a speedboat is sometimes a better choice than a battleship when the goal is speed and agility on the water, the same applies to how micro-segmentation is deployed across modern hybrid clouds and compute fabrics. Reduce complexity and keep the solution “lightweight,” not “heavy.”