Ultimately, Zero Trust Segmentation is about making and enforcing security rules.
By establishing carefully defined access policies, Zero Trust Segmentation prevents breaches from spreading across IT systems and environments.
In any organization, it’s inevitable that at least one endpoint device is going to be breached by attackers. But if the organization has Zero Trust Segmentation security in place, the breach can be confined to that initial endpoint, regardless of whether that endpoint is a laptop, desktop, server, or even a virtual machine.
Segmentation policies will prevent malware from accessing the network ports and protocols it needs to copy itself to other mission-critical servers and data centers or to explore the network in search of valuable data. Segmentation traps the attack in place, like a fly under a glass.
Zero Trust Segmentation: Two approaches
A rules engine is software that defines the syntax for segmentation rules. It also enforces those rules once they are defined. Generally, segmentation software vendors take two different approaches when designing rules engines for customers.
The first approach is to offer customers maximum flexibility, allowing stakeholders across the organization to define rules with whatever categories or labels they like.
The other approach is to adopt a design philosophy known as structured policy control. This approach limits the number of labels available for segmentation rules. It also puts rulemaking under the control of a centralized team of IT security experts. Vendors adopting this approach believe that, in the end, simplicity will be more effective at curtailing attacks than open-ended complexity.
We'll now examine these approaches and compare their advantages and challenges for real-life deployments.
The maximum flexibility approach to segmentation
In any organization, security requirements will vary from department to department and from use case to use case. Different applications will require different rules. Even the same application might have different rules depending on which data center it’s running in, which version of software it’s running, which resources it’s relying on, and so on.
Many segmentation vendors address this need for flexibility by allowing different users and application owners to set their own rules for their area of expertise or responsibility.
Typically, these vendors support three types of rules:
- “Block” rules for preventing traffic movement along certain pathways
- “Allow” rules for providing permission for certain types of traffic to travel a pathway
- “Override” rules for preempting other rules to either block or allow traffic in particular situations
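To make the interaction of these three rule types concrete, here is a minimal sketch of how such a rules engine might resolve a single connection attempt. The class names, fields, and precedence order (override beats block, block beats allow, unmatched traffic allowed) are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    kind: str          # "block", "allow", or "override"
    source: str        # label of the traffic source
    dest: str          # label of the traffic destination
    port: int
    verdict: str = ""  # for "override" rules: the forced "block" or "allow"

def evaluate(rules, source, dest, port):
    """Return 'allow' or 'block' for one connection attempt.

    Assumed precedence: override rules preempt everything else;
    otherwise block beats allow; unmatched traffic is allowed
    (a common default in flexible engines).
    """
    matches = [r for r in rules
               if r.source == source and r.dest == dest and r.port == port]
    for r in matches:
        if r.kind == "override":
            return r.verdict           # an override wins outright
    if any(r.kind == "block" for r in matches):
        return "block"                 # block beats allow
    if any(r.kind == "allow" for r in matches):
        return "allow"
    return "allow"                     # nothing matched: default-allow

# Three rules from three different owners, all touching the same pathway:
rules = [
    Rule("allow", "web", "db", 5432),
    Rule("block", "web", "db", 5432),
    Rule("override", "web", "db", 5432, verdict="allow"),
]
print(evaluate(rules, "web", "db", 5432))   # the override wins
```

Even in this toy version, the final verdict depends on which rule types happen to exist for a pathway, which is exactly the kind of interaction that becomes hard to reason about at scale.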
At a high level, this distributed approach to flexible rulemaking sounds promising. After all, IT asset owners should have the domain knowledge they need for setting the segmentation rules best suited for a particular IT asset or group of related assets. And providing three types of rules — block, allow, and override — seems to offer IT security teams and business stakeholders the precision needed to define just the right security policies for protecting IT assets.
Unfortunately, in most organizations — especially large organizations with tens or hundreds of thousands of workloads distributed across cloud, on-premises, and endpoint environments — what soon results from this approach is “the Wild West.”
Sure, stakeholders across the organization have defined rules to protect valuable IT assets. But this ultimately leads to chaos because there are too many rules of too many different sorts to manage effectively.
Because rulemaking is distributed and uncoordinated, the collection of rules ends up having conflicts and omissions, creating opportunities for attackers to slip through. Central, consistent governance is virtually impossible.
Here’s what creates these “Wild West” conditions:
- Domains of responsibility often overlap.
For example, one person might be in charge of a business application, and another person might be in charge of a database. The application might rely on the database, but the access rules defined for the application and the database are developed independently. As a result, the two sets of rules are liable to be inconsistent, especially if other applications also use the database.
Another example: One person is in charge of the company’s customer relationship management (CRM) application. Another person is in charge of the company’s New York data center, where the CRM application happens to be running. Even if these two people agree on security philosophies, it’s highly unlikely that their independent implementations of segmentation rules will work flawlessly together. There’s just too much complexity involving IP addresses, ports and protocols to author tens or hundreds of rules independently and effectively.
- Segmentation control is distributed rather than centralized, so testing rarely occurs.
Because control is distributed across so many stakeholders, it’s difficult for the IT organization to test all segmentation rules before activating them. Lack of testing increases the risk of errors and oversights. It may even lead to business-critical traffic inadvertently being blocked.
- Support for unlimited or high numbers of labels leads to confusion.
Segmentation products that distribute control this way usually allow customers to define their own categories or labels for segmentation.
Taking advantage of this flexibility, customers soon have twenty, thirty, or more labels for their segmentation policies. For example, a customer might label all IT assets involved in PCI compliance with a “PCI Compliance” label. They might also label all assets in a specific location with the location name or have labels for business units, applications, environments (e.g., test vs. production), additional governmental regulations (such as GDPR), and so on.
In theory, this proliferation of labels provides precision and visibility. In practice, it leads to security models that are too complex to manage effectively.
Teams can try to reduce this chaos through rule ordering — for example, structuring rule sets to enforce the data center’s rules first, enforce the application’s rules next, and enforce rules about regulations or business units last. In practice, though, this sort of structuring leads to labyrinthine policies, making it very difficult to tell which rules are in force in certain conditions.
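The ordering problem described above can be sketched as a first-match rule table. In this hypothetical example, an asset carries both a location label and a compliance label, and two owners have each written a reasonable rule; the verdict flips depending purely on which rule is evaluated first. The labels, ports, and rule format are invented for illustration.

```python
# Two independently authored rules for the same port, from different owners:
ordered_rules = [
    ("NY-DC", 3306, "allow"),   # data center owner: allow MySQL inside NY-DC
    ("PCI", 3306, "block"),     # compliance owner: block MySQL to PCI assets
]

def first_match(rules, asset_labels, port):
    """First-match evaluation: the earliest applicable rule decides."""
    for label, rule_port, verdict in rules:
        if label in asset_labels and rule_port == port:
            return verdict
    return "block"

# One asset that is both in the NY data center and in PCI scope:
labels = {"NY-DC", "PCI"}
print(first_match(ordered_rules, labels, 3306))                   # prints allow
print(first_match(list(reversed(ordered_rules)), labels, 3306))   # prints block
```

Neither rule is wrong on its own; the security outcome is an accident of ordering, and with dozens of labels the number of such collisions grows quickly.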
- Decentralized rules are harder to manage when employees change jobs.
Another problem with distributed rulemaking is that it makes it harder for an organization’s IT and security teams to keep track of what rules have been created and why.
A rule author, such as an application owner, might not have documented the thinking that went into the segmentation rules. If that employee leaves the company, vital security and operational knowledge is lost.
The biggest problem with this highly flexible approach? It leaves gaps for attackers to exploit. Ransomware still gets in, despite all the time, money and effort that an organization has invested in segmenting its networks.
The structured policy control approach to segmentation
In contrast to the "maximum flexibility" approach to creating segmentation rules, a segmentation vendor might restrict the number of labels that can be created.
The number of permitted labels might be as low as four, covering just roles, applications, environments and locations. Or the number might be a little higher, but nowhere near as high as the number permitted in the maximum flexibility approach discussed above.
It turns out that restricting labels works well in practice. In fact, the largest successful deployments of Zero Trust Segmentation all take this approach, limiting labels to ten or fewer, even though these policies are protecting very complex, hybrid IT environments.
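A restricted label model like the four-label scheme mentioned above can be sketched as a fixed schema: every workload must be described with the same small vocabulary, and every rule is written against those same dimensions. The field names and label values below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    """Every workload is described by the same four closed dimensions;
    only the values vary."""
    role: str   # e.g. "web", "db", "processing"
    app: str    # e.g. "crm", "payments"
    env: str    # e.g. "prod", "test"
    loc: str    # e.g. "ny-dc", "aws-east"

LABEL_DIMENSIONS = {"role", "app", "env", "loc"}

def matches(workload, rule):
    """A rule targets workloads by any subset of the four label dimensions."""
    scope = {k: v for k, v in rule.items() if k in LABEL_DIMENSIONS}
    return all(getattr(workload, k) == v for k, v in scope.items())

crm_db = Workload(role="db", app="crm", env="prod", loc="ny-dc")
rule = {"app": "crm", "env": "prod", "port": 5432}
print(matches(crm_db, rule))   # True: the rule's labeled scope covers this workload
```

Because every author must express rules in the same four terms, two rules are always directly comparable, which is what makes central review and conflict detection tractable.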
Here’s why the structured policy control approach works so well:
- Limited labels force centralization and coordination upfront.
To make segmentation policy management work well, an organization needs a central team to analyze network traffic and jointly develop the segmentation policies to be enforced.
Application owners coordinate with database owners, who in turn coordinate with firewall managers. Because they’re working from a shared analysis and understanding of authorized traffic patterns, they can define coherent, mutually consistent segmentation rules that provide the security they need.
- It builds on the comprehensive network visibility that structured policy control can provide.
To coordinate rulemaking across different applications, databases and other resources, IT and security teams need comprehensive visibility into their organization’s network traffic. That way, they can identify the legitimate traffic that their applications and services rely on.
Once that essential traffic is identified, it becomes easier to write policies that block everything else. It also becomes easier for various stakeholders to agree on what traffic should be permitted. When stakeholders can work from a shared understanding of network usage, collaboration becomes straightforward.
- It provides further simplicity by denying all traffic by default for Zero Trust security.
Instead of giving stakeholders options to block traffic, allow traffic, or override preceding segmentation rules, a structured policy approach can begin by blocking all traffic by default for any system or environment. Then, working from a visualization map showing all legitimate traffic across the organization, IT and security teams can write policies to allow just the traffic that the application, database, or service requires.
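The default-deny model just described can be sketched as an allow-list derived from observed legitimate flows: the policy is exactly the traffic the organization has identified as necessary, and everything else is blocked. The flows below are illustrative.

```python
# Flows identified as legitimate from a traffic map of the environment:
observed_flows = {
    ("web", "db", 5432),     # CRM web tier talks to its database
    ("web", "cache", 6379),  # ...and to its cache
}

# Default-deny policy: allow exactly the legitimate traffic, nothing more.
allow_list = set(observed_flows)

def verdict(source, dest, port):
    return "allow" if (source, dest, port) in allow_list else "block"

print(verdict("web", "db", 5432))   # allow: legitimate application traffic
print(verdict("web", "db", 22))     # block: SSH was never observed or approved
```

Note that there is only one rule type here. There is nothing to override and no ordering to reason about, which is where much of the simplicity of this approach comes from.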
By trusting nothing by default, structured policy control vendors provide the rigorous Zero Trust security recommended by the National Institute of Standards and Technology (NIST), the U.S. White House in its Executive Order on Improving the Nation’s Cybersecurity, and other IT security authorities.
- Because rulemaking is centralized, IT and security teams can test and map rules before enforcing them.
Another advantage of a centralized approach to rulemaking is that, because there is one coherent model for rules, the entire model can run in test mode. Also, communication pathways between workloads can be visually mapped before the rules are implemented, alerting teams to potential issues. This gives IT and security teams a chance to troubleshoot and fine-tune rules before enforcing them.
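Running the whole model in test mode amounts to replaying recorded traffic through the draft policy without enforcing it, then reporting what would have been blocked. Here is a minimal sketch of that idea; the policy and flow data are illustrative.

```python
# Draft default-deny policy: only this flow is explicitly allowed so far.
allowed = {("web", "db", 5432)}

# Traffic recorded from the live environment:
recorded_traffic = [
    ("web", "db", 5432),     # expected application traffic
    ("batch", "db", 5432),   # a nightly job the policy authors overlooked
]

def dry_run(policy, traffic):
    """Report the flows the draft policy would block, without blocking them."""
    return [flow for flow in traffic if flow not in policy]

would_block = dry_run(allowed, recorded_traffic)
print(would_block)   # the forgotten nightly job surfaces here, before enforcement
```

Surfacing the overlooked nightly job in a report, rather than as an outage, is exactly the troubleshooting opportunity that centralized test mode provides.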
Conclusion: Flexibility vs. scalability
The choice between these two approaches becomes stark when you are implementing rules at scale.
Even in smaller organizations, the highly flexible approach quickly becomes untenable. Security gaps inevitably persist amid a thicket of independently developed rulesets. Because there is no unified coordination, attacks get through, or a well-intentioned security rule accidentally blocks mission-critical traffic.
In contrast, by enforcing a design discipline up front, the structured policy control approach to segmentation helps IT and security teams simply and efficiently protect any kind of IT environment, from start-ups to the largest, most complex global networks.
To learn about the Illumio solution for Zero Trust Segmentation:
- Get the Zero Trust Impact Report for ESG research on how organizations are approaching Zero Trust.
- Download our in-depth guide How to Build a Micro-Segmentation Strategy in 5 Steps.
- Get a free copy of the Forrester New Wave: Microsegmentation, Q1 2022 in which Illumio is named a Leader.
- Schedule a no-obligation demo and consultation with our Zero Trust Segmentation experts.