I spend a lot of time discussing with organizations the world over how to achieve better breach protection with security segmentation that delivers simplicity and efficiency for networking and security operations teams.
Invariably, however, our conversations turn to the pain of how data centers have been segmented to date. As we well know, it is usually done with the devil we know: large data center firewalls, which present huge challenges and expense when used for anything more than basic zoning.
Until recently, we had no choice but to put up with the pain these expensive devices thrust on us. Firewalls, let’s not forget, were designed decades ago for very coarse-grained zoning and protection at the network and data center edge, not for segmentation inside the data center.
I wanted to share some of the challenges that we help organizations put behind them as we strategize on protecting their data and assets.
Undermined by complexity
We’re accustomed to working with IP addresses since that is how networking has always operated. However, if we reflect soberly on them, we realize they are complicated. We don’t communicate or think in them (nightmares don’t count), and, by themselves, they give no real context about the host.
Given the complexity of creating and managing firewall rulesets based on IP addresses, we have to wonder why greater simplicity has not been brought to bear.
For this reason, we’ve spent countless hours thinking about and perfecting policy abstractions that actually scale, such as human-understandable labels and tags: a web server is labeled “Web Server,” not 192.168.155.99. We also use pure whitelist models that provide the same power as rulesets mixing allow and deny statements, but don’t suffer from rule-ordering complexity.
This frees us to create easy-to-understand segmentation policies with clear labels, keeping the focus on better segmentation outcomes, not untangling IP address and rule bloat and complexity.
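To make the idea concrete, here is a minimal sketch of a label-based, whitelist-only policy model. The labels, IPs, and functions are illustrative inventions of mine, not Illumio’s actual policy engine or API:

```python
# Hypothetical sketch: workloads carry human-readable labels, and policy
# is written against labels, never raw IP addresses.
workloads = {
    "192.168.155.99": "Web Server",
    "192.168.155.12": "Web Server",
    "10.0.4.7": "Database",
}

# Pure whitelist: only allow rules exist, so rule ordering never matters.
allow_rules = [
    ("Web Server", "Database", 5432),  # web tier may reach the DB tier
]

def is_allowed(src_ip: str, dst_ip: str, port: int) -> bool:
    """Resolve IPs to labels, then check the label-based whitelist."""
    src, dst = workloads.get(src_ip), workloads.get(dst_ip)
    return (src, dst, port) in allow_rules

print(is_allowed("192.168.155.99", "10.0.4.7", 5432))  # True
print(is_allowed("10.0.4.7", "192.168.155.99", 5432))  # False: no rule
```

Note that adding a second web server changes only the `workloads` mapping; the policy itself is untouched, which is the point of the abstraction.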
When more is more
The point of segmentation is to protect valuable assets and data in an enterprise from illicit access – be it an insider or from attacker lateral movement.
This being the case, in conversations I have, there always tends to be a natural inclination to protect more by making segmentation finer-grained.
The reality is that when we attempt this finer-grained segmentation (or micro-segmentation) with firewalls, it quickly becomes too much to stay on top of. This is because segmentation boundaries become more granular as they move closer to servers and workloads. What’s more, server/endpoint disaggregation is now the norm, with microservices models breaking applications into many individual workloads, and firewall rules for segmentation must protect each one. In both cases, the number of firewall rules grows far faster than the number of workloads – whether teams are ready to manage them or not.
It isn’t uncommon for organizations to manage tens of thousands of firewall rules for segmentation.
A large customer was forced to manage some 500,000 IP-address-based firewall rules prior to Illumio.
We actually did the math on the complexity of scaling segmentation and captured it in Kirner’s equation.
Explained simply, some 800 workloads (servers, containers, or VMs) will require roughly 25,000 unique firewall rules. If a company has 2,500 workloads, the number mushrooms to 127,000 rules to manage. At 10,000 workloads, it explodes to 1.1 million rules.
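The figures above closely track a power-law growth curve. The quick check below is my own approximation of the trend from the three published data points, not necessarily the exact form of the equation:

```python
# Back-of-the-envelope fit: the quoted figures sit close to
# workloads ** 1.5. This power law is my approximation of the trend,
# not necessarily the exact equation referenced above.

def approx_rules(workloads: int) -> int:
    return round(workloads ** 1.5)

for n in (800, 2_500, 10_000):
    print(f"{n:>6} workloads -> ~{approx_rules(n):,} rules")
# ~22,627 / ~125,000 / ~1,000,000 - close to the 25,000, 127,000,
# and 1.1 million figures quoted above.
```

Whatever the precise exponent, the takeaway is the same: rule counts grow much faster than workload counts.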
Let that sink in, as it’s the inevitable direction you are headed. Data center firewalls would force a team to stay on top of more than one million rules for 10,000 workloads.
While few organizations have a million rules in place, I never speak to an organization that doesn’t find managing burgeoning and complex segmentation rulesets an enormous challenge fraught with tremendous security risk.
This is not just more to manage but also more for you to audit – and more for your auditors to find fault with.
In sum: more, finer-grained segments amount to firewalls with more rules, requiring more people to manage them – all equating to more risk.
More is more. And not in a good way.
The Oops factor
We can’t forget human error. Invariably teams will make mistakes in creating firewall rules. Industry averages tell us the number of bugs per 1,000 lines of code is between 0.5 and 25.[1]
We don’t need to be reminded that these mistakes create misconfigurations that break the applications organizations are trying to protect.
Also, when teams inherit these bloated rulesets, seldom is there appetite to clean up and remove rules for fear of an unintended consequence – breaking an application or leaving something wide open for attackers to exploit.
We most often think about firewalls for segmentation, but at times we see organizations rely on their switching to implement segmentation – through extensive access control lists (ACLs).
A Fortune 300 SaaS pioneer faced a crisis when switches programmed with extensive ACLs ran out of TCAM memory.
As these ACLs or rulesets grow, I often hear of concerns related to TCAM exhaustion. TCAM (or Ternary Content Addressable Memory) lets networking gear do efficient address lookups for ultra-fast packet filtering and routing. However, like all types of memory, TCAM is finite, and as ACLs are added to a device, its capacity is quickly reached. Once it is, performance slows and the network becomes a bottleneck.
The “solution”? Either stop provisioning new servers or workloads, or buy larger networking switches with more TCAM. You can probably see why I placed solution in quotes.
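A quick bit of arithmetic shows how fast ACL-based segmentation runs into a TCAM ceiling. The capacity figure and host counts below are made-up illustrations (real TCAM sizes vary widely by platform), but the multiplication is the point:

```python
# Illustrative arithmetic only: the 4,096-entry budget is a made-up
# example figure; actual TCAM capacity varies widely by switch model.
TCAM_ENTRIES = 4096  # hypothetical per-switch ACL capacity

def acl_entries(src_hosts: int, dst_hosts: int, services: int) -> int:
    # IP-based ACLs often need one entry per (source, destination,
    # service) combination, so demand grows multiplicatively.
    return src_hosts * dst_hosts * services

demand = acl_entries(src_hosts=40, dst_hosts=20, services=6)
print(demand, "entries needed vs", TCAM_ENTRIES, "available")
# 40 * 20 * 6 = 4,800 entries - already past the budget with only
# 60 hosts, which is how ACL-based segmentation exhausts TCAM.
```

With numbers like these, it’s easy to see why adding finer-grained segments to switch ACLs hits the wall so quickly.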
We’ll leave it here for now, but I intend to share other common segmentation challenges that are created using status quo firewalls and infrastructure in the coming weeks, so stay tuned.
[1] Andrew Habib and Michael Pradel. 2018. How Many of All Bugs Do We Find? A Study of Static Bug Detectors. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE 2018).