Adaptive Segmentation | micro-segmentation | June 21, 2020

Whitelist vs. Blacklist


One of the innate characteristics of carbon-based bags of water is the need to organize our surroundings. If we really want to understand anything, we need to look at how it’s organized first. Now, when folks really love their organization they call it “culture,” and when they hate it, they blame it on others.

In Computer Science we organize data all over the place. For example, security is built on the idea of organization as it relates to relationships and whether we allow or deny them. On one side of the organizational fence we have blacklists and on the other side, we’ve got whitelists. Remember, all we are trying to do is permit or deny traffic. 


Blacklists are part of a threat-centric model in which you allow all data to flow, except for exactly what you say should be stopped. The problem here is that zero-day attacks are, by definition, not yet known, so they are allowed through by default and pass invisibly, as false negatives.

Blacklists also tend to be resource-intensive. Reading in the entire list and then making a monolithic permit-or-deny decision takes a lot of CPU cycles. And keeping them up to date requires either regular manual updates or a dynamic feed service.
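The default-allow behavior described above can be sketched in a few lines. This is an illustrative toy, not a real filter; the addresses come from the reserved documentation ranges, not actual threat data:

```python
# Minimal sketch of a blacklist (threat-centric) filter:
# everything is allowed unless it matches a known-bad entry.

BLACKLIST = {"198.51.100.23", "203.0.113.99"}  # example "known-bad" source IPs

def blacklist_allows(src_ip: str) -> bool:
    """Default-allow: deny only what is explicitly listed."""
    return src_ip not in BLACKLIST

# A zero-day attacker from an unlisted address sails right through:
print(blacklist_allows("192.0.2.1"))       # True  (allowed by default)
print(blacklist_allows("198.51.100.23"))   # False (explicitly denied)
```

The weakness is visible in the last two lines: anything not already on the list is trusted.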

A whitelist follows a trust-centric model that denies everything and permits only what you explicitly allow, which is a better choice in today's data centers. Let's face it: the list of what you do want to connect in your data center is much smaller than the list of what you do not, right? This immediately cuts down on, if not eliminates, false positives.

Whitelists are light on system resources, which makes them perfect for servers. They read the metadata of a flow, index it by file name, then permit or deny at the local source. Simple and quick. However, the Achilles’ heel for whitelists is managing them. Consider that you are basically managing every possible flow of traffic to and from every possible workload in every possible combination. Whitelists are great for sure, but man alive do you need a centralized controller. 
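Flipping the default is all it takes to turn the sketch above into a trust-centric filter. The flow tuples below (source, destination, port) are made-up tier-to-tier examples, not a real policy:

```python
# Minimal sketch of a whitelist (trust-centric) filter:
# everything is denied unless it matches an explicitly permitted flow.

WHITELIST = {
    ("10.0.1.5", "10.0.2.8", 443),   # web tier -> app tier, HTTPS
    ("10.0.2.8", "10.0.3.4", 5432),  # app tier -> database, PostgreSQL
}

def whitelist_allows(src: str, dst: str, port: int) -> bool:
    """Default-deny: permit only what is explicitly listed."""
    return (src, dst, port) in WHITELIST

print(whitelist_allows("10.0.1.5", "10.0.2.8", 443))  # True  (explicitly permitted)
print(whitelist_allows("10.0.1.5", "10.0.2.8", 22))   # False (denied by default)
```

The lookup is a single set-membership check, which is why the model is so light on resources; the hard part, as noted above, is keeping that set of tuples current across every workload, which is exactly what a centralized controller is for.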


Of course, there is a gray area. As with any IT example, there should always be a shout out to Kuipers’ axiom, which states, “In most ways, and at most times, the world changes continuously.” Or as we call it, “It depends.” 

Access Control Lists are the “it depends” in this equation because technically they can be used as either black or white lists. (If you just started singing the Michael Jackson tune, you’re awesome!) As any Networking 101 student can attest, ACLs have an implicit “deny any any” at the end, which makes it a whitelist. However, as a common best practice we put DENY statements in the ACL with a “permit any any” at the end, which turns it into a blacklist.
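The first-match-wins behavior that lets one ACL play both roles can be sketched like this. The rule format and addresses are simplified illustrations, not any vendor's syntax:

```python
# Sketch of first-match ACL evaluation. Each rule is (action, src, dst);
# "any" is a wildcard. Falling off the end hits the implicit "deny any any",
# which makes a permit-only list behave as a whitelist. Appending
# ("permit", "any", "any") after deny rules flips it into a blacklist.

def acl_evaluate(rules, src, dst):
    for action, r_src, r_dst in rules:
        if r_src in ("any", src) and r_dst in ("any", dst):
            return action == "permit"
    return False  # implicit "deny any any"

whitelist_acl = [("permit", "10.0.1.5", "10.0.2.8")]

blacklist_acl = [("deny", "203.0.113.99", "any"),
                 ("permit", "any", "any")]

# Same unlisted flow, opposite outcomes:
print(acl_evaluate(whitelist_acl, "10.0.9.9", "10.0.2.8"))  # False: implicit deny
print(acl_evaluate(blacklist_acl, "10.0.9.9", "10.0.2.8"))  # True: trailing permit
```

The only difference between the two lists is the trailing "permit any any", which is the whole "it depends".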


Security is like a great piece of red velvet cake: layers make all the difference. No single solution is going to be the end-all, be-all. Honestly, blacklists are a lot less work in theory. The problem is that as threats increase, blacklists become less and less effective. They are more prone to errors and require more maintenance in the long term.

Blacklists have their place at the perimeter of the network for north-south data flows, where the boundaries are more static and they act as a coarse-grained filter. But inside the data center is where the majority of traffic flows. Fine-grained control is required here to protect workloads that are moving all over the place: changing IP addresses, spinning applications up and back down, and so on. Whitelists are the perfect solution for east-west data flow. By default, they trust nothing.

My Dad used to say, “When all you have is a hammer, everything is a nail.” In today’s massively scalable and flexible data center, it’s time to put the hammer away and go grab the precision tools.
