
Exploring the Use of NGFW Functionality in a Microsegmentation Environment

For nearly two decades, next-generation firewalls (NGFWs) have been an essential security tool. But as today’s networks grow increasingly complex, the perimeter protection NGFWs provide solves a problem that matters less and less.

Illumio is researching the possibilities of implementing NGFW features in a microsegmentation environment, combining the two technologies to offer the kind of security required by complex networks.

In part one, I covered the history, value, and challenges of next-generation firewalls (NGFWs).

In this second article, I’ll explore the “what if?” scenario of embedding a subset of NGFW functionality into a microsegmentation solution. I’ll walk through different use cases and look at which NGFW features might be suitable for each.

NGFWs work for north-south traffic – but struggle with east-west

The NGFW was designed around the idea of protecting the perimeter of a network, and largely around protecting against threats in incoming traffic. In the network world, this type of traffic is often referred to as “north-south.” This terminology stems from the widespread practice of drawing a network with the Internet “bubble” at the top, with traffic flowing into the data center traveling from top to bottom, or north to south. Traffic inside the data center is typically drawn as moving laterally, left to right or right to left, and thus often termed “east-west.”

Using this terminology, it can be said that there is a powerful use case for NGFWs in a north-south role, as I discussed in part one. The use case for east-west is a little less certain. That second statement probably raised the hairs on the back of your neck, so let me be a little more specific.

Firewalls cost three kinds of money: hardware, maintenance/support, and configuration/monitoring. Despite the high cost in all three categories, the ROI for NGFWs is pretty clear-cut for the north-south use case. When it comes to east-west, it turns out that only a subset of the full NGFW capability set is relevant, but the vendors don’t give you a discount for not using the full feature set. It’s often difficult to justify purchasing a full NGFW appliance to use only half the functionality, even more so where the NGFW feature set is not mandated by law or regulation.

NGFWs for south-north traffic

That’s two of the good use cases for an NGFW, but there’s actually a third that people rarely consider, except in passing: the south-north use case, or in English, controlling outbound traffic from inside the network. The NGFW vendors talk about it, but only a little. And most organizations, while aware of the threat of unrestricted outbound connections, do remarkably little to actually address it. In working with many customers over the years, I’ve found that most organizations do not even have a process in place for their internal application owners to request outbound controls at the network border.

My job is basically R&D, with a heavy focus on the “R” part. In that vein, let’s do a thought experiment. For a moment, consider the north-south problem solved. Not solved in the sense that a 100% flawless solution exists, but in the sense that most organizations no longer consider that path to be the primary avenue of attack into their networks. Instead, let’s think about how much more secure networks could be if you could implement selected NGFW features in your microsegmentation solution and improve both your east-west and south-north controls, without buying more equipment and without fighting the internal organizational processes that keep you from taking advantage of outbound NGFW features.

The south-north and east-west use cases are different, but there is considerable overlap. Additionally, many north-south features are simply not relevant to either of these. Let’s begin with the east-west use case.

As I said earlier, there is certainly a use case for a limited subset of east-west NGFW controls. The ROI for a full-blown appliance (or virtual appliance) might be questionable, given the cost, but the need is nevertheless real. If your network contains PII, HIPAA, or PCI data, you’re almost certain to be subject to laws and regulations regarding protection of that data. In many cases, this includes an explicit requirement to implement traditional NGFW services such as DLP (Data Loss Prevention) and IDS/IPS (Intrusion Detection/Prevention System). Even if there is no mandate, it remains a best practice. Application ID, that is, the ability to block or allow traffic based on the actual type of traffic rather than on port and protocol, is also a powerful and desirable tool to prevent attacks and data exfiltration.

For the south-north use case, a few additional features might be helpful. DLP is probably still needed, and Application ID is equally useful for this use case, but to that I’d add URL filtering and the ability to control traffic based on destination IP reputation and geography. Sure, your border NGFW can already do this, but as I pointed out earlier, there’s often no way for an application owner to take advantage of these features if the border devices are not under their control. And they rarely are in a large data center environment.

Most of the other NGFW services have limited value for east-west or south-north. DDoS protection and QoS make little sense inside a network. Likewise, modern AV software running within the OS is probably more efficient than a network-based solution, so network-based anti-virus is probably not on the agenda.

The performance of NGFW features on endpoint devices

It’s time to talk about the performance implications of NGFW features running on endpoints. If you recall, part one mentioned NGFW appliances being almost supercomputer-class systems with lots of specialized hardware to get respectable performance. It obviously follows that a substantial performance penalty would be imposed on individual servers when implementing the same functionality. Luckily, this appears to be one of those times when intuition goes out the window. Let’s talk about why.

IDS/IPS is a great place to start. Of all the NGFW services, IDS/IPS is considered to be one of the "heaviest," meaning it consumes a disproportionate amount of resources and is one of the reasons for the large amount of custom silicon in an NGFW appliance. If I’m protecting a moderate-sized data center of 1,000 workloads with an IDS/IPS solution, I probably need to support IDS/IPS signatures for at least a dozen different operating systems: Windows 2008, 2012, 2016, and 2019; at least a half dozen variations and versions of CentOS, RedHat, and Ubuntu; plus possibly Solaris or AIX if I’m in healthcare or banking. Each of those 1,000 servers runs at least one service I will want to watch, possibly as many as three or four different services each, all of which have potential vulnerabilities. And with a dozen operating systems, I might be running a dozen different versions of each of those three or four services, each of which has its own vulnerabilities.

In short, I am watching for somewhere between 10,000 and 100,000 vulnerability signatures for those thousand machines. And I am looking for signs of those in every single packet that flows through my NGFW network device, on every possible port on which they may be operating. This is clearly not a load we want to impose on every server in the data center.

In practice, we don't need to. There is no reason to look for Windows vulnerabilities on a Linux host. There is no need to look for apache2 vulnerabilities on a machine running NGINX. There is no need to look for Application X version 1.0, 1.1, 1.2, 1.3, 2.0, or 2.1 vulnerabilities on a system running Application X version 2.2.

Instead of looking for 10,000 to 100,000 vulnerabilities in every single packet, we look for maybe four. Not 4,000. Four. And four is a solvable problem.

How? Because by virtue of having an agent on every server, we have full visibility into the OS, which patches have and have not been applied, what software (and what versions of that software) is installed and running, and specifically which ports the relevant processes are bound to. We look for vulnerabilities specific to the OS and software versions detected, only on the ports where those processes are listening. We reduce the search space by something like four orders of magnitude. And four orders of magnitude is a spectacularly large number in computer science. It’s the difference between hard and easy.
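To make that concrete, here is a minimal sketch in Python of what context-aware signature selection might look like. Everything in it (the inventory format, the signature fields, the signature IDs) is hypothetical and exists only to illustrate how host context shrinks the search space before any packet is inspected.

```python
# Hypothetical signature feed: in reality this would contain tens of
# thousands of entries spanning every OS, application, and version.
SIGNATURE_DB = [
    {"id": "SIG-0001", "os": "windows", "app": "iis",     "versions": {"10.0"},   "port": 443},
    {"id": "SIG-0002", "os": "linux",   "app": "apache2", "versions": {"2.4.29"}, "port": 80},
    {"id": "SIG-0003", "os": "linux",   "app": "nginx",   "versions": {"1.18.0"}, "port": 80},
]

def signatures_for(workload):
    """Return only the signatures that could possibly apply to this host."""
    selected = []
    for sig in SIGNATURE_DB:
        if sig["os"] != workload["os"]:
            continue                      # no Windows signatures on a Linux host
        app = workload["apps"].get(sig["app"])
        if app is None:
            continue                      # no apache2 signatures on an NGINX box
        if app["version"] not in sig["versions"]:
            continue                      # wrong version, vulnerability not present
        if sig["port"] not in app["ports"]:
            continue                      # inspect only ports the process is bound to
        selected.append(sig)
    return selected

# Example: an Ubuntu host running only NGINX 1.18.0 on port 80.
workload = {"os": "linux", "apps": {"nginx": {"version": "1.18.0", "ports": {80}}}}
print([s["id"] for s in signatures_for(workload)])   # -> ['SIG-0003']
```

The filtering itself is trivial; the point is that the per-packet matching engine on each workload only ever loads the handful of signatures that can actually apply to it.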

Similar strategies could be applied to services like DLP and URL filtering. It’s not necessary to filter every packet on every server for restricted DLP content, nor to hold massive databases of URLs or IP information for public addresses on every server. In the case of DLP, you search only for specific content on a very specific set of servers based on workload labels, in the same manner that segmentation policy is applied. For URL filtering, the large database of IP characteristics can be kept in the central policy management system, fetched over a low-latency LAN connection when needed, and cached locally for subsequent lookups. Most servers talk to the same relatively small set of servers over and over.
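As a sketch of the caching idea, a host-level agent might only consult the central policy manager on a cache miss and answer everything else locally. The endpoint URL, response format, and TTL below are invented for illustration; the pattern, not the API, is the point.

```python
import json
import time
import urllib.request

# Hypothetical reputation endpoint on the central policy management system.
POLICY_SERVER = "https://policy-manager.example.internal/api/reputation"
CACHE_TTL_SECONDS = 3600

_cache = {}  # destination -> (verdict, timestamp)

def lookup_reputation(destination):
    """Return a cached verdict if still fresh; otherwise ask the central server once."""
    entry = _cache.get(destination)
    if entry and time.time() - entry[1] < CACHE_TTL_SECONDS:
        return entry[0]                       # served from the local cache

    # Cache miss: one low-latency LAN round trip to the policy manager.
    with urllib.request.urlopen(f"{POLICY_SERVER}?dest={destination}") as resp:
        verdict = json.load(resp)

    _cache[destination] = (verdict, time.time())
    return verdict

# Because most servers talk to the same small set of peers over and over,
# nearly every lookup after warm-up is a local dictionary hit.
```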

NGFW features for a microsegmentation solution

When adding NGFW features to a microsegmentation solution, one of the biggest benefits is that, just like firewall policy, the NGFW features are applied surgically, precisely where you need them, rather than to whole VLANs or subnets as a group. A label-based policy lets the application owner turn on specific NGFW features only for the servers that need them, performing only the required inspection, instead of painting the data center with a broad brush. This keeps overhead to the absolute minimum required to meet your specific security needs and allows you to balance security, performance, and cost.
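The sketch below shows the shape such a label-driven policy might take. The label names, feature flags, and matching rule are all made up for illustration; what matters is that NGFW features attach to label sets, not to network constructs.

```python
# Hypothetical label-based policy: NGFW features are enabled only for the
# workloads whose labels match, never for whole VLANs or subnets.
POLICIES = [
    {   # card-data tier: watch the payments databases for exfiltration
        "match_labels": {"app": "payments", "env": "prod", "role": "database"},
        "features": {"dlp": ["pci_pan"], "ids_ips": True},
    },
    {   # ordinary web tier: application ID plus outbound URL filtering only
        "match_labels": {"app": "storefront", "role": "web"},
        "features": {"app_id": True, "url_filtering": "default_outbound"},
    },
]

def features_for(workload_labels):
    """Merge the features of every policy whose labels all match the workload."""
    enabled = {}
    for policy in POLICIES:
        if all(workload_labels.get(k) == v for k, v in policy["match_labels"].items()):
            enabled.update(policy["features"])
    return enabled

print(features_for({"app": "payments", "env": "prod", "role": "database", "loc": "us"}))
# -> {'dlp': ['pci_pan'], 'ids_ips': True}
```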

Remember, the objective here is not to replace your border NGFW devices. Rather, it is to selectively fill in the gaps where existing NGFW solutions don’t make architectural or financial sense with a powerful subset of NGFW features running on the servers themselves. This approach allows application owners to “own” their outbound security where it might otherwise not be possible, as well as to offer these features in situations that are otherwise cost-prohibitive using traditional solutions.

Looking ahead

To wrap up, let’s look even further into the future.

TLS 1.3 was ratified in 2018 and is slowly becoming the next standard for encrypted web and other services. Your initial reaction to this might be, “Not my problem” or maybe “So what?” I’d argue that it’s actually extremely relevant. An NGFW cannot provide most of its services without Deep Packet Inspection (DPI). And for DPI to be in any way meaningful, the data must be in cleartext, not encrypted.

When NGFWs first hit the market, only a tiny fraction of web traffic was encrypted. As time went on, more and more traffic moved to HTTPS. Today, nearly 100% of all web traffic is encrypted, and therefore cannot be analyzed for malware, viruses, data exfiltration, or anything else an NGFW looks for. The solution that was developed for this is called TLS MiTM (man-in-the-middle).

Setting up TLS MiTM is a bit tedious, though straightforward in concept, and there are a number of moving parts. First, the organization creates an internal certificate authority (CA) certificate. The public certificate is pushed to all systems (laptops, desktops, servers, etc.) within the organization, and each operating system is configured to trust it for all outbound communications, regardless of destination. The private key is then distributed to your perimeter NGFW devices, which are configured as transparent web proxies.

When a user (or a server or any other device) makes an outbound connection to an external website, let’s say gmail.com for this example, the perimeter NGFW intercepts the connection and presents a certificate signed by the organization’s internal CA instead of Google’s real certificate. Because the client trusts that CA, the session is established, and the NGFW, holding the private key, can decrypt and fully analyze the traffic. The NGFW terminates the internal connection, originates a new TLS connection to gmail.com using the real Google certificate, and proxies the content between the two connections, so it can view and analyze everything even though both legs are encrypted.
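A heavily simplified Python sketch of that terminate-and-re-originate mechanic is below. It is nothing like a production NGFW: a real proxy mints a per-destination certificate signed by the internal CA on the fly, and the certificate file names, port, and fixed destination here are invented purely to keep the example short. What it does show is the two separate TLS legs and the point at which the proxy sees plaintext.

```python
import socket
import ssl
import threading

# Hypothetical file names: in a real deployment these would belong to the
# organization's internal CA, and the proxy would generate a certificate
# per destination rather than reusing one static certificate.
INTERNAL_CERT = "internal_cert.pem"
INTERNAL_KEY = "internal_key.pem"
UPSTREAM_HOST = "gmail.com"   # the illustrative destination from the article

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(client_tls):
    # Second leg: originate a brand-new TLS session to the real destination,
    # validated against the public CA bundle like any ordinary client.
    upstream_ctx = ssl.create_default_context()
    raw = socket.create_connection((UPSTREAM_HOST, 443))
    upstream = upstream_ctx.wrap_socket(raw, server_hostname=UPSTREAM_HOST)
    # Here the proxy sees plaintext on both legs and could hand each direction
    # to DLP, IDS/IPS, or URL-filtering engines before forwarding.
    threading.Thread(target=pump, args=(client_tls, upstream), daemon=True).start()
    pump(upstream, client_tls)

def main():
    # First leg: terminate the internal connection with the organization's certificate.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(INTERNAL_CERT, INTERNAL_KEY)
    listener = socket.socket()
    listener.bind(("0.0.0.0", 8443))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        client_tls = server_ctx.wrap_socket(conn, server_side=True)
        threading.Thread(target=handle, args=(client_tls,), daemon=True).start()

if __name__ == "__main__":
    main()
```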

While cumbersome and CPU-intensive, this method has worked reasonably well for most services for about a decade using SSL, then TLS 1.0, 1.1, and 1.2.

So far, so good. But TLS 1.3 changes the game. First, TLS 1.3 mandates Perfect Forward Secrecy, in the form of per-connection Diffie-Hellman key exchanges. Because of this, a passive device has no way to decrypt the payload, even with access to the private key in use. With TLS 1.3, it’s mandatory to insert a device into the stream and proxy the traffic. Second, TLS 1.3 deprecates the lower-security ciphers, removing the ability for a proxy device to demote a proxied connection to TLS 1.2, a common strategy for saving compute resources in the NGFW (because lower-security ciphers typically require less computation).
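For a sense of how little room is left for that demotion trick, consider a client (or server) that pins its minimum protocol version. This uses the standard Python ssl module, nothing vendor-specific, and it simply refuses any handshake below TLS 1.3:

```python
import ssl

# A client context that will only negotiate TLS 1.3. A middlebox that tries
# to demote the session to TLS 1.2 causes the handshake to fail outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```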

If the history of cryptography has taught us anything, it is that old, trusted standards tend to be found vulnerable over time with almost 100% certainty. The current practice of demoting TLS 1.3 connections to TLS 1.2, whether to allow passive decryption or simply to save resources, is on a timer, just waiting for TLS 1.2 to be deprecated. When that day comes, a great many passive inspection devices will become expensive paperweights, while many inline solutions quickly become overwhelmed by being forced to use more computationally expensive cryptography.

A dirty little secret of the NGFW world is that, at least at the time of this writing, your WebSocket traffic is probably not being inspected for threats of any kind. Why? Because a typical NGFW can’t decrypt the traffic in real time. WebSocket traffic must be viewed within your browser (using Developer Tools) or captured and decrypted after the fact using something like Wireshark (assuming you have the private keys) in order to inspect the payload. WebSockets are becoming increasingly common in web applications, as the technology provides a great solution for JavaScript applications to move data back and forth between your browser and a web server. Quite literally anything can be moved across a WebSocket connection, and it’s completely opaque to your NGFW.

Last, let’s not forget other pervasive new technologies such as the use of QUIC for web traffic. While QUIC is a powerful new tool for getting traffic to your browser faster and more efficiently, it runs over UDP and builds encryption into the transport itself rather than using TLS over TCP the way an in-line proxy expects. This means that your in-line NGFW must either block all QUIC traffic (forcing a downgrade to traditional TLS) or allow the traffic to go through uninspected. The first solution reduces the quality of the user experience, and the second exposes the organization to risk. Neither is ideal.

Handling some NGFW tasks at the workload level helps prolong the lifespan of the investment in existing NGFW appliances. Offloading a percentage of the computationally expensive inspection to individual workloads lets a customer delay an otherwise necessary firewall upgrade while at the same time bringing the benefits of Zero Trust to portions of the network where a traditional appliance might not make technical or financial sense.
