Security and networking have long been a source of conflicting priorities. When designing either a traditional data center or a hybrid cloud fabric, the priority of the network architecture is reliable and resilient delivery of traffic. The network is perhaps the most critical resource in any data center or cloud architecture: workloads, clients, and databases cannot communicate with each other unless there is a network between them. All networks operate by making forwarding decisions based on packet headers and various other metrics. What’s more, routers and switches exchange Layer 2 and Layer 3 information, both of which are largely agnostic to the application-specific details contained deep in the data payload of packets. Networks are primarily focused on moving traffic reliably and quickly, not on what application data that traffic contains.
So when it comes to securing those applications – and their workloads – why do we still consider approaches that are tied to the network? Let’s explore how to evolve your approach so that the network is no longer a roadblock to agile workload delivery, automation, and security.
Inner Workings of Modern Workloads
Workloads that communicate across a network are designed around application-specific priorities: what a workload is doing matters more than what the underlying network fabric looks like. Clients communicate with workloads, and workloads communicate with databases, based on concepts that do not usually depend on the details contained in network packet headers.
Unlike traditional, bare-metal workloads, modern workloads are largely abstracted above the underlying network and server resources. It is assumed that a workload’s presence on any underlying resource is transient and that the workload can be dynamically live-migrated across hosts, data centers, and clouds, usually without human intervention. Because workloads are abstracted above the underlying resources, it is no longer realistic to treat a workload’s IP address as a useful form of identity for the life of that workload. A workload may have many different IP addresses across its lifecycle, and this directly affects how security boundaries are defined across modern networks.
Traditional Network Segmentation and Firewalls
Changes to networks are traditionally slow, by design, due to the critical nature of network fabrics. Many data center networks are largely flat, and many public cloud network fabrics contain only coarse levels of segmentation. When networks are segmented in any environment, it is usually done for network-specific priorities, such as to create isolation across broad categories of resources, like a DMZ, a WAN segment, a Campus segment, or the traditional Web/App/Dev segments. Other reasons for segmenting a network are to optimize network performance, increase throughput, implement path redundancy, or to make tasks like route summarization, Spanning Tree, and ECMP more efficient.
Network segments are traditionally implemented by creating VLANs and subnets, either across legacy “underlay” networks or across “overlay” networks implemented using SDN controllers and tunneling such as VXLAN. Regardless of whether the topology is underlay or overlay, all of these network segments terminate at routers or switches, either hardware or virtual, and security across those segments is commonly implemented using network firewalls.
Firewalls traditionally view a segment either as a group of IP ranges with associated TCP/UDP ports, or as zones (collections of segments) with the ports to block or allow between the relevant IP ranges. Network firewalls do not implement security based on the application-specific content of a packet’s data payload; they treat a packet’s source or destination IP address and port as the workload’s identity. Even with modern “next generation” firewalls, which can make decisions based on application data contained deep in the packet, the majority of firewall rules are still written against traditional IP and port ranges. Old habits die hard.
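To make this concrete, here is a minimal sketch (in Python, with invented rule data) of how a traditional firewall evaluates traffic. Notice that the only “identity” it consults is the packet’s addresses and ports; nothing in the rule table knows which application is actually behind an address.

```python
from ipaddress import ip_address, ip_network

# A traditional firewall rule: source/destination IP ranges plus a port.
# Nothing here identifies the *application* -- only addresses and ports.
RULES = [
    {"src": "10.1.0.0/16", "dst": "10.2.0.0/16", "dport": 3306, "action": "allow"},
    {"src": "0.0.0.0/0",   "dst": "0.0.0.0/0",   "dport": None, "action": "deny"},
]

def evaluate(src_ip: str, dst_ip: str, dport: int) -> str:
    """Return the action of the first rule matching this packet's tuple."""
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule["src"])
                and ip_address(dst_ip) in ip_network(rule["dst"])
                and rule["dport"] in (None, dport)):
            return rule["action"]
    return "deny"  # implicit default-deny

print(evaluate("10.1.5.9", "10.2.0.20", 3306))  # allow
print(evaluate("10.9.0.1", "10.2.0.20", 3306))  # deny
```

The fragility is built in: if the workload behind `10.1.5.9` live-migrates and receives a new address, the first rule silently stops matching, even though nothing about the application or the intended policy has changed.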
Breaking with Traditions
The DevOps philosophy places a strong emphasis on speed of deployment and on automation, yet, as mentioned, changes to network segments and firewalls are usually slow and manual. Automating changes to networking and security often runs into operational barriers that are difficult to work around. The result is that security becomes an afterthought, since it slows the process down: workloads are deployed quickly, and security returns as a priority only after a breach occurs and the business faces litigation. No one wants to be the person explaining to the CEO why security was not a high priority and why the business is now being sued.
Amazon famously expressed this conflict between workload agility and network change back in 2012 by saying, “The network is in my way.” The network, and changing network segments, are roadblocks to agile workload deployment. As a result, network segmentation is often not done at all, or is done in a very coarse way by networking teams.
But what if network segmentation and security could be implemented directly from within the workload? No more waiting for network operations to implement segmentation in the underlying network fabric.
Instead, what if you could implement the required segments directly from within the same agile process as deploying a workload via the DevOps process?
And, more importantly, what if security between these segments could be defined using natural language policy, rather than relying on outdated IP/port firewall rules? No more policy defined against source IP and ports pointing to destination IP and ports, since these are no longer tied to workloads throughout their lifecycles.
Instead, what if you could write a policy that reflects the way users perceive the resources, such as “Only Web servers deployed in New York can communicate with Database servers in London”?
And what if you could define this in a granular approach, achieving a true “micro-segmented” Zero Trust approach, such as “Only Webserver-1 can talk to Webserver-2 within the same location”?
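A policy like the examples above can be sketched in a few lines. The following is illustrative pseudocode (the label names, workloads, and rule format are invented for this sketch, not any vendor’s actual policy model); the point is that the rules reference labels, never addresses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    role: str       # e.g. "web", "db"
    location: str   # e.g. "new-york", "london"
    ip: str         # transient; deliberately never referenced by policy

# "Only Web servers in New York can talk to Database servers in London."
POLICY = [
    {"src_role": "web", "src_loc": "new-york",
     "dst_role": "db",  "dst_loc": "london"},
]

def allowed(src: Workload, dst: Workload) -> bool:
    """Default-deny: a flow is permitted only if some rule's labels match."""
    return any(
        src.role == r["src_role"] and src.location == r["src_loc"]
        and dst.role == r["dst_role"] and dst.location == r["dst_loc"]
        for r in POLICY
    )

web_ny = Workload("web-1", "web", "new-york", "10.1.5.9")
db_ldn = Workload("db-1", "db", "london", "10.2.0.20")

print(allowed(web_ny, db_ldn))  # True: labels match, IPs are irrelevant
```

If `web-1` migrates and its IP changes, the policy holds unchanged, because identity lives in the labels rather than in the addressing.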
There are four broad layers in a network architecture where policy can be applied, as illustrated in this diagram:
As you move up the layers, policy is expressed in more natural language, agnostic to the layers below. Applying workload policy directly at the workload frees the lower layers to focus on network priorities.
Allowing workload-layer tools to define segmentation and enforcement between workloads, abstracted above the underlying network fabric, frees network operations teams from having application requirements dictate network design. Pushing application segmentation and enforcement “up” to the workload layer lets those teams design network fabrics along network priorities.
Firewalls will still be used to create broad segments across the fabric, as they always have been, but there is no longer any need to create an unwieldy number of VLANs or subnets to accommodate application segmentation requirements. Network architects can instead focus on network priorities when designing segmentation, such as throughput, redundancy, route summarization, Spanning Tree, and ECMP. Application segmentation no longer needs to complicate network design. Having workloads create and enforce their own segmentation boundaries also means the network is no longer the first suspect when security problems are investigated.
Modern Segmentation for Modern Workloads
Illumio’s Adaptive Security Platform (ASP) enables micro-segmentation between workloads, essential to building a true Zero Trust architecture, and uses natural language expressions to define policy between those workloads. It gives you an application dependency map that provides a clear picture of exactly what workloads are communicating amongst themselves and who is initiating connections with whom – across your entire hybrid cloud fabric. And while you have full visibility into the IP addressing used by workloads, policy is not (and should not be) defined against IP addressing, since the association between network addressing and applications is transient.
Illumio uses labels to identify workloads by criteria that are abstracted above whichever segment of the underlying hybrid cloud network they are hosted on:
- These labels include metadata that is associated with workloads, regardless of their current IP addressing.
- The labels are Role, Application, Environment, and Location (“RAEL”).
- They are used to define segments between workloads, and enforcement between these labeled workloads is defined using natural language expressions, such as “Web workloads can communicate with App workloads, but only App workloads can communicate with Database workloads.” Policy is not specific to IP addressing.
- Illumio then translates these label-based policy rules into configurations specific to the network filtering capabilities of whatever OS is currently running those workloads – either Linux iptables and ipsets, Windows Filtering Platform (WFP), or the IPFilter state table for Solaris and AIX workloads.
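That last translation step can be illustrated with a simplified sketch: given label-based rules and each labeled workload’s current IP, emit concrete Linux iptables commands for one enforcement point. The workload inventory, rule format, and helper name below are invented for illustration; the actual translation Illumio performs is more sophisticated (using ipsets, for example).

```python
# Invented, simplified illustration of translating label-based rules into
# per-host iptables commands. Real implementations are far more involved.
WORKLOADS = {
    "web-1": {"role": "web", "ip": "10.1.5.9"},
    "web-2": {"role": "web", "ip": "10.1.5.10"},
    "db-1":  {"role": "db",  "ip": "10.2.0.20"},
}

RULES = [("web", "db", 3306)]  # (source role, destination role, TCP port)

def iptables_for(target: str) -> list[str]:
    """Emit INPUT-chain rules enforced on `target`, derived from label rules."""
    cmds = []
    for src_role, dst_role, port in RULES:
        if WORKLOADS[target]["role"] != dst_role:
            continue  # rule does not apply to this enforcement point
        for name, wl in WORKLOADS.items():
            if wl["role"] == src_role:
                cmds.append(
                    f"iptables -A INPUT -s {wl['ip']} -p tcp "
                    f"--dport {port} -j ACCEPT"
                )
    cmds.append("iptables -A INPUT -j DROP")  # default deny
    return cmds

for cmd in iptables_for("db-1"):
    print(cmd)
```

When a workload’s IP changes, only this mechanical translation needs to be re-run; the label-based policy itself never changes, which is exactly the decoupling the bullet points above describe.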
Since Illumio lets you define policy in a way that is fully abstracted above how and where a workload is hosted, networking priorities and application priorities are no longer in conflict.
Summing it Up
In modern data center and hybrid cloud network architectures, the perimeter is simply wherever your workload is currently hosted, and that workload can move dynamically across any segment of the cloud. The legacy definition of the perimeter as the boundary between the data center and the Internet is no longer relevant, and trying to architect the network fabric to enable micro-segmentation across application boundaries is challenging to scale. SDN solutions using controllers and overlay networks that terminate in hypervisors effectively move the boundary between the network and workloads up into the host, but they still define segments from the bottom up, using the network layer to solve a problem that lives in the workload layer.
A much more scalable approach in modern cloud fabrics is to go to the workload to create micro-segments and enforce policies that are relevant to workloads, thus freeing network segmentation to be defined along priorities that are relevant to network design. The network is no longer the roadblock to application workload agility and security. And the network is no longer first in line when application security troubleshooting occurs, which reduces finger-pointing during incident responses.