Hybrid cloud network fabrics solve many challenges in today's enterprise networks. They enable a single network architecture that is abstracted above the underlying data center environments, along with fault tolerance, resiliency, “elastic” services across data centers, and a network topology that is not dependent on any one underlying network vendor. An enterprise may manage multiple physical network environments, but can it create a single network fabric across all of them?
For example, an enterprise might maintain dual data centers (one primary and another for disaster recovery), in addition to multiple public cloud deployments on AWS, Azure, and/or GCP. Each of these hosting environments deploys its own underlying physical network architecture with its own specific technologies, often from different routing and switching vendors.
But with the advent of Software-Defined Networking (SDN) across the networking industry, the majority of networking vendors today can create their very own “cloud.” In networking terminology, a cloud is essentially a virtual network, one where the network topology is abstracted above the physical topology, often called an “overlay network.” And an overlay network is essentially a collection of tunnels, all managed by a central SDN controller. This cloud can create a single, unified network fabric across all of the managed hosting environments.
A tunneling protocol, such as VXLAN, is used to create the overlay network topology, so that the path of traffic is determined by how these tunnels are deployed rather than how the underlying physical topology is deployed. All networking vendors use different terms to describe their SDN-based approach to enable this, but at the end of the day, they are all basically doing the same thing: using tunnels to create an overlay network that is automated and managed by a central controller.
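To make the tunneling concrete: VXLAN (RFC 7348) wraps each inner Ethernet frame in an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), and it is this VNI that lets a controller stitch tunnels into distinct overlay segments. A minimal sketch of that header layout, purely to illustrate the mechanics:

```python
import struct

VXLAN_FLAG_I = 0x08  # "I" bit: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Byte 0: flags (I bit set); bytes 1-3: reserved;
    # bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header)
    if not (flags_word >> 24) & VXLAN_FLAG_I:
        raise ValueError("VNI-valid flag not set")
    return vni_word >> 8

hdr = build_vxlan_header(5000)
print(len(hdr), parse_vni(hdr))  # 8 5000
```

In a real fabric this encapsulation is done by the hypervisor or switch data plane; the point is simply that traffic paths are defined by which tunnel endpoints share a VNI, not by the physical topology underneath.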
While many of these vendors have the ability to extend overlay networks from on-premises data centers out to public cloud vendors, the reality is that many enterprises only deploy overlay networks locally in their on-prem data centers.
Instead of extending their on-prem SDN solution out to the public cloud (for example, extending their tunneled network topology out to AWS or Azure), many enterprises create locally-managed environments in their public cloud instances and manage these networks differently from how they manage their on-prem data center networks. They will often choose to use the networking constructs native to each public cloud vendor, such as VPCs in AWS and VNets in Azure, and then manage the SDN solutions in their on-prem data centers separately.
The result is a hybrid cloud network deployment that is managed in a “swivel-chair” approach. The network manager will swivel their chair back and forth between tools used to manage their on-prem data center and other tools used to manage their public cloud networks. The term hybrid cloud often implies a single, unified architecture, which is often not accurate when it comes to operations.
Securing your hybrid environment
This fragmented approach to network management is especially true of security across a hybrid cloud. Just as locally-managed network tools are used in each environment across on-premises data centers and public clouds, the same is true for security solutions. Most SDN network vendors have their own basic, integrated security tools that can be used to enable some level of policy enforcement in their locally managed environment. And all of the major public cloud vendors have their own security solutions, enabling basic levels of policy enforcement as well.
But, unfortunately, these security tools are siloed in the overall architecture. Each security solution does not play well in the sandbox with the others, and correlating a security breach across all of them is a cumbersome task that introduces long delays in resolution.
Additionally, if workloads are live-migrated between environments, such as between an on-prem data center and a public cloud instance, policy enforcement does not usually follow the workload to its destination. This forces either manual approaches to transfer policy between the security tools in each environment, or reliance on a SIEM solution fed by all environments, which requires a great deal of upfront scripting work.
Due to operational complexity, these steps are simply not often done, resulting in significant gaps in security and workload visibility across the hybrid cloud.
Instead of relying on the siloed, native security tools in each network environment of a hybrid cloud to enforce security for your workloads, why not simply enforce security and visibility directly at the workload, abstracted from the underlying networking tools?
Similar to how overlay networks create topologies that are abstracted above dependencies on the underlying routers and switches, why not apply the same approach to security? Security should also be abstracted – virtualized – from underlying network fabric dependencies and enabled directly at the workload, regardless of where that workload is hosted or where it live-migrates to during its lifecycle. Similar to how overlay networks enable one consistent approach to networking in a hybrid cloud, security should be designed in the same way.
How microsegmentation can help
At Illumio, we apply security (microsegmentation) directly at every single workload across an entire hybrid cloud. Since microsegmentation policy should be enforced as close as possible to the entity you are trying to protect, we deploy a lightweight agent on every workload, virtual or bare-metal:
- This agent is not in-line to any traffic. It simply resides in the background and monitors application behavior directly on the workload.
- Information is then sent back to the Policy Compute Engine (PCE) to enable clear visibility into behavior between all workloads, regardless of where they are hosted, or which network environments they live-migrate between. Essentially, we virtualize security, abstracted from underlying networking dependencies.
- The central PCE uses simple labels to define what needs to be segmented, since defining policy against IP addresses doesn’t scale in architectures where workloads live-migrate dynamically and IP addresses change.
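To see why labels scale where IP-based rules do not, consider a toy policy model (purely illustrative; these class and rule shapes are assumptions for the sketch, not Illumio's actual API): rules match on workload labels, so a workload keeps its policy when it live-migrates and its IP address changes.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    ip: str                                      # changes on live-migration
    labels: dict = field(default_factory=dict)   # e.g. {"app": "db", "env": "prod"}

@dataclass
class Rule:
    src: dict    # label selector the source workload must match
    dst: dict    # label selector the destination workload must match
    port: int

def matches(selector: dict, labels: dict) -> bool:
    # A selector matches when every key/value it names is present in the labels.
    return all(labels.get(k) == v for k, v in selector.items())

def allowed(rules, src: Workload, dst: Workload, port: int) -> bool:
    return any(
        matches(r.src, src.labels) and matches(r.dst, dst.labels) and r.port == port
        for r in rules
    )

rules = [Rule(src={"app": "web"}, dst={"app": "db", "env": "prod"}, port=5432)]
web = Workload("web-1", "10.0.1.5", {"app": "web", "env": "prod"})
db  = Workload("db-1", "10.0.2.9", {"app": "db", "env": "prod"})

print(allowed(rules, web, db, 5432))   # True
web.ip = "172.16.0.4"                  # live-migrated: new IP, same labels
print(allowed(rules, web, db, 5432))   # True — policy follows the workload
```

Note that no rule ever names an IP address, so nothing needs to be rewritten when the workload moves between environments.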
While the PCE creates a virtualized security architecture, it can also “reach down” into the network layer and configure access lists to hardware switches in an on-premises data center, if needed. So, while security is virtualized and abstracted above the network, Illumio can apply both workload and network security from one central policy operational model.
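As a rough illustration of what "reaching down" could look like (a hypothetical rendering for this sketch, not Illumio's actual implementation or any vendor's CLI syntax), a label-based rule can be flattened into conventional ACL entries once the current IP addresses of the matching workloads are resolved:

```python
def render_acl(rule_name: str, src_ips: list, dst_ips: list, port: int) -> list:
    """Flatten a resolved label-based rule into permit lines for a hardware ACL.

    The syntax here is generic and illustrative, not tied to any vendor.
    """
    lines = [f"ip access-list extended {rule_name}"]
    for s in src_ips:
        for d in dst_ips:
            lines.append(f"  permit tcp host {s} host {d} eq {port}")
    return lines

# Resolved from labels at push time; re-rendered whenever workloads move.
for line in render_acl("web-to-db", ["10.0.1.5"], ["10.0.2.9"], 5432):
    print(line)
```

The key point is the direction of the translation: labels remain the source of truth, and the network-layer ACLs are just a rendered artifact that can be regenerated whenever workload IPs change.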
If you define the basic, essential resources in any data center or hybrid cloud as being compute, storage, and network, all three of these resources are now commonly virtualized and abstracted above underlying resources.
The fourth essential data center resource is security, and it should similarly be virtualized and abstracted from underlying resource dependencies. Hybrid cloud should not equal hybrid security. Security should span the entire network cloud fabric, and workload security should be enforced directly at the workload, as one unified security “fabric” across all network fabrics. Virtualizing security frees the underlying network architects to focus on network priorities, and places workload security where it belongs: directly at the workload.
Learn more about how this approach makes it easier to manage security across your hybrid cloud environments.