Today’s data centers are evolving, increasingly moving to a hybrid cloud architecture that makes the network more agile and delivers greater IT efficiency through the adoption of virtualization, software-defined networking (SDN), and hyper-converged infrastructure (HCI). In fact, IBM believes that the hybrid cloud market will reach $1 trillion by 2020.
This shift changes the narrative around compute, storage, and network resources, abstracting them away from any one physical location while enabling the programmability and network automation that provide on-demand delivery of workloads across on-prem and multi-cloud environments.
A data center is a physical environment that can be located and managed on-prem or across multiple public cloud locations. Increasingly, though, the physical location matters less than how developer and user resources are virtualized, deployed, automated, and scaled out.
In short, private, public, and hybrid clouds now serve as the through line between compute, storage, and networking.
Security is too often an afterthought
As data centers have gone through this evolution, a strong, secure perimeter is no longer enough, because it is difficult to extend that security out to the cloud, where many of your critical resources are located or hosted.
Although it is commonly agreed that security needs to evolve along with all other data center resources, in reality it often still consists of high-performance network firewalls deployed at the network perimeter and between critical resources such as databases and user segments. Alongside firewalls, organizations deploy various security appliances to scan for malware and viruses or to perform URL filtering, but still place them in-line at network perimeters, adhering to the legacy and outdated assumption that threats reside “outside” and everything within a network perimeter can be trusted.
Security needs to address several requirements, among them the ability to “scale up” and “scale out.” Scaling up means increasing the power of a given resource; scaling out means deploying several on-demand instances of a resource and load balancing across them, all in an automated fashion across the network.
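The scale-out half of that distinction can be sketched in a few lines of Python. The instance names below are hypothetical; the point is simply that scaling out adds instances on demand and balances load across them:

```python
import itertools

class ScaleOutPool:
    """Toy model of scale-out: add instances on demand, round-robin across them."""
    def __init__(self):
        self.instances = []

    def scale_out(self, name):
        # Scaling out: deploy another on-demand instance of the resource.
        self.instances.append(name)
        self._cycle = itertools.cycle(self.instances)

    def route(self):
        # Load-balance each request across all current instances.
        return next(self._cycle)

pool = ScaleOutPool()
pool.scale_out("web-1")
pool.scale_out("web-2")          # demand rises, so a second instance is deployed
handled = [pool.route() for _ in range(4)]
print(handled)                   # alternates between web-1 and web-2
```

Scaling up, by contrast, would mean giving `web-1` more CPU or memory rather than adding `web-2` at all.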
Zero Trust means that the security boundary is wherever the workload is
Today, the vast majority of workloads are virtualized, and organizations often use more than one hypervisor in their infrastructure stack.
What’s more, SDN is increasingly deployed as part of a data center network infrastructure upgrade or as part of a network virtualization deployment.
In addition, HCI enables convergence of compute, storage, and network resources as part of an integrated virtualized system for on-demand scaling of resources, designed to better support the needs of agile and highly distributed application workloads.
Today’s data centers are highly dynamic and incredibly complex. Organizations need a new approach to security that enables speed while also addressing new requirements and changes to the underlying architecture, following workloads both on-prem and in the cloud.
The Line of Business (LoB) is looking to deliver new applications in less time. This goal is part of the DevOps philosophy, which provides the business agility to meet the varying resource requirements of running a business. The Continuous Integration and Continuous Delivery (CI/CD) pipeline addresses the entire software development cycle, from initial code to testing, production, and hosting. This allows application development to move quickly from a monolithic approach, where the application is tied to siloed application infrastructure, to a microservices approach that is highly distributed across multiple on-prem and cloud environments and relies on containers, orchestration tools like Kubernetes, and API-driven workflows.
These new application technology trends are transforming how today’s applications are designed.
The result is a hybrid cloud, deployed across both on-premises and public cloud fabrics, that enables workload mobility and automation. Multi-cloud provisioning tools make it easier for DevOps teams to provision application workloads across multiple on-prem and public cloud environments. They also help organizations facilitate a “lift and shift” migration of specific applications or processes across multiple on-prem data centers and clouds.
The old security trust boundary was the data center perimeter: everything inside the data center was trusted, and everything outside of it was untrusted. But in the modern world of virtualized and dynamic workloads, every workload is a trust boundary. Everything else, including the data center and cloud core network, is considered untrusted. Zero Trust means that the trust boundary is not in any specific location in the network; it is wherever the workload touches the network, even as the workload moves dynamically across it.
These data center trends also extend to the traffic within a given data center. Compare the traffic flowing in and out of your data centers (north-south traffic) with the traffic between application workloads inside them (server-to-server, or east-west traffic): the vast majority is east-west. This means legacy approaches to security are not enough; treating the network perimeter alone as the trust boundary leaves you blind to most of your traffic. A Zero Trust approach avoids these blind spots by extending comprehensive visibility and security controls across workloads in a data center and hybrid cloud architecture.
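The north-south versus east-west distinction comes down to whether both endpoints of a flow sit inside the data center. A minimal sketch, assuming a hypothetical internal prefix of 10.0.0.0/8 for the data center fabric:

```python
import ipaddress

# Hypothetical internal prefix for the data center fabric.
DC_NET = ipaddress.ip_network("10.0.0.0/8")

def classify_flow(src, dst):
    """East-west if both endpoints are inside the data center, else north-south."""
    internal = (ipaddress.ip_address(src) in DC_NET,
                ipaddress.ip_address(dst) in DC_NET)
    return "east-west" if all(internal) else "north-south"

print(classify_flow("10.1.2.3", "10.4.5.6"))      # east-west (server to server)
print(classify_flow("198.51.100.7", "10.1.2.3"))  # north-south (client to data center)
```

A perimeter-only firewall sees only the second kind of flow; the first kind, which dominates in practice, passes it by entirely.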
Security needs to address modern workloads
The reality is that workloads are everywhere, and they no longer stay in one place. They are ephemeral, moving across on-prem and multi-cloud environments. And most applications are hosted in a heterogeneous environment that consists of different workload types: bare-metal servers, hypervisors, VMs, and containers.
Security needs to address this dynamic nature of on-demand workloads.
The goals of an effective security architecture can be broken down into three basic categories:
- Visibility to see everything inside the data center and across multi-cloud environments to understand what is talking to what, and why, across network flows, devices, applications, and workloads.
- Segmentation to reduce the attack surface across east-west traffic with application whitelisting and micro-segmentation.
- Orchestration to integrate easily with existing third-party tools and infrastructure to help automate and enforce consistent workload security everywhere.
An effective data center and hybrid cloud security architecture needs to see everything, across all workloads and applications. This means that the security architecture should not be dependent on how the underlying network fabric is deployed.
One aspect of a Zero Trust architecture is segmenting along the boundaries of every single workload via micro-segmentation. Segmentation can no longer be defined along VLANs or IP subnets; instead, each workload must be treated as its own boundary, especially since resources in modern data centers and clouds are automated and orchestrated dynamically.
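In practice, micro-segmentation pairs whitelisting with default deny: traffic between two workloads is dropped unless an explicit allow-list entry exists. A toy sketch, where the workload names and services below are hypothetical:

```python
# Toy micro-segmentation policy: the trust boundary is each workload, not a
# VLAN or subnet. Workload names and the allow-list entries are hypothetical.
ALLOW = {
    ("web", "app"): {"tcp/8080"},
    ("app", "db"):  {"tcp/5432"},
}

def is_permitted(src_workload, dst_workload, service):
    # Default deny: traffic passes only if it matches an allow-list entry.
    return service in ALLOW.get((src_workload, dst_workload), set())

print(is_permitted("web", "app", "tcp/8080"))  # True  (whitelisted tier-to-tier path)
print(is_permitted("web", "db", "tcp/5432"))   # False (no direct web-to-db path)
```

Because policy is keyed by workload identity rather than by subnet, it stays valid even as orchestration moves a workload to a different host, VLAN, or cloud.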
Therefore, organizations need to reduce security silos with robust APIs to enforce consistent workload security across the entire fabric.
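One way to reduce those silos is a thin abstraction layer that pushes a single policy definition to every enforcement point through that point's own API. Everything below, including the class names and the rendered rule formats, is a hypothetical sketch rather than any real product's API:

```python
class FirewallBackend:
    """Renders a policy as a hypothetical firewall rule string."""
    def apply(self, policy):
        return f"permit {policy['proto']} {policy['src']} -> {policy['dst']}"

class HostAgentBackend:
    """Renders the same policy for a hypothetical host-based agent."""
    def apply(self, policy):
        return {"action": "allow", **policy}

def enforce_everywhere(policy, backends):
    # One policy definition, pushed consistently to every enforcement point.
    return [backend.apply(policy) for backend in backends]

rules = enforce_everywhere(
    {"proto": "tcp", "src": "app", "dst": "db"},
    [FirewallBackend(), HostAgentBackend()],
)
print(rules)
```

The design point is that the policy is authored once; each backend translates it into its native form, so the intent stays consistent across the fabric.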
To sum it up, today’s data centers are changing, and complexity is the biggest challenge. Data center teams need a simpler, unified approach that enables effective workload protection everywhere while keeping the environment fast, secure, and reliable, mitigating the risk of business disruption from security breaches.
Check out these videos to learn more about the approaches to take to secure modern workloads:
- The Evolution of Segmentation and the benefits of a host-based architecture
- Beyond the Hype: Conversations on Mobilizing Zero Trust with Forrester Principal Analyst, Chase Cunningham