Secure Beyond Breach
Chapter 7


Considerations for Cloud and Containers

In this chapter:

  • Why micro-segmentation should be part and parcel of building new applications in the public cloud and in containers

  • Challenges and solutions for segmentation in the cloud

  • How to approach container-level segmentation

Cloud and Containers

“The reason that God was able to create the world in seven days is that he didn’t have to worry about the installed base.”

—Enzo Torresi

Organizations today can use hundreds, possibly thousands, of applications to run their business. Some build their own applications, increasing their dependency on the IT environment. On-demand compute environments (public cloud) and container-based computing seek to enable efficiency, flexibility, and speed while decreasing the need for large upfront capital outlay. These two monumental shifts present new challenges and opportunities for segmentation, as the benefits of using these services are weighed against the constant need for security both within applications and across the entire compute estate. Micro-segmentation for public cloud and containers requires additional consideration because these applications run in a different environment than applications on bare-metal servers or virtual machines in on-premise data centers.

Before exploring this scenario, let’s first further define what these shifts mean.

First, let’s consider public cloud adoption. New applications are being built “cloud first,” and old applications are being migrated to public cloud infrastructures provided by vendors like Amazon Web Services, Microsoft Azure, and Google Cloud Platform. A public cloud allows organizations to bring up on-demand compute infrastructure for their applications and then destroy it when they are done using it – all without having to own and manage any infrastructure. The availability of on-demand compute allows application teams to build and deploy business applications faster, thereby enabling quicker time to market without depending on the IT team. The on-demand compute option also allows IT teams to minimize the capital expenditures required to build and operate data centers, shifting that spend to operational expense.

A second shift is driven by the adoption of container-based computing. Organizations are building and running applications inside containers instead of running them as processes inside an operating system on a bare-metal server or a virtual machine. Docker containers allow developers to deliver changes from development to production in a fraction of the time using continuous integration and continuous delivery (CI/CD) pipelines. Once an application is in a container, it can be ported across entirely different environments, whether on-premise data centers or public clouds, maximizing the benefits of a hybrid environment. The portability of containerized applications also reduces their dependency on the underlying operating system, which minimizes the chance that a change in the operating system breaks the application.

Both shifts present new opportunities for infrastructure security through segmentation. They allow organizations to settle the long-standing tug-of-war between application teams, who want to build and deploy ever faster, and security teams, who are responsible for maintaining the security posture of those applications.

A little bit of upfront planning yields big results.

Organizations can begin with micro-segmentation in mind as they build new applications in the public cloud and in containers because they are not subject to legacy challenges. They can also bake micro-segmentation into the software development lifecycle (SDLC) instead of deploying it later, so application teams can continue to move fast yet stay secure.
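As a minimal sketch of what baking segmentation into the SDLC can look like, the Python example below keeps a segmentation policy as data next to the application code and validates it in the pipeline, so a change that opens an unintended path fails the build. The tier names, the policy format, and the validation rules are illustrative assumptions, not a specific product's schema.

    # Illustrative policy-as-code check; the policy format and tier names are assumptions.
    import sys

    # Allowed flows for this application, versioned alongside its source code.
    POLICY = [
        {"from": "web-tier", "to": "app-tier", "port": 8443},
        {"from": "app-tier", "to": "db-tier", "port": 5432},
    ]

    KNOWN_TIERS = {"web-tier", "app-tier", "db-tier"}

    def validate(policy):
        """Return a list of violations; an empty list means the policy is acceptable."""
        errors = []
        for rule in policy:
            if rule["from"] not in KNOWN_TIERS or rule["to"] not in KNOWN_TIERS:
                errors.append(f"unknown tier in rule: {rule}")
            if rule["to"] == "db-tier" and rule["from"] != "app-tier":
                errors.append(f"database tier may only be reached from the app tier: {rule}")
        return errors

    if __name__ == "__main__":
        violations = validate(POLICY)
        for v in violations:
            print("policy violation:", v)
        sys.exit(1 if violations else 0)  # a nonzero exit fails the CI job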

Traditional segmentation approaches present clear challenges in these fast-moving environments. Network-based hardware devices such as switches and firewall boxes cannot be deployed across a public cloud. Hypervisor-based solutions are also not feasible, because organizations have no control over the hypervisor in a public cloud. Similarly, multiple containers running different applications can share a single server (physical or virtual), making it infeasible to segment those applications using network- or hypervisor-based approaches.

Public cloud providers and container orchestration systems offer only rudimentary segmentation controls. Most organizations also end up using multiple public cloud providers while still keeping some bare-metal servers and virtual machines in on-premise data centers for applications that are not suited to public clouds or containers. Managing different segmentation strategies across all of these platforms is a challenge.
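As an example of what these provider-native controls look like in practice, the sketch below uses the AWS SDK for Python (boto3) to create a security group that admits only HTTPS from one subnet; the VPC ID, region, and CIDR block are placeholders. Azure and Google Cloud expose comparable but different constructs, which is exactly why a single cross-platform policy model becomes attractive.

    # A sketch of provider-native segmentation with AWS security groups via boto3.
    # The VPC ID, region, and CIDR block below are placeholders, not real values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a security group scoped to one application tier.
    sg = ec2.create_security_group(
        GroupName="web-tier",
        Description="Allow inbound HTTPS to the web tier only",
        VpcId="vpc-0123456789abcdef0",
    )

    # Permit TCP/443 from the load balancer subnet; other inbound traffic is denied by default.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "load balancer subnet"}],
        }],
    )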

It is also important to note that while an organization running applications in a public cloud does not have to buy and operate the infrastructure, there are cost tipping points beyond which running those applications in a public cloud is actually more expensive. Security policy portability therefore becomes just as important as container portability, since an application may move from cloud to cloud or from cloud to on-premise, or simply require a hybrid approach in which it spans public cloud infrastructure and traditional infrastructure.

A new approach, then, must be defined to meet this challenge. The goal is to enforce segmentation policies as close to the application workload as possible – with limited reliance on the public cloud infrastructure. The operating system therefore becomes the optimal location for visibility and enforcement: inside the virtual machine for applications running in a public cloud, and inside the container for containerized applications.
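One way to picture workload-level enforcement is an agent that translates a declarative allow-list into the host's own firewall. The sketch below assumes a Linux workload with iptables available; the policy entries, addresses, and chain name are made up for illustration, and a production agent would also attach the chain to INPUT, handle IPv6, and keep the rules in sync as the policy changes.

    # A minimal sketch of host-based enforcement using Linux iptables.
    # The policy entries and the chain name are illustrative assumptions.
    import subprocess

    # Flows this workload is allowed to receive: (source CIDR, protocol, destination port).
    POLICY = [
        ("10.0.1.0/24", "tcp", 8443),   # app tier -> this database workload
        ("10.0.2.15/32", "tcp", 22),    # jump host -> SSH
    ]

    def apply_policy(rules):
        """Rebuild a dedicated allow-list chain so enforcement lives on the workload itself."""
        subprocess.run(["iptables", "-N", "SEGMENT"], check=False)  # create the chain if it does not exist
        subprocess.run(["iptables", "-F", "SEGMENT"], check=True)   # flush any previous rules
        for cidr, proto, port in rules:
            subprocess.run(
                ["iptables", "-A", "SEGMENT", "-s", cidr, "-p", proto,
                 "--dport", str(port), "-j", "ACCEPT"],
                check=True,
            )
        subprocess.run(["iptables", "-A", "SEGMENT", "-j", "DROP"], check=True)  # default deny

    if __name__ == "__main__":
        apply_policy(POLICY)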
