Illumio Blog
October 23, 2015

AWS Re:Invent 2015 and the Evolution of Containers

Mukesh Gupta


At this year's fourth annual AWS re:Invent conference (my third time in attendance), I focused my time on digging deeper into containers, arguably today's hottest topic. A few of my colleagues and I had been experimenting with containers for a few months, and I wanted to attend some sessions and talk with the people stopping by the Illumio booth to figure out where the industry was headed.

So, what did I learn?

Illumio at AWS re:Invent 2015

Containers are being adopted quickly—but why?

More people than I expected asked me how the Illumio Adaptive Security Platform (ASP) would work for their containerized environments. Most of these people were using containers in their labs or for building engineering tools, but a few of the people we spoke to were running their production applications using containers.

A common misconception, and one I held earlier this year as well, is that the primary driver for the adoption of containers is the reduction of the "hypervisor tax." (Since containers are much lighter than virtual machines, you can run far more containers than VMs on a physical server, thereby achieving better utilization of your hardware resources.) However, I have come to realize that container adoption is actually being driven by the agility and simplicity that containers bring to developing, deploying, and maintaining application software.

If you have ever tried to deploy an open-source application (e.g., openerp, orangehrm, prestashop), you know how painful it is to ensure that the right versions of the required software packages (e.g., mysql, postgres, php, tomcat) are installed before the application works. This is because the application software assumes that the packages it depends on are part of the operating system; ensuring those dependencies are present is the responsibility of the person installing and maintaining the application.

Keeping track of and maintaining these dependencies causes a lot of headaches for development and operations teams. But in the world of containers, the container image carries all the required packages along with the application software. Therefore, this self-contained container can be dropped onto a bare-bones operating system anywhere, and it simply works. With this in mind, it's easy to see why developers and DevOps teams are driving the move toward containers.
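To make this concrete, here is a rough sketch using the Docker SDK for Python (docker-py); the image names, credentials, and port mapping are illustrative assumptions. On a host or VM with nothing but the Docker Engine installed, the wordpress image brings its own PHP and Apache along, so the application runs without installing any of those packages on the base operating system.

```python
import docker

# A rough sketch using the Docker SDK for Python (docker-py).
# Image names, credentials, and the port mapping are illustrative assumptions.
client = docker.from_env()

# Database tier: the mysql image carries its own MySQL binaries.
client.containers.run(
    "mysql:5.6",
    name="app-db",
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    detach=True,
)

# Application tier: the wordpress image bundles PHP and Apache, so nothing
# beyond the Docker Engine is needed on the host operating system itself.
client.containers.run(
    "wordpress",
    name="app-web",
    links={"app-db": "mysql"},   # the app reaches the database via the link alias "mysql"
    ports={"80/tcp": 8080},      # publish the application on host port 8080
    detach=True,
)
```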

Add to this the fact that the vast majority of people I talked to were running containers inside VMs, not on bare-metal servers, which further supports my point that they are not primarily trying to save the resources consumed by the hypervisor.

Networking for containers is confusing

I attended a few sessions on containers, and the recommended networking architectures for distributed applications built on containers varied wildly. The session "Amazon EC2 Container Service: Distributed Applications at Scale," given by the GM of AWS EC2, recommended putting the containers behind Elastic Load Balancers. But Docker's session "From Local Docker Development to Production Deployment" listed the disadvantages of using load balancers and instead recommended linking the containers using Docker's ambassador pattern.

An ambassador is a lightweight container that handles service discovery and eliminates the need for a load balancer in front of the application tiers. For example, in a two-tier application, an ambassador container runs alongside the web container. The web container sends its database requests to the ambassador container, which forwards them to a configured database container. As the database layer scales up or down, the ambassador's configuration can be updated dynamically and the web tier simply keeps working. This approach eliminates the choke point (the load balancer tier in the middle), but you have to manage a number of ambassador instances instead of a single load balancer. So, pick your battle.
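To illustrate the linkage, here is a rough sketch using the Docker SDK for Python; the svendowideit/ambassador image is the one used in Docker's own ambassador example, and the web application image is a hypothetical placeholder. The web tier only ever knows about the ambassador, so the real database container can be replaced or moved without touching the web containers.

```python
import docker

client = docker.from_env()

# Database tier: the container the ambassador will forward to.
client.containers.run(
    "postgres:9.4",
    name="db1",
    environment={"POSTGRES_PASSWORD": "example"},
    detach=True,
)

# Ambassador: a small proxy container that exposes the database under a
# stable name. Re-pointing it later (say, at db2) does not touch the web tier.
client.containers.run(
    "svendowideit/ambassador",        # ambassador image from Docker's documentation example
    name="db-ambassador",
    links={"db1": "database"},        # forward traffic to db1
    detach=True,
)

# Web tier: talks only to the ambassador, never to the database directly.
client.containers.run(
    "my-web-app:latest",              # hypothetical application image
    name="web1",
    links={"db-ambassador": "db"},    # the app simply connects to the host name "db"
    detach=True,
)
```

If db1 is later replaced, only the ambassador needs to be reconfigured or relinked; web1 keeps connecting to the same "db" alias.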

Long story short: Networking and service discovery for containers is confusing and still emerging. The container community needs to settle on these architectures with considerations around not just ease of deployment and management, but also around security—which brings me to my next topic.

What about security for the containers?

Security was barely mentioned in any of the sessions on containers. The focus instead was on how to build and deploy applications quickly using containers. But the reality is, we won't see broad adoption of containers in enterprise production environments until security teams know they can effectively secure them.

The last revolution in the data center compute infrastructure happened when enterprises moved from running applications on bare-metal servers to running them on virtual machines. This broke all the traffic visibility and enforcement solutions that were network-based (e.g., on switches and firewalls). The network-based devices no longer had visibility of—or control over—the traffic between two VMs running inside the same physical server because this traffic was switched by a virtual switch inside the hypervisor and it was never seen by the physical firewalls or switches sitting on the physical network. A number of hypervisor-based solutions were invented in the last decade to solve this problem—providing traffic visibility and enforcement for this VM-to-VM traffic.

Container security

Containers take this to the next level, breaking all the hypervisor-based traffic visibility and enforcement solutions, as the traffic between two containers running inside a VM is switched directly by a bridge inside the VM and is never seen even by the hypervisor. If you were to set a security policy to not allow two containers within a VM to talk to each other, where would you enforce that policy knowing that traffic will not even go to the hypervisor layer? It will have to be done one level deeper—inside the VM itself.
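As a crude illustration of enforcing such a policy one level deeper, the sketch below assumes the default docker0 bridge inside the VM and inserts an iptables rule there; this is roughly what Docker's --icc=false option does under the hood, and it works only because the rule lives in the same VM that switches the container-to-container traffic.

```python
import subprocess

# A minimal sketch, assuming the default "docker0" bridge inside the VM.
# Traffic between two local containers is forwarded from the bridge back onto
# the same bridge, so a rule placed inside the VM (not on the network or the
# hypervisor) is where such a policy can actually be enforced.
def block_inter_container_traffic(bridge: str = "docker0") -> None:
    # Drop any packet forwarded from the bridge back onto the same bridge,
    # i.e., traffic between two containers on this VM.
    subprocess.run(
        ["iptables", "-I", "FORWARD",
         "-i", bridge, "-o", bridge, "-j", "DROP"],
        check=True,
    )

if __name__ == "__main__":
    block_inter_container_traffic()
```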

To top that off, no enterprise will move all its infrastructure to containers. Most of them will end up with some hybrid mix of bare-metal servers, VMs, and containers. So, unless an enterprise uses a security solution that provides visibility and enforcement inside the VM—one that will work for all three types of compute infrastructure—it will end up managing three different solutions for these three different types, increasing operational costs and complexity significantly.

It's clear that as application developers embrace containers, and other technologies that help them move faster, security teams need to be part of the conversation and be empowered with technology that helps them move just as fast.

Topics: Adaptive Security, Illumio News, DevOps
