One of the benefits of being in this industry for many years is that we can observe how network security trends in the data center have evolved, and somewhat predict what is coming next based on common patterns and intuition.
Network and security boundaries are changing
Fifteen years ago, network security was pretty simple in the data center. Layer 2 protocols were absolute kings in their realm, and the firewall could sit at the edge, guarding the Internet or WAN connection. Servers for a given application were all connected in the same rack, and the boundary between the different pieces of the infrastructure was well-defined. Segmenting this was pretty straightforward.
A few years later, in order to consolidate all these servers and simplify connectivity, chassis and blade servers slowly gained traction, creating the first shift in network and security boundaries between the people managing servers and the people in charge of network and security. Who's responsible for the connectivity modules in these chassis, and where should we insert the security appliances? Most of the time, the network module ended up being the least important module in the chassis, and it was always a nightmare to connect to the rest of the network and security infrastructure.
But this battle was upstaged by a new technology. VMware's ESX hypervisor quickly democratized the ability to run many virtual servers on the same physical server. As a result, to connect these virtual servers together, the network had to shift once again, this time into the hypervisor. The shift began with a very simple virtual switch, but quickly expanded to layer 3 services and, eventually, security.
And while data center infrastructure continued to evolve, the public cloud began its ascent, offering the enterprise market a range of fully automated and extremely agile services. It did not take long for developers to understand the value of this new abstracted infrastructure, which could deliver a scalable and highly available service without the complexity of managing infrastructure.
A few years ago, a new type of workload emerged: containers, which are lightweight, portable, and easy to spin up or tear down in seconds. With the proliferation of containers, developers quickly realized that there was a need to orchestrate these compute resources, along with the network, to make sure applications could scale up and down without depending on an external network and security infrastructure. A container cluster is a new infrastructure element that mixes compute, network, and security, and creates, once again, another shift in the network security boundary.
So, what did we learn over 15 years? What is the common pattern of all these evolutions?
The network security boundary is shifting more and more into the compute layer because developers are always pushing the limit to get more flexibility while developing and testing their applications.
Network and security teams are late to the game and, at best, can recommend choices or solutions, but most of the time they inherit what's been decided by application or cloud teams.
Securing infrastructure is harder when it has not been thought out and designed with security in mind from the start; adding security later usually creates an extra level of complexity.
What's the problem with containers?
Containers and container clusters are not an exception to this trend of moving the network more and more into the software and compute layers. As described earlier, we've seen this for many years, and there is no reason why it would change unless network and security teams work to reverse this trend.
From a network and security perspective, containers do not introduce anything new or unknown; they simply combine what we already know (IPs, subnets, DHCP/DNS, zones, segments, encapsulation, NAT, firewalls, and load balancers). The difference is that everything now happens within the OS itself, and that is one fundamental problem.
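To see how familiar these constructs really are, here is a minimal sketch, using only Python's standard `ipaddress` module, of the kind of subnetting many container network plugins perform in software: carving a per-node pod subnet out of one cluster-wide CIDR. The CIDR values and node names are illustrative assumptions, not taken from any particular platform.

```python
import ipaddress

# Illustrative only: many container network plugins carve a per-node pod
# subnet out of one cluster-wide CIDR -- ordinary IP subnetting, performed
# in software on the host rather than on a physical switch.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")  # assumed cluster CIDR

# Hand each node its own /24, much like allocating VLAN subnets.
node_subnets = cluster_cidr.subnets(new_prefix=24)

nodes = ["node-a", "node-b", "node-c"]  # hypothetical node names
allocation = dict(zip(nodes, node_subnets))

for node, subnet in allocation.items():
    # The first usable address is often reserved for the node's bridge/gateway.
    gateway = next(subnet.hosts())
    print(f"{node}: pods in {subnet}, gateway {gateway}")
```

The point is not the code itself but that the building blocks (CIDRs, gateways, per-segment addressing) are exactly the ones network teams have managed for decades, only now instantiated inside the host OS.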
IT teams love boundaries, responsibilities, and ownership, and that’s the opposite of how container clusters operate. They have been designed to be self-sufficient, orchestrated and opaque from the outside world. On one hand, it is great news that a new piece of infrastructure does not require extensive design sessions to be connected and running. On the other hand, it creates a real security question as to how the application flows can be secured if you don’t know and understand what’s happening within these clusters.
What can be done to change this?
Ideally, developers should be developing code and handing it over to another team to push the code into production – in a thoroughly tested and automated way, on an infrastructure designed for scale and availability, and with security at the top of the priorities at every layer of the stack.
Well, it appears that in many organizations, we haven't reached that point yet. DevOps teams are connected to their peers in development, but this is not always the case for network and security teams, and that needs to change if we want containers to succeed as a disruptive technology in the market.
Network and security teams should spend more time understanding what is being transported and secured by the infrastructure. They should learn what a CI/CD pipeline is, and they should have an opinion on how things are built within the application so that they can adapt the security mechanisms to complement what the application cannot achieve on its own. This requires learning new skills, accepting differences, and being critical but open-minded toward new concepts that at first may not seem like a great idea but can actually be very efficient.
Containers are a perfect example of a technology that forces people from every area in an IT department to learn from one another.
Otherwise, it is a recipe for disaster. There is no container cluster without networking, there is no containerized application in production without security, and there is no shared infrastructure without segmentation. Network and security teams need to use this opportunity to learn new ways of doing things, spend more time understanding how things can be done in software, and take ownership of the network and security layers by proposing simple, secure, and stable designs to serve the application layer.
Where should you start?
There is obviously no magic bullet or secret weapon that could be the one-size-fits-all answer. But here are some ideas that can help drive your team to success:
Get to know your frenemies: Developers and DevOps teams are not the enemies of the network and security teams. They are all serving the same purpose: the business. But without knowing what other teams do, it is harder to see what can be done to be better as a group. Building complex infrastructures like container clusters requires intertwined decisions to be successful, especially when it comes to security.
Acquire the knowledge: Nobody knows everything, but everybody can learn anything. It is okay to be light in some areas of your infrastructure knowledge, but it is definitely not okay to be unwilling to learn how things are done or should be done. Containers, orchestration platforms, and service meshes are not easy to approach. It takes time to feel comfortable with new terminology or concepts, but it is so rewarding when you cross that threshold of understanding and can turn that knowledge into action.
A network is a network and security is universal: Keep in mind that a container cluster is a collection of IP addresses (associated with containers) that communicate with one another. Also, applications are not meant to live in a container cluster without being exposed to the world, so some doors will have to be opened. Network and security engineers are responsible for the flows going from one end of a cluster to the other, as well as for getting packets in and out of the cluster. If something is compromised within a container cluster, it is the responsibility of the network security team to monitor, react, and respond to avoid the spread of a breach. Yes, container clusters take a different approach to network and security, but a cluster is still a network that needs to be segmented and secured.
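The segmentation idea above can be sketched in a few lines of Python. This is loosely inspired by label-based policies such as Kubernetes NetworkPolicy, but it is not the Kubernetes API: the function names, policy shape, and the default-deny behavior are assumptions made for the sketch (Kubernetes, for instance, allows all traffic to pods that no policy selects).

```python
# Minimal sketch of label-based segmentation (not a real policy engine):
# a flow is denied by default and allowed only when some policy selecting
# the destination pod also whitelists the source pod's labels.

def selects(selector: dict, labels: dict) -> bool:
    """True when every key/value pair in the selector matches the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def flow_allowed(src_labels: dict, dst_labels: dict, policies: list) -> bool:
    applicable = [p for p in policies if selects(p["pod_selector"], dst_labels)]
    if not applicable:
        return False  # default deny in this sketch (unlike Kubernetes)
    return any(
        selects(rule, src_labels)
        for p in applicable
        for rule in p["allow_from"]
    )

# Hypothetical policy: only the frontend may talk to the database.
policies = [
    {"pod_selector": {"app": "db"}, "allow_from": [{"app": "frontend"}]},
]

print(flow_allowed({"app": "frontend"}, {"app": "db"}, policies))  # allowed
print(flow_allowed({"app": "batch"}, {"app": "db"}, policies))     # denied
```

Whatever the syntax, this is the same east-west segmentation problem network teams have always owned, expressed over labels instead of VLANs and zones.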
Seek the truth: It is important to understand and challenge the status quo. When things do not work as you thought they would, it’s okay. This means that collectively, as a team, you need to seek out the truth and agree upon it. A well-understood technology is easier to deploy, secure, and troubleshoot.
Containers, orchestration platforms, and service meshes are gaining a lot of traction in IT organizations nowadays and it is extremely important that, as a network and security engineer, you understand the concepts of these technologies. Some concepts will sound very familiar, some others will sound very odd, but in order to properly secure things, you must know how they actually work!
Defining Metrics to Successfully Manage Your Zero Trust Implementation Plan
The Zero Trust mindset assumes that one's perimeter defenses have already been breached, so priorities pivot to containing the lateral movement of malicious actors. Illumio published the 3-stage Zero Trust Plan, which teams can use to plan and operationalize their Zero Trust journey.