Introducing a blog series by Illumio CTO PJ Kirner that will cover key concepts to help you think about data center and cloud security in a new way, and realign your approach with evolving requirements across application environments. Read the first post of this series, "Why We Need a Data Center and Cloud Security Revolution."
I can’t imagine getting to work without a GPS. Yes, I pretty much drive the same route every day and could probably do it blindfolded, but the GPS gets me there fastest by recommending the right route: it has real-time traffic data and can guide me around accidents and jams. It knows both ends of my route, where I’m coming from and where I’m headed, and by mixing in historical data it can predict the best time for me to leave so I’m always on time.
When trying to segment your applications, you need more than run-of-the-mill visibility, more than bar charts and spark lines. You need something that ensures you have the right insights, context, and perspective to optimize your approach to securing your environment. Just like the GPS, you need a map that pulls it all together: the real-time view, the historical data, the application context. You need all of this to understand your applications and everything around them: the connections, traffic flows, access policies, dependencies, and inputs/outputs.
With this understanding, you can locate and understand risk, model policy, create mitigation strategies, set up compensating controls, and verify that those policies, strategies, and controls are mitigating risk as you intend.
In this post, I explore the key characteristics and requirements of tools that provide the application mapping you need to be successful in securing today's ever-changing, increasingly complex, and interconnected application environments.
Visibility alone won't do it
Walk the floor of any tech conference and I can guarantee that visibility will be the first word you mark on your buzzword bingo card. It appears at just about every booth. But just because a solution has visibility doesn’t mean that you’re seeing what you need to accomplish your goals. Especially when it comes to security.
Data visualization can come in many forms, like bar charts, line charts, spark lines, chord graphs, sunburst partitions, and even lists. While these outputs may be fine for some use cases, they don’t provide the perspective to help you understand and segment your environment with a goal to improve security.
Visibility for segmentation needs to be built with that goal in mind. It needs to:
Provide perspective of the environment to help you understand the context of your applications' components and the relationships between them.
Provide a view into the current and past state so you can plan and mitigate future risk.
Be a real-time map of your application environment, with insights and data to give you the full picture.
Visibility that provides the wrong perspective – like just a view of the network – is insufficient and ineffective for planning, defining, and enforcing security policy for your applications. It would be like trying to drive from SF to NYC with just a printed relief map...
Know what's happening
There was a time when you could take a snapshot of the data center and know with high confidence that snapshot would be valid for weeks, if not months or even years. Things didn’t change that much. Teams would use tools like Visio to map out environments, and some would even go as far as to print out huge poster-size versions of their data center to hang on the wall and point at during planning discussions. Picture something like this:
Those days are long gone thanks to virtualization, cloud, and containers, which have all made it easier to spin up, tear down, and move workloads – creating highly dynamic environments. Faster than you can click on the Visio icon and open that .vsd, your view of the data center may already be outdated and obsolete, at least in part. This rapid pace of change makes it hard for security teams to understand the current landscape, properly assess risk, plan policy, and enforce controls.
To stay in sync with these highly dynamic environments, you need a map that is real time, constantly watching and adjusting to keep up with changes.
This becomes important to understanding the current state of the environment and understanding evolving risk – both essential inputs into effective mitigation efforts.
Know what happened in the past
Real-time isn’t enough. You also have to be able to look back in time at what happened. The famous quote from George Santayana, "Those who cannot remember the past are condemned to repeat it," should certainly be considered by security teams. Looking back at historical data, essentially considering past experiences, becomes the next key input into security planning to ensure that you’re not only considering what is happening now, but also what has happened in the past – both predictors for what could happen in the future.
How will your security improve with historical data?
Consider dynamic workloads like VMs, cloud servers, and containers that can disappear as quickly as they pop into existence. How do you know what they did? What traffic was sent? How did they impact the environment? What risk was introduced while they existed? Historical data is another key dimension to help you understand how risk might manifest in your environment in the future.
Seeing how application dependencies changed over time can be another important perspective. When did my critical application become so highly connected? What exactly is connecting to it? Are all of those connections really necessary? How concerned should I be? Historical data allows you to look back to see what caused current risks and ask questions about how they came to be.
What if two workloads are currently at the same risk, but when looking at the historical view, one of them is increasing over time and the other is decreasing over that same period? What factors caused the divergence? With historical data, you not only see that risk changed, but how it changed over time.
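The divergence idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any vendor's implementation: it assumes risk scores are recorded per workload as a time series, and the workload names and score values are made up for the example.

```python
# Minimal sketch: two workloads with the same current risk score can have
# very different histories. All names and scores here are hypothetical.

def risk_trend(scores):
    """Average change in risk per interval across a series of scores."""
    if len(scores) < 2:
        return 0.0
    return (scores[-1] - scores[0]) / (len(scores) - 1)

# Both workloads end at the same current risk (60), but their historical
# trajectories diverge: one has been rising, the other falling.
history = {
    "web-frontend": [20, 30, 45, 60],   # risk increasing over time
    "billing-db":   [90, 80, 70, 60],   # risk decreasing over time
}

for workload, scores in history.items():
    direction = "increasing" if risk_trend(scores) > 0 else "decreasing"
    print(f"{workload}: current risk {scores[-1]}, trend {direction}")
```

A snapshot-only view would rate both workloads identically; the trend over the historical window is what separates them.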
Don't forget — it's about the applications
The final component reminds us why we’re here. It’s the applications we’re here to secure, so we should be looking at things in the context of the applications – not the network. Application teams don’t want to look at and try to make sense of the network (IP addresses, ports, VLANs). They understand applications, how they work, how they should work, and, ultimately, they probably have good perspective on how those applications should be protected.
Seeing the environment in the context of applications is essential.
An application-centric view helps you to see those applications and their components in the context of the roles they play in the greater environment. This is the only way that you can truly understand risk and create policy to mitigate that risk.
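To make the contrast concrete, here is a minimal sketch of what "application context" means in practice: raw flows arrive as IP addresses and ports, and an inventory of workload labels translates them into application terms. The inventory, IP addresses, and labels are all hypothetical illustrations, not a real product's data model.

```python
# Minimal sketch: translating raw network flows into application context.
# The inventory, addresses, and labels below are hypothetical examples.

# Map each workload's IP address to application-level labels.
inventory = {
    "10.0.1.5":  {"app": "ordering", "role": "web"},
    "10.0.2.9":  {"app": "ordering", "role": "processing"},
    "10.0.3.12": {"app": "ordering", "role": "database"},
}

def describe_flow(src_ip, dst_ip, dst_port):
    """Render a raw network flow in application terms."""
    src = inventory.get(src_ip, {"app": "unknown", "role": "unknown"})
    dst = inventory.get(dst_ip, {"app": "unknown", "role": "unknown"})
    return (f"{src['app']}/{src['role']} -> "
            f"{dst['app']}/{dst['role']} (port {dst_port})")

print(describe_flow("10.0.1.5", "10.0.3.12", 3306))
```

The same flow that reads as an opaque pair of IPs on the network becomes "web tier talking straight to the database tier" in application terms, which is the form an application team can actually reason about.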
An application dependency map brings it all together
With an application dependency map, you can now see things in the context of your applications and their dependencies and relationships, while abstracting away the complexity and details of the environment. No longer do you have to think about the network (IP addresses and ports), platform (bare-metal, VM, container), or infrastructure (private data center or cloud). You can focus on the applications, how they work, how they depend on each other, how they should work together as a business process, and how to best protect them.
Seeing an application dependency map gives you a view of how things are connected; now you have context for what is connecting and can ask questions. Why is that web server connecting directly to that database?
You can also see how risk in one application might impact other applications, like single points of failure that might indirectly cause cascading outages in other critical applications. With this view, you can truly understand risk across the environment and think about creating the right policy to enforce how things should work.
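One way to picture how a dependency map surfaces questions like "why is that web server connecting directly to that database?" is to compare observed connections against a model of which connections should exist. This is a simplified, hypothetical sketch; the roles, flows, and allowed pairs are invented for illustration.

```python
# Minimal sketch: checking observed role-to-role flows from a dependency
# map against the connections the application is *supposed* to make.
# All roles and flows are hypothetical.

# Intended dependencies: web talks to processing, processing to database.
allowed = {
    ("web", "processing"),
    ("processing", "database"),
}

# Role-to-role connections actually observed in the environment.
observed = [
    ("web", "processing"),
    ("processing", "database"),
    ("web", "database"),   # unexpected: web talking straight to the DB
]

violations = [flow for flow in observed if flow not in allowed]
for src, dst in violations:
    print(f"Unexpected connection: {src} -> {dst}")
```

Each flagged connection is exactly the kind of prompt the map gives you: a concrete, application-level question to investigate before writing enforcement policy.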