Secure Beyond Breach
In this chapter:
- How to design for operationalization and plan for sustainment
- Opportunities for automation
- Why you should create a roadmap with your micro-segmentation vendor
“We’re Done Now, Right?”
So now you’re deployed. You have a visible topology of workload and application communication, and may have configured segmented protection based on individual workloads, applications, asset groups, or environments. You’re connected to your enterprise – the agent required is in your “Golden Image” or part of an automated deployment package so that all new workloads are plotted on the map once deployed and automatically labeled and protected with your adaptive platform.
Your SIEM is one of the most important tools in your security arsenal because it receives the notifications of policy violations you configured with your micro-segmentation capabilities. You may even have configured the ability to make operational health decisions with the use of vulnerability data overlaid onto your application dependency map. You are now in an advanced state of deployment and have improved your security posture by massively decreasing your attack surface.
Success, finally! You’re in an operational mode and simply need to sustain for two to five years.
But what does “sustainment” mean for a segmented environment? What are the parts of the process that need to be considered? What processes must be built, what resources do you need on the task, and how does sustainment actually work? The model for sustainment does not begin after a deployment is complete, but is designed from the decision point of implementing micro-segmentation. If you find that you design, deploy, and then create a sustainment model, you’re simply doing it wrong.
No modern machine operates forever without some care and attention or an efficient operational model. It would be a wonderful thing to be able to spend money on a solution or make an investment, put it on autopilot for the next decade, and never tend to it. But that’s not what happens. We tune, we check tires, we reallocate assets. In real terms, we seek the input of numerous parties to keep the machine going, we set up workflows and processes that are run manually or automatically to assess health and operation, and we enable the business by making things as easy as possible right from the get-go. That preventive approach means designing a healthy and efficient operational model before your deployment and making small but smart investments in time and effort.
How It Works
Although host-based segmentation can be seen as a way of instrumenting a host firewall for every workload owned by the enterprise, the job is not nearly as onerous as a traditional firewall management operation.
Rules are automatically written based on natural language labels, visibility of flows, and already established policies that reside within your segmentation software. The rules at the workload adapt based on the higher-level, label-based policy that was written.
When a new workload appears as a result of the agent first communicating with the software, it is labeled and inherits the policy associated with those labels. Any attempted connections that are not part of a policy will appear within a blocked traffic report and will be reported to the SIEM. This way, if new flows and connections are required, they can be identified easily and allowed with a few clicks.
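The inheritance model above can be sketched in a few lines. This is a minimal illustration, not a vendor API: the `Rule` and `Workload` types, the label names, and the `POLICY` list are all hypothetical stand-ins for the label-based policy your segmentation software stores.

```python
# Hypothetical sketch of label-based policy inheritance: a workload carries
# labels, and any rule whose labels match both endpoints allows the flow.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    src_label: str  # label of workloads allowed to initiate the connection
    dst_label: str  # label of workloads receiving the connection
    port: int

@dataclass
class Workload:
    name: str
    labels: set = field(default_factory=set)

# Higher-level, label-based policy written once by the security team.
POLICY = [
    Rule("web", "app", 8080),
    Rule("app", "db", 5432),
]

def is_allowed(src: Workload, dst: Workload, port: int) -> bool:
    """A connection is allowed if any rule's labels match both endpoints."""
    return any(
        r.src_label in src.labels and r.dst_label in dst.labels and r.port == port
        for r in POLICY
    )

# A new workload inherits policy simply by carrying the right labels.
web = Workload("web-07", {"web"})
db = Workload("db-02", {"db"})

blocked = []  # in practice, these events would be forwarded to the SIEM
if not is_allowed(web, db, 5432):
    blocked.append((web.name, db.name, 5432))

print(blocked)  # the web tier may not reach the database directly
```

The point of the sketch is that no per-workload rule was written: adding `web-07` with the `web` label is enough for it to inherit the policy, and anything outside the policy lands in the blocked-traffic report.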
Sustainment from Within the Security Operations Center
Within smaller segmented environments, sustaining a deployment can be as simple as having a few subject matter experts trained on the software within a security operations team. Most often, the members of the firewall team who are responsible for writing access list entries on traditional firewalls are the ones who will own the software.
In an automated environment, sustaining a deployment would be as simple as monitoring the SIEM for new blocked traffic and validating it, or responding to the needs of the business by enabling policy when it doesn’t exist. No new headcount or hiring of team members would be expected, as the segmented environment is simply managed by another tool used by the team. Remember, it is a management rather than a monitoring interface. Those responsible for managing the tool need only interact with it when changes are to be made that are not instrumented via automation. The monitoring is done using existing tools, and dealt with in the same way as a traditional firewall deployment.
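One way to picture that SIEM-driven workflow is a simple triage pass over blocked-flow events. This is an assumption-laden sketch: `fetch_blocked_events()` is a placeholder for whatever query your SIEM exposes (for example, a saved search over segmentation denials), and the event fields and threshold are invented for illustration.

```python
# Hedged sketch: triaging blocked-flow events pulled from a SIEM.
from collections import Counter

def fetch_blocked_events():
    # Placeholder for a real SIEM query; returns (src, dst, port) tuples.
    return [
        ("app-01", "db-02", 5432),
        ("app-01", "db-02", 5432),
        ("scanner-x", "db-02", 22),
    ]

def triage(events, repeat_threshold=2):
    """Split repeated flows (probably a legitimate need missing a rule)
    from one-off connections that deserve a closer look."""
    counts = Counter(events)
    review = [e for e, n in counts.items() if n >= repeat_threshold]
    investigate = [e for e, n in counts.items() if n < repeat_threshold]
    return review, investigate

review, investigate = triage(fetch_blocked_events())
print("candidate policy additions:", review)
print("possible probes:", investigate)
```

The "few clicks" in practice are the validation step: a repeated, business-driven flow becomes a new label-based rule, while the one-offs stay in the incident queue.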
In larger deployments, the load is a little heavier but spread across security and operational teams. The rate of change and organizational complexity will dictate needs, but we can expect roughly one full-time administrator per 10,000 to 20,000 workloads managed.
Automated reporting becomes extremely important because the number of flows quickly becomes unmanageable for manual assessment and intervention. Application owners are sometimes given the ability to manage and maintain their own policy definition in larger implementations, further spreading the load and ensuring security design is built into deployment rather than externally mandated and overlaid. Who better to decide which flows are relevant than those who own the application in question?
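Delegation to application owners usually rides on automated reporting that slices observed flows by application. The sketch below is illustrative only: the flow records, label names, and the `OWNER_BY_APP` mapping are made up, standing in for data your segmentation platform would export.

```python
# Illustrative only: summarizing observed flows per application owner so
# each owner reviews just their own slice of the flow data.
from collections import defaultdict

OWNER_BY_APP = {"billing": "team-finance", "crm": "team-sales"}

flows = [
    {"app": "billing", "src": "web-01", "dst": "db-02", "port": 5432},
    {"app": "crm", "src": "web-03", "dst": "db-07", "port": 5432},
    {"app": "billing", "src": "web-01", "dst": "cache-01", "port": 6379},
]

def per_owner_report(flows):
    """Group flows by the team that owns the application label."""
    report = defaultdict(list)
    for f in flows:
        owner = OWNER_BY_APP.get(f["app"], "unassigned")
        report[owner].append((f["src"], f["dst"], f["port"]))
    return dict(report)

print(per_owner_report(flows))
```

Each owner then approves or rejects only the flows they understand best, which is exactly the division of labor the paragraph above argues for.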