
An Architect's Guide to Deploying Microsegmentation: Managing the Vendor Relationship and Operational Integration

Transitioning from traditional network segmentation (such as firewalls) to microsegmentation requires an orchestrated effort led by architects or project managers. Teams that understand the true benefits in advance, and map out the clearest path to realizing them, set themselves up for success throughout the deployment process.

This series has unpacked the many considerations involved. In this fifth and final part, I’ll discuss how best to manage your vendor relationship and maintain operational integration.

Managing the vendor relationship

Your chosen vendor wants to see your microsegmentation project succeed just as much as you do. On the vendor side, we regularly communicate internally about each deployment to make sure that features, resources, and code are all available when they are needed.

Treat your vendor like a strategic partner to get our best performance. When we know not only “what you need”, but “why you need it” and “why you need it by when”, it is much easier to mobilize our extended team. If you hold your vendor at arm’s length, and we can only see the very next step in the project plan, we often cannot see the big picture or bring our expertise and lessons learned until it is too late. Any project may be delayed, required to finish early, or changed in any number of other ways. Communicate big changes early, and your vendor will be in the best position to absorb them and help you adjust the plan and its execution.

Checklist

There are several key partnerships that need to form in the early weeks of the project:

  1. Solutions architect, professional services engineer, project manager, tech lead. This is the core technical working team. Together, they will do most of the technical work needed to see the project succeed. It is important that their dialog is free, open, and respectful.
  2. Customer success architect, director, project architect. This is the strategic working team. They need to know what is happening technically and be looking ahead of the project team to remove or minimize obstacles. This relationship needs to be comfortable enough that both sides can speak transparently about problems and challenges. This is the first “escalation” point for both sides if something is not going well.
  3. Account manager, vendor VPs, and executive sponsor. This is the business-level working team that is responsible for results. This team handles any escalations between the companies that may arise. Each side carries execution risk that should be understood and expressed at this level. This team should also discuss more than the project at hand, including the roadmap and additional opportunities and leverage points for microsegmentation.

The executive sponsor who ensures each of these teams functions well will rarely be surprised on the downside and will find that most of the inevitable issues are handled without coming to executive attention except as status reports. No project is “self-managing,” but when these three levels of relationship are well tended, projects tend to run smoothly.

Managing operational integration

Now, let’s switch gears. Most microsegmentation solutions have a few components: at a minimum, a central policy engine and a host-based agent. The complexity of a microsegmentation deployment comes from the fact that these two components touch so many other things in the enterprise environment. Several “best practices” will assist in operational integration with existing systems.

Build a QA or pre-production test environment

While both the internal and vendor teams will naturally focus on the PROD instances of the solution, ensure that the team sets up a small QA version of the microsegmentation solution in a non-production environment. This platform will serve several purposes. Early on, it will be a place that internal developers and automation tooling teams can test and develop code. Operations teams can test logging integrations and event handling. Internal training classes can use the system for familiarization training.

After the deployment is complete, this capability should be retained. Ensure that this pre-prod system manages one of each of your main OS images. In this way, new vendor code can be tested in the non-prod environment against the full set of PROD operating system images before rolling new releases into production. Ideally, your vendor deployment team can stand this system up as a single lightweight VM.

Set up and test logging/event alerting before production deployment

Unsurprisingly, OPS teams have the highest confidence when operational integration is finished before production workloads are paired. It takes time and effort to stream logs, parse them, raise alerts, and build dashboards.

This work, however, provides full visibility into the health of the policy engine, the agents, and the underlying systems. It is much easier for everyone to work in sensitive production environments knowing that all the necessary instrumentation is in place. Expect your vendor’s professional services engineers to bring recommendations on key log messages and to suggest alerts that have been popular with other customers.

Three different viewpoints need to be captured in the log analysis/event handling mechanism:

  1. Security. The security team will be most focused on the firewall logs and the anti-tampering mechanisms of the agent. They are always interested in policy and policy violations.
  2. OPS. The OPS team will be most focused on workload and policy engine health, and will want to know how to correlate system events with other data center events.
  3. Dashboard. Management or NOC administrators will often need a consolidated view of the microsegmentation deployment that contains highlights and the ability to drill down.

When each of these concerns is reflected in the log/event/alert handling mechanism, confidence builds across the organization as many diverse teams see that the project provides full integration that follows existing practice.
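As one concrete illustration of the OPS viewpoint above, here is a minimal sketch that tails a hypothetical JSON-lines agent log and raises an alert when a host’s heartbeats stop arriving. The log path, message format, and alerting webhook are all assumptions for illustration; your vendor’s professional services engineers will supply the real message catalog and recommended alerts.

```python
# Minimal sketch: watch a hypothetical JSON-lines agent log and alert when a
# host's heartbeat goes stale. The log format, field names, and webhook URL
# are illustrative assumptions, not any vendor's actual schema.
import json
import time
import urllib.request

LOG_PATH = "/var/log/microseg/agent-events.jsonl"   # hypothetical path
ALERT_WEBHOOK = "https://ops.example.com/alerts"    # hypothetical endpoint
HEARTBEAT_TIMEOUT = 300                             # seconds without a heartbeat

last_seen = {}  # hostname -> epoch seconds of last heartbeat

def raise_alert(hostname: str, silent_for: float) -> None:
    """POST a simple JSON alert so the NOC dashboard can pick it up."""
    body = json.dumps({
        "source": "microseg-agent-monitor",
        "severity": "warning",
        "message": f"No heartbeat from {hostname} for {silent_for:.0f}s",
    }).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

with open(LOG_PATH) as log:
    log.seek(0, 2)  # start at end of file, like `tail -f`
    while True:
        line = log.readline()
        if line:
            event = json.loads(line)
            if event.get("type") == "heartbeat":
                last_seen[event["hostname"]] = time.time()
        else:
            time.sleep(1)
        now = time.time()
        for host, seen in list(last_seen.items()):
            if now - seen > HEARTBEAT_TIMEOUT:
                raise_alert(host, now - seen)
                del last_seen[host]  # avoid re-alerting on every pass
```

In practice, this role is usually played by an existing SIEM or log pipeline; the point is that the three viewpoints each get the events they care about.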

Invest in automated workflows

A microsegmentation deployment will provide many opportunities to automate security processes that have long been purely manual efforts. In addition, the microsegmentation labeling effort will review and enhance existing metadata sources and combine them in novel ways. The resulting metadata is itself valuable and can be preserved for use by other systems and automation tasks. With a modest effort, it is common for enterprises to emerge from a successful microsegmentation deployment with better metadata than they started with. This effort pays huge dividends in the ongoing operation and expansion of the initial deployment.

Agent installation

Deploying a microsegmentation agent onto hundreds or thousands of systems will involve some form of automation. In some cases this will be existing tooling; in others it will be built from scratch. In many cases, though, the desire will be to integrate agent installation with automated build processes. Whether this is Chef, Puppet, Ansible, Salt, or another framework, there is an opportunity to build security into the standard automated lifecycle of the enterprise.

Most enterprise data centers have a mix of full automation using orchestration frameworks and legacy environments without these tools. Taking the time to work through integration with the orchestration team where possible sets the project up for success. Older environments that will not be getting the orchestration framework can be handled separately with custom scripting.
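For those older environments, the custom scripting can be as simple as fanning the vendor’s installer out over SSH. The sketch below assumes a hypothetical installer script and activation key; where an orchestration framework is available, a Chef recipe or Ansible playbook would replace this entirely.

```python
# Minimal sketch: push a microsegmentation agent to legacy hosts over SSH.
# The installer filename, remote path, and activation key are hypothetical;
# substitute your vendor's actual package and pairing procedure.
import subprocess
import sys

INSTALLER = "microseg-agent-install.sh"      # hypothetical vendor installer
ACTIVATION_KEY = "REPLACE-WITH-PAIRING-KEY"  # issued by the policy engine
HOSTS_FILE = "legacy-hosts.txt"              # one hostname per line

def install(host: str) -> bool:
    """Copy the installer to the host and run it with the activation key."""
    steps = [
        ["scp", INSTALLER, f"{host}:/tmp/{INSTALLER}"],
        ["ssh", host, f"sudo sh /tmp/{INSTALLER} --activation-key {ACTIVATION_KEY}"],
    ]
    for cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"[FAIL] {host}: {result.stderr.strip()}", file=sys.stderr)
            return False
    print(f"[ OK ] {host}")
    return True

with open(HOSTS_FILE) as f:
    hosts = [line.strip() for line in f if line.strip()]

failures = [h for h in hosts if not install(h)]
print(f"Installed on {len(hosts) - len(failures)}/{len(hosts)} hosts")
```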

Policy engine installation

Some of our customers also fold the creation of the policy engine into their orchestration tooling. If policy engine instantiation has been automated, recovery from a server crash can happen almost as quickly as the automation can build a new policy engine. Organizations with a strong DevOps practice will want to consider this.
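As a hedged sketch only: the fragment below probes a hypothetical policy engine health endpoint and, on failure, triggers the orchestration job that rebuilds it. The URL and playbook name are placeholders for whatever your vendor and orchestration team actually provide.

```python
# Minimal sketch: probe a hypothetical policy engine health endpoint and, if
# it is unreachable, kick off the orchestration job that rebuilds it. The
# URL and the playbook name are illustrative placeholders.
import subprocess
import urllib.request

HEALTH_URL = "https://policy-engine.example.com/api/health"      # hypothetical
REBUILD_CMD = ["ansible-playbook", "rebuild-policy-engine.yml"]  # hypothetical

def engine_healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

if not engine_healthy():
    print("Policy engine unreachable; triggering automated rebuild")
    subprocess.run(REBUILD_CMD, check=True)
```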

Policy engine database backup

All microsegmentation policy engines have some kind of database behind them. If this database is corrupted or unavailable, the solution will likely not work at all or will produce undesirable results. Ensure that the OPS team has automated the necessary backups and is skilled in restoration and recovery according to your vendor’s procedures.
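As an illustration only, the sketch below wraps a hypothetical vendor backup command, keeps a rolling window of dumps, and fails loudly so a scheduler such as cron can surface errors. Substitute your vendor’s documented backup and restore procedure.

```python
# Minimal sketch: nightly policy engine database backup with rotation.
# The backup command is a placeholder; use your vendor's documented tooling.
import datetime
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/policy-engine")  # hypothetical
KEEP = 14  # number of daily dumps to retain

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.date.today().isoformat()
dump = BACKUP_DIR / f"policy-db-{stamp}.dump"

# Hypothetical vendor CLI; a real deployment would use the vendor's
# documented export command or a native database dump tool.
subprocess.run(
    ["policy-engine-ctl", "db-export", "--output", str(dump)],
    check=True,  # raise on failure so cron/scheduler surfaces the error
)

# Rotate: keep only the newest KEEP dumps.
dumps = sorted(BACKUP_DIR.glob("policy-db-*.dump"))
for old in dumps[:-KEEP]:
    old.unlink()

print(f"Backup written to {dump}; {min(len(dumps), KEEP)} retained")
```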

Label assignment

The initial assignment of labels to workloads is commonly done through some kind of bulk upload into the policy engine. This produces correct labels for the initial systems in their initial state. Over time, labels will change: new systems will be added, and some will go away. The more this workflow is automated, the easier it will be for all involved. This means codifying the label assignment in internal design documentation and deciding how to store, update, and retrieve it.
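The sketch below illustrates such a bulk upload, assuming a CSV export from an internal source of truth and a hypothetical REST endpoint on the policy engine; real products each provide their own import API or tooling.

```python
# Minimal sketch: bulk-assign labels from a CSV to workloads via a
# hypothetical policy engine REST API. Column names, endpoint, and token
# are assumptions; consult your vendor's actual import interface.
import csv
import json
import urllib.request

API_URL = "https://policy-engine.example.com/api/workloads/labels"  # hypothetical
API_TOKEN = "REPLACE-WITH-API-TOKEN"

with open("workload-labels.csv") as f:   # assumed columns: hostname,app,env,role
    for row in csv.DictReader(f):
        body = json.dumps({
            "hostname": row["hostname"],
            "labels": {"app": row["app"], "env": row["env"], "role": row["role"]},
        }).encode()
        req = urllib.request.Request(
            API_URL,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {API_TOKEN}",
            },
            method="PUT",
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{row['hostname']}: {resp.status}")
```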

Your microsegmentation solution will always use labels, but these labels may be best maintained through centralized metadata management. Your DevOps team will likely have a strong opinion about metadata management, and it is wise to include their voice.

Metadata management

A microsegmentation deployment builds security policy according to metadata assignments. This means that over time, your microsegmentation solution will hold a set of labels and other metadata that describes how things should interact. These labels usually aren’t custom-made for your vendor’s tool; they are reused from an existing source of truth.

This provides an automation opportunity. A good microsegmentation solution will always recompute policy when labels change. So, if the metadata is maintained outside your microsegmentation solution, this separation of duties can be leveraged for automation. When the microsegmentation solution references an external “source of truth”, any metadata changes could notify your policy engine programmatically, and the rules would automatically update.
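To make that concrete, here is a minimal sketch of a webhook receiver that accepts a change notification from an external source of truth (a CMDB, for instance) and forwards the updated labels to the same hypothetical policy engine API used in the label-assignment sketch above. The payload shape and endpoints are assumptions, not any vendor’s actual interface.

```python
# Minimal sketch: receive a metadata-change webhook from an external source
# of truth and forward updated labels to a hypothetical policy engine API,
# so policy recomputes automatically. Payload shape and endpoint are assumed.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API_URL = "https://policy-engine.example.com/api/workloads/labels"  # hypothetical

class MetadataWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        change = json.loads(self.rfile.read(length))
        # Assumed payload shape: {"hostname": ..., "labels": {...}}
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(change).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req, timeout=10)
        self.send_response(204)  # acknowledge so the source of truth doesn't retry
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MetadataWebhook).serve_forever()
```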

With microsegmentation, getting smarter about metadata management is the same as getting smarter about policy and policy management. Time spent thinking about where the metadata used to make labels is stored, and how it is updated, retrieved, and fed to a policy engine, is always a fruitful exercise. Conversely, information from the policy engine may be useful for updating existing CMDB systems. A microsegmentation deployment provides an excellent reason to consider how metadata is used and leveraged across the organization, and it may provide the impetus to automate those improvements.

Bringing it all home

A successful microsegmentation deployment will improve the internal segmentation model, policy dialog, and level of security automation. Leading the team to that destination will involve new learnings and new opportunities. Microsegmentation will alter portions of the existing operating model and will be best served by a cross-functional deployment team.

As a leader, your input will be needed at several key junctures. By insisting on having the right conversations around metadata and policy development, you have the opportunity to make a lasting difference in the speed and agility of the business. You really can have fine-grained control and fast automation at the same time. I hope to hear about your success in deploying, operationalizing, and running your own microsegmentation deployment.

For a more in-depth read on everything you need to know to implement a microsegmentation strategy from start to finish, be sure to check out the eBook, Secure Beyond Breach: A Practical Guide to Building a Defense-in-Depth Cybersecurity Strategy Through MicroSegmentation.
