Secure Beyond Breach
Chapter 3


The Green Pill of Metadata

In this chapter:

  • Why good metadata is key to understanding and protecting your environment
  • The three essential criteria to get high-quality metadata
  • How micro-segmentation helps you meet each criterion

Micro-segmentation is all about preventing lateral movement throughout your data center and cloud environment. Ultimately, that’s how you protect your endpoints: by controlling the environment so that a breach of one system cannot spread to others.

Consider the metaphor of a submarine. If your perimeter firewall is the pressure hull and your internal network firewalls are the bulkheads, micro-segmentation lets you put a watertight seal around every single person, compartment, and object on your vessel.



Micro-segmentation gives you the power to apply tailored security policies to every server in your data center: your ordering servers can connect to your processing servers, but your payroll servers shouldn’t talk to either of them. To craft and enforce a policy like this, you need to know which servers belong to which of those applications. This brings us into the world of metadata.
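Before we get there, here is a minimal sketch in Python of what such a label-based, default-deny policy might look like. The application labels and the is_allowed helper are illustrative assumptions, not any particular product’s policy model.

    # A hypothetical default-deny policy over application labels.
    # Each entry allows traffic from one application to another.
    ALLOWED_FLOWS = {
        ("ordering", "processing"),  # ordering may reach processing
    }

    def is_allowed(src_app: str, dst_app: str) -> bool:
        """A flow is permitted only if it is explicitly listed."""
        return (src_app, dst_app) in ALLOWED_FLOWS

    # payroll appears in no rule, so it can reach neither application.
    assert is_allowed("ordering", "processing")
    assert not is_allowed("payroll", "processing")
    assert not is_allowed("payroll", "ordering")

Even this tiny policy is unenforceable unless you know which application each server belongs to.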

What Is Metadata?

Metadata is the information about your servers that you use to make security (and other important) decisions. In a typical enterprise, metadata might include things like: what application is running on each server, what role or function the application performs, where the application is located, and whether the application is used for development or production.
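To make that concrete, the same fields can be written down as a simple record. This is only a sketch in Python; the field names and example values are assumptions, not a standard schema.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        hostname: str
        application: str  # what application runs on this server
        role: str         # the role or function it performs
        location: str     # where the workload is located
        environment: str  # "development" or "production"

    # Two hypothetical entries from a metadata catalog.
    catalog = [
        Workload("ord-web-01", "ordering", "web", "dc-east", "production"),
        Workload("pay-db-01", "payroll", "database", "dc-west", "production"),
    ]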

The metadata about your workloads might be stored in a configuration management database (CMDB), a repository built for this purpose. Or it might be in a spreadsheet. Maybe the metadata isn’t written down anywhere, but your servers follow a naming convention that helps identify it. In a small organization, you might even know all the metadata by heart.

The choice of storage depends entirely on the organization’s size, budget, and capabilities. Large organizations generally need a CMDB product of some kind, but deploying and maintaining one is a significant effort on which they may spend millions of dollars. A CMDB is not a prerequisite for every organization, though. If you are a smaller organization with just a few hundred workloads, keeping your catalog in an Excel spreadsheet can work, as long as the catalog is well maintained. No matter where you keep it, storing and maintaining up-to-date metadata is key to understanding and protecting your environment.
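If your catalog does live in a spreadsheet, even a few lines of code can put it to work elsewhere. Here is a minimal sketch, assuming a hypothetical workloads.csv exported from that spreadsheet with the columns described above.

    import csv

    # Load the catalog from a CSV export of the spreadsheet. The file
    # name and column names are assumptions for illustration.
    with open("workloads.csv", newline="") as f:
        catalog = list(csv.DictReader(f))

    # Index by hostname so other tools, such as a policy engine,
    # can look up a workload's application and environment.
    by_host = {row["hostname"]: row for row in catalog}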

 

Uh oh. That could be a problem.

“Well, I guess that’s the end of that! If detailed metadata is needed for micro-segmentation, then I should probably quit now.”

If that was your first reaction, you’re not alone. If you took a poll of IT managers and asked how many could tell you exactly what every single workload does, you’d get a lot of blank stares.

Even among enterprises with actively managed CMDBs, the metadata is rarely complete or correct; accuracy somewhere between 50 and 80 percent is typical. Maybe you’ve promoted a server from development to production and forgotten to update the catalog. Or an application owner decided to change what runs on a workload and didn’t tell anyone. Chasing down incorrect metadata is the bane of every IT operations team.

Why Is It So Hard to Get the Metadata Right?

A better question might be: why would you expect it to be right?

Change happens. The MAC process (Move, Add, Change) is fundamental to every IT organization. With a lot of stakeholders and many moving pieces, steps are often missed. But the biggest reason metadata is so often wrong is a simple one: most organizations have no reason for metadata to be correct.

What happens if a server is misclassified? Under normal circumstances, maybe nothing happens. In the event of an outage, you might spend some time on the wrong path because your understanding of the impact is incorrect; this type of detour is generally written off as overhead cost. Nobody ever got fired for forgetting to update the CMDB.

To get high-quality metadata, you need to meet three essential criteria:

  • Incentive: There needs to be strong motivation to keep your metadata up to date.
  • Consequence: Something bad needs to happen if your metadata is incorrect.
  • Process: The steps for populating and maintaining your metadata need to be ingrained into every one of your MAC workflows.

Let’s talk about how micro-segmentation can help with all three of these criteria.

CRITERION 1: Incentive

Consider the question posed earlier: why would you expect your metadata to be correct? Every piece of metadata starts with a person. It can be an application owner, a service manager, or someone who unboxes servers and puts them in racks. The information about your workloads needs to get from that person’s head into your catalog.

What incentive do the people in your organization have for getting that information where it needs to go? What would make an application owner want to update the CMDB?

The first step toward micro-segmentation is understanding your environment. You can’t begin to talk about security policies until you know what your workloads are doing.

An entire chapter of this book (chapter 5) is dedicated to the process called application dependency mapping, which helps you learn enough about your workloads to participate in the micro-segmentation process.

Having good metadata will give you helpful insights into how your applications work, and you will probably identify connections that you didn’t even know existed. Do you have an old process that you thought was decommissioned but is still running somewhere? Are you making accidental cross-connections between your development and production environments? How about forgotten legacy applications? These are all common sources of risk, and none of them can hide from good metadata.
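As a sketch of how metadata surfaces these risks, consider checking observed traffic against the catalog. The hostnames and flow records below are hypothetical; the point is that any connection crossing the development/production boundary, or involving a workload the catalog has never heard of, stands out immediately.

    # Environment of each known hostname, as recorded in the catalog.
    env = {
        "ord-web-01": "production",
        "proc-app-01": "production",
        "dev-test-07": "development",
    }

    # Hypothetical observed flows: (source hostname, destination hostname).
    observed_flows = [
        ("ord-web-01", "proc-app-01"),     # expected production traffic
        ("dev-test-07", "ord-web-01"),     # dev talking to prod: risky
        ("legacy-bill-02", "ord-web-01"),  # not in the catalog at all
    ]

    for src, dst in observed_flows:
        src_env = env.get(src, "unknown")
        dst_env = env.get(dst, "unknown")
        if src_env != dst_env:
            print(f"flagged: {src} ({src_env}) -> {dst} ({dst_env})")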

There are many other benefits to be gained from having high-quality metadata, extending far beyond micro-segmentation. We’ll come back to that later.

CRITERION 2: Consequence

The benefits of segmentation serve as a carrot for organizations to get their metadata in order; now it’s time for the stick. To really get your metadata in shape, there needs to be a penalty for getting it wrong. Sticks are important for driving organizations to invest appropriately and get the process right.

In most organizations, a penalty is already in place, but it’s levied on the wrong party. Operations teams may struggle to respond to outages or compliance teams may have a hard time meeting their reporting obligations, all because nobody is quite sure what each workload is doing.

There needs to be a clear correlation between actions and penalties, and they need to be aligned to the appropriate teams.

In practice, though, those penalties are rarely felt by the application and server owners, who are the only people empowered to clean up the metadata. Realigning the consequences so that they land on those owners is one approach to consider.

Remember that micro-segmentation is for security above all else. The goal of your segmentation project is to reduce risk by preventing unauthorized connections. Before you can claim victory, you need to enforce restrictions that stop those connections from happening in the first place.

Micro-segmentation is data-driven at its core. In a successful segmentation project, your security policy is based on your metadata. But to be successful in the long term, your micro-segmentation program must also be adaptive (i.e., able to respond to changes in your environment). Writing a bunch of static rules isn’t going to cut it.

Have you spotted the consequence yet? If your allowed connectivity is based on your metadata, and your metadata is wrong, then your application won’t be able to make the connections it needs to function. Incorrect metadata leads to a non-working application. To make sure your metadata is always correct, use it in your security policy to ensure that your systems can’t function if something is wrong with it.
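Here is that consequence in miniature, as a sketch continuing the earlier examples (hypothetical labels, default-deny policy): the moment a workload’s metadata is wrong, the policy derived from it denies the connections the application actually needs.

    # Policy derived from application labels, default deny.
    ALLOWED_FLOWS = {("ordering", "processing")}

    def is_allowed(src_app: str, dst_app: str) -> bool:
        return (src_app, dst_app) in ALLOWED_FLOWS

    # The catalog labels ord-web-02 as an ordering server, so its
    # connection to the processing tier is permitted.
    labels = {"ord-web-02": "ordering", "proc-app-01": "processing"}
    assert is_allowed(labels["ord-web-02"], labels["proc-app-01"])

    # If the metadata is wrong (the server is mislabeled as payroll),
    # the same connection is denied and the application visibly breaks,
    # which is exactly the pressure that keeps metadata accurate.
    labels["ord-web-02"] = "payroll"
    assert not is_allowed(labels["ord-web-02"], labels["proc-app-01"])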

 

 
