"Those who believe quantitative methods are impractical in cybersecurity are not saying so because they know more about cybersecurity but because they know less quantitative methods."
— Douglas Hubbard & Richard Seiersen, How to Measure Anything in Cybersecurity Risk
"My favorite poem is the one that starts 'Thirty days hath September' because it tells you something."
— Groucho Marx
Coming into RSA 2017, there has rarely been a time in the security industry when so much time, effort, and money have been spent on something so difficult to quantify. Despite more than $75B spent annually on cybersecurity, most Chief Information Security Officers are hard-pressed to offer their management teams and boards of directors a corresponding metric of how that spending makes the organization safer. This is not surprising, as cybersecurity traditionally has a zero-sum dynamic: the attacker must be right only once, while the defender must be perfect.
In the data center, however, things are beginning to shift. While the vast bulk of network security spend and attention is still focused on the perimeter, where the quantification of risk follows that same zero-sum dynamic, the interior of the data center (and the public cloud) offers a different dynamic. Adaptive segmentation technology gives the defender a better way to mitigate and reduce risk by shrinking the attack surface, focusing on a smaller locus of critical assets and the networks that communicate with those assets. We can begin to measure risk (and the value of corresponding investments) by:
- Classifying assets as high-value or low-value (e.g., not every database is the same), and
- Mapping and measuring the application topology and the communications that make up the attack surface.
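As a toy illustration of these two steps, here is a minimal sketch; all workload names, classifications, and port counts are hypothetical, not data from the examples in this post:

```python
# Hypothetical sketch of the two measurement steps: classify assets by
# value, then tally open vs. actively used pathways per asset.
workloads = {
    "billing-db":   {"value": "high", "open_ports": 120, "active_ports": 4},
    "web-frontend": {"value": "low",  "open_ports": 80,  "active_ports": 6},
    "batch-worker": {"value": "low",  "open_ports": 60,  "active_ports": 2},
}

high_value = [name for name, w in workloads.items() if w["value"] == "high"]
open_total = sum(w["open_ports"] for w in workloads.values())
active_total = sum(w["active_ports"] for w in workloads.values())

print(high_value)                # ['billing-db']
print(open_total, active_total)  # 260 12
```

Even in this toy inventory, the gap between open and active pathways is what makes the attack surface measurable.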
Here is an example from a small data center:
Attack surface, in this model, represents the number of compute assets (e.g., workloads or servers) together with the number of open, as well as active, communication pathways used to reach them. In this example, 107 workloads expose almost 250,000 potentially open pathways (ports), even though just over 6,000 are actually in use.
This kind of quantification, and the catalyst for action it provides, is even more acute in a larger data center. Take a look at the same view from the perspective of a large enterprise data center:
There is no human way to govern and control the communications within a large data center if its interior is left wide open behind the perimeter. A smart segmentation strategy locks down all unused communication paths and governs communications among the authorized connections. The result is more than a 97 percent reduction in attack surface and fewer false positives for security operations staff to chase.
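The "over 97 percent" figure follows directly from the pathway counts in the small-data-center example above; here is a minimal sketch of the arithmetic, using the rounded numbers as quoted:

```python
# Attack-surface reduction from locking down unused pathways,
# using the rounded figures quoted in the example above.
open_pathways = 250_000   # open ports reachable before segmentation
active_pathways = 6_000   # pathways actually used by applications

# Segmentation closes everything except the active pathways.
reduction = 1 - active_pathways / open_pathways
print(f"Attack surface reduction: {reduction:.1%}")  # Attack surface reduction: 97.6%
```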
"The secret of getting ahead is getting started. The secret of getting started is breaking your complex overwhelming tasks into small manageable tasks, and starting on the first one."
Smart adaptive segmentation helps refine a large task (protecting all of an enterprise's computing assets) into smaller, more manageable ones (building and executing a roadmap of asset protection, from most valuable to least). It starts by providing the visibility required to understand which assets are operating in the data center and how they communicate.
This new foundation for cybersecurity not only gives security and operations teams an opportunity to make progress against the lateral spread of attacks, but also shifts the strategy in favor of the defender. By locking down all critical computing assets through security segmentation, security teams create a network of smart sensors in their data centers that will spot a bad actor much earlier than an unsegmented data center would. As we move into RSA 2017, there has never been a more opportune time to change the situation on the ground.
[Editor's note: Start your RSA Conference groundwork at Booth 2900 in the North Hall!]