There’s only one certainty when it comes to ransomware: it can hit any organization, large or small, security-savvy or not. High-profile attacks are on the rise, fueled by turnkey software for launching them, anonymized crypto payments, an increasingly digitized infrastructure, and the increase in remote and hybrid work environments.
Fortunately, a powerful strategy exists to keep ransomware and other malware from spreading through a network: segmentation. With Zero Trust segmentation, bad things may still get in, but they can't spread, they can't do damage, and they can't land organizations in the headlines.
As with any effective security strategy, segmentation begins with visibility. Specifically, visibility based on an assessment of risk.
Here’s how risk-based visibility works to help systems administrators map communications between applications, assess vulnerabilities, and determine how those vulnerabilities could lead to exposure across the entire environment.
The Problem With Open Environments
While security is a top priority for data centers, many pay little attention to the walls between subnets, VLANs and network zones. In other words, even though safeguards protect against breaches, there's often little segmentation to contain incursions that do happen. That's partly by design; open environments let business systems connect easily with each other to exchange data and run day-to-day operations, until they don't.
The problem with these large, open environments is that if malware infects one machine or zone, it can spread to the entire environment in a matter of seconds.
One common attack vector is through an unsuspecting authorized user. In this case, an employee on a laptop at home clicks on a suspicious link. The link launches malware silently in the background that any detection tools in place may not catch. From there, it tries to spread to other assets.
But if no lateral movement is possible through the network, malware simply can't spread. That buys valuable time for detection and other security tools to work. The user or a security professional also gets more time to notice something awry with the infected machine and act before any other assets or data are damaged.
In short, if you buy yourself that critical time window, it can make all the difference for containing ransomware and other malware attacks. You can isolate attacks to one machine to be cleaned instead of later trying to manage dozens, hundreds, or even thousands of compromised machines and the hit to operations or reputation. It all starts with risk-based visibility.
The Components of Risk-Based Visibility
Risk-based visibility means identifying which systems and applications are vulnerable due to excessive or unnecessary communication, or even non-compliant data flows.
That's why ransomware protection from Illumio starts by creating application dependency maps. These maps let system administrators see not just a jumble of IP addresses but a top-level view of application topology. Everything appears neatly organized for easy visibility, along with clearly identified relationships showing how applications communicate with each other and across networks.
From a fine-grained to a high-level view, application dependency maps let administrators examine an entire environment from top to bottom. That includes how individual protocols work across the production environment or how a given set of data flows operates between development and production environments.
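To make the idea concrete, here is a minimal sketch of how raw network flow records can be aggregated into an app-level dependency map. The application names, ports, and flow records are hypothetical, and this is an illustration of the general technique, not Illumio's implementation:

```python
from collections import defaultdict

# Hypothetical flow records observed on the network: (source_app, dest_app, port).
flows = [
    ("web-frontend", "order-api", 443),
    ("order-api", "payments-db", 5432),
    ("order-api", "inventory-api", 8080),
    ("batch-jobs", "payments-db", 5432),
]

def build_dependency_map(flows):
    """Aggregate raw flows into an app-level adjacency map: who talks to whom, on which port."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[src].add((dst, port))
    return dict(deps)

deps = build_dependency_map(flows)
for app, targets in sorted(deps.items()):
    print(app, "->", sorted(targets))
```

Even this toy map immediately shows relationships that a list of IP addresses would hide, such as a batch system quietly talking to the payments database.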
Illumio augments this visibility with vulnerability data. By integrating vulnerability and threat feed data with real-time traffic flows, you get a quantitative risk score for an application or each application workload. The score makes it easy to understand which applications are connecting to vulnerable ports and how much overall risk those vulnerabilities generate. This context is invaluable for reducing risk in your environment: patch based on criticality, or implement segmentation policies as a compensating control.
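One simple way to combine the two data sources, sketched below, is to weight each vulnerable port's severity score by how many distinct peers actually connect to it in observed traffic. The names, ports, and CVSS values are made up, and the scoring formula is an assumption for illustration, not Illumio's actual algorithm:

```python
# Hypothetical vulnerability data: (workload, port) -> CVSS base score.
vulns = {
    ("payments-db", 5432): 9.8,   # critical vulnerability on the database port
    ("order-api", 8080): 5.3,     # medium vulnerability on an internal API port
}

# Hypothetical observed traffic: (source_app, dest_app, port).
flows = [
    ("web-frontend", "order-api", 8080),
    ("order-api", "payments-db", 5432),
    ("batch-jobs", "payments-db", 5432),
]

def workload_risk(workload, vulns, flows):
    """Score = sum over vulnerable ports of CVSS x number of distinct talkers to that port."""
    score = 0.0
    for (wl, port), cvss in vulns.items():
        if wl != workload:
            continue
        talkers = {src for src, dst, p in flows if dst == wl and p == port}
        score += cvss * len(talkers)
    return score

print(workload_risk("payments-db", vulns, flows))  # 19.6: two talkers x CVSS 9.8
```

The point of any such scheme is the same as in the text: a severe vulnerability that nothing connects to is less urgent than one with heavy exposure, and segmentation policy can cut that exposure when patching has to wait.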
Getting the right views to the right people is also critical. That’s because effective risk-based visibility depends on personnel getting the information they need to answer the security and compliance questions relevant to them. That’s made possible by a single source of truth — the map.
Real-time access to the right visualizations lowers operational risk because everybody can agree on precisely what is true. Does someone from the application team need to see the big picture of application topology and data flows? Now they can. Does the network security team need compliance data? They can look at their own views of the same data. Network operations and DevOps team members can also see the information they need in the same picture. And everyone will agree that what they see is indeed how any given application works.
Such a single source of truth improves collaboration. And it spares people the time and tedium inherent in research projects on software that may have been installed half a decade ago so that they can focus on other, higher-value priorities.
All of which goes a long way towards improving security, laying the groundwork for containing ransomware and other malware through segmentation.
Visibility for Compliance
The benefits of risk-based visibility aren't limited to containing malware. Risk-based visibility can also help validate compliance boundaries by enabling teams to identify non-compliant data flows.
For example, comprehensive visibility can reveal an application collecting data from other applications in contradiction with regulatory frameworks such as the Payment Card Industry Data Security Standard (PCI-DSS), SWIFT Customer Security Controls Framework, or the Health Insurance Portability and Accountability Act (HIPAA). That’s critical for any organization operating in regulated industries, where it matters very much what is in and what is out of scope for data collection and processing.
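Conceptually, finding such flows amounts to labeling workloads with compliance zones and flagging any flow that crosses into a regulated scope from outside it. The following sketch uses hypothetical workload names and a made-up PCI zone label to illustrate the check:

```python
# Hypothetical zone labels for workloads.
zone = {
    "web-frontend": "dmz",
    "order-api": "pci",
    "payments-db": "pci",
    "batch-jobs": "corp",
}

# Hypothetical observed flows: (source_app, dest_app).
flows = [
    ("web-frontend", "order-api"),
    ("order-api", "payments-db"),
    ("batch-jobs", "payments-db"),   # out-of-scope system reaching into PCI scope
]

# Flag every flow entering the PCI zone from outside it for review.
violations = [
    (src, dst) for src, dst in flows
    if zone[dst] == "pci" and zone[src] != "pci"
]
print(violations)
```

In practice each flagged flow is then either justified and documented as a controlled ingress point or cut off with a segmentation policy; either way, the boundary becomes explicit instead of assumed.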
Beyond regulatory compliance, most organizations have policies around remote access. For example, since most employees in an organization don't need full administrative access to data centers, they get restricted access. And in cases where they do need full access, they're often required to use a jump host to control communication between remote machines and data center servers. But jump hosts may slow down impatient users, presenting an incentive to bypass them.
With visualization, you can answer critical questions about jump servers, such as: Are people actually using them? Or do you have administrators who believe they’re senior enough to simply bypass that slightly slower jump host and connect to apps directly?
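The underlying check is straightforward once you can see the traffic: any session that reaches a server without passing through a jump host violates the policy. Here is a minimal sketch with hypothetical host names and an assumed flow log format:

```python
# Hypothetical flow log of SSH sessions: (source_host, dest_host).
# Policy: remote users must hop through a jump host before touching servers.
JUMP_HOSTS = {"jump01"}
SERVERS = {"db01", "app01"}

ssh_flows = [
    ("laptop-alice", "jump01"),  # compliant: user connects to the jump host
    ("jump01", "db01"),          # compliant: jump host connects to a server
    ("laptop-bob", "db01"),      # bypass: user goes straight to a server
]

bypasses = [
    (src, dst) for src, dst in ssh_flows
    if dst in SERVERS and src not in JUMP_HOSTS
]
print(bypasses)
```

A visualization tool surfaces the same answer graphically, but the logic is this simple: connections to servers should only ever originate from the jump hosts.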
If you can see that activity, you know what to do to tighten up those security risks.
Getting this kind of critical visibility has often required a painstaking examination of voluminous data-flow tables to determine whether flows fall inside or outside certain boundaries or network ranges. This time-consuming, challenging task has put comprehensive, let alone real-time, visibility out of reach of many organizations. In contrast, visualizations that clearly show borders around compliance areas give you a clear picture of data flows going in and out of endpoints, data centers and servers.
Such visualizations provide significant time savings for internal and external stakeholders and set the stage for simple conversations between managers and auditors about compliance boundaries and controls.
Visibility, the First Step to Security
Visibility is the first step in controlling access to data center and cloud assets, giving you the insights you need to tighten controls between users, applications and servers wherever they reside.
The reality is that ransomware is a problem for everyone. But with the help of risk-based visibility, you can start to implement preventative and responsive strategies to contain it where it can do minimal damage.
To learn more: