In a modern data center or cloud deployment, some servers are more likely to be targeted and face a greater risk of exploitation if compromised. Near the top of this list are core and management services, which connect most compute instances to essential functions: Active Directory, network services like DNS, and monitoring, patching, and logging systems. Many enterprises rely on 20-50 such services. They are the "glue" that keeps all other systems working smoothly together. From a functional standpoint, they are essential.
From a security perspective, core and management services are also the points of greatest risk. If a service connects to every compute instance, the compromise of that central service could be catastrophic. Core and management services must remain open to all the machines they serve, so they present a durable attack vector that is difficult to fully mitigate. But these legitimate connections form only a small part of the risk profile posed by core and management services.
The bigger risk is the possibility of lateral movement between compute instances, endpoints, and other systems. Say every system in the data center has access to a performance monitoring service. Periodically, an agent gathers metrics and sends them back to a central storage and analysis system. These connections must be allowed for the service to function. But are the various endpoints and compute instances able to talk amongst themselves on that port? In most organizations, the port is intended for communication with a single service, but nothing prevents other machines from attempting to use it. Even worse, this is exactly how ransomware generally propagates.
Most ransomware co-opts ports that most operating systems leave open by default. Once one machine is compromised, the malware can quickly contact other systems on those open ports, passing its crippling payload rapidly across a compute environment. Blocking individual machines from communicating with each other eliminates this avenue of spread.
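To make the effect concrete, here is a minimal, hypothetical sketch of how spread over a default-open port plays out. The host names, the reachability functions, and the single-port model are illustrative assumptions, not any product's actual policy engine: in the "flat" network every machine can open the vulnerable port on every other machine, while in the "segmented" network only a designated management host may open that port on workloads.

```python
# Hypothetical simulation: infection spreads to any host whose
# vulnerable port is reachable from an already-infected host.

def spread(hosts, reachable, patient_zero):
    """Return the set of hosts the malware reaches (breadth-first spread)."""
    infected = {patient_zero}
    frontier = [patient_zero]
    while frontier:
        src = frontier.pop()
        for dst in hosts:
            if dst not in infected and reachable(src, dst):
                infected.add(dst)
                frontier.append(dst)
    return infected

hosts = ["mgmt", "web1", "web2", "db1"]  # made-up inventory

# Flat network: every machine can open the port on every other machine.
flat = lambda src, dst: src != dst

# Segmented: only the management host may open the port on workloads.
segmented = lambda src, dst: src == "mgmt" and src != dst

print(len(spread(hosts, flat, "web1")))       # 4 -- every host infected
print(len(spread(hosts, segmented, "web1")))  # 1 -- contained to web1
```

One compromised workload infects the entire flat environment, while the same compromise under segmentation goes nowhere, because no peer accepts its connections.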
Let’s consider the types of core and management services you need to control.
Three Categories of Ports to Close
- Highly connected ports. Most polling or reporting systems connect to the majority of systems in a given environment. Others, like Active Directory, provide broadly used capabilities for system administration and operation. The broad or even total connectivity of these services makes them imperative to protect.
- Peer-to-peer ports. Other ports use protocols designed to offer an even broader “any-to-any” capability. RDP (Remote Desktop Protocol) is designed to let any server communicate with any other server and remotely manipulate its user interface; SMB (Server Message Block) does the same for file transfer. These two protocols appear in almost every reported ransomware attack and are the dominant vectors for rapid spread.
- Well-known ports. Common Linux utilities, network services like DNS and NTP, open source databases, and other staples listen on well-known ports and are ubiquitous in most environments. Often these services have been around for decades, and multiple versions exist throughout a compute complex. Vulnerabilities in these services are also well known, making them a common target for compromise or exploitation. A malicious actor knows that the more popular a service is, the more likely it is to be present and exploitable in any given organization.
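A quick way to see whether peers in your environment expose these ports to each other is a simple TCP reachability probe. This is a minimal sketch; the peer addresses are placeholders for your own inventory, and a real assessment would use your segmentation vendor's traffic map rather than ad hoc scans:

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; True if the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The dominant ransomware vectors: RDP and SMB (well-known port numbers).
lateral_ports = {"RDP": 3389, "SMB": 445}
peers = ["10.0.0.11", "10.0.0.12"]  # illustrative addresses only

for peer in peers:
    for name, port in lateral_ports.items():
        if is_port_reachable(peer, port):
            print(f"review: {peer} accepts {name} on port {port}")
```

Any hit between two ordinary workloads is a lateral-movement path that segmentation should close.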
To secure these ports, you need granular and flexible micro-segmentation capabilities. On the one hand, you want to remove any ability for machines to communicate peer-to-peer; on the other, you need the machines to communicate correctly with the core service.
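That dual requirement boils down to a default-deny allowlist: traffic is permitted only when an explicit rule matches, so the service connection works while the peer-to-peer path on the same port does not. The roles, ports, and rule format below are hypothetical illustrations of the logic, not any vendor's actual rule syntax:

```python
# Default-deny policy sketch. Roles, ports, and rules are made-up examples.

ALLOW_RULES = {
    # (source role, destination role, destination port)
    ("monitoring-agent", "monitoring-collector", 9100),  # assumed metrics port
    ("admin-jumpbox", "workload", 3389),                 # RDP only from the jumpbox
}

def is_allowed(src_role: str, dst_role: str, port: int) -> bool:
    """Permit a flow only when an explicit rule matches; otherwise deny."""
    return (src_role, dst_role, port) in ALLOW_RULES

# Agents may report to the collector...
print(is_allowed("monitoring-agent", "monitoring-collector", 9100))  # True
# ...but the same port between two agents is denied by default.
print(is_allowed("monitoring-agent", "monitoring-agent", 9100))      # False
# RDP from an ordinary workload to another workload is also denied.
print(is_allowed("workload", "workload", 3389))                      # False
```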
With Zero Trust Segmentation, administrators can first verify the scope and reach of a given core service, so that the rule is broad enough to cover legitimate use, but no broader. At the same time, the team can verify that the protocol in question is not currently being used in undesirable ways. It is then easy to specify and enforce the Zero Trust policy one service at a time or in larger groups. This takes only minutes per service, and organizations rapidly progress through their foundational services.
Zero Trust Segmentation restricts core and management services ports to only their minimally necessary sources and destinations, preventing their exploitation for lateral attacks. The best news is that this protection can be put in place in minutes. Core and management services are well understood: infrastructure and security teams know how they should work, and any good application dependency map confirms it quickly. The policies to restrict these flows are simple and can be enforced immediately. We recommend securing these ports almost immediately after installing our software.
Making it difficult or impossible for most ransomware to spread doesn’t take long, so it is an easy “first win” when embarking on a Zero Trust journey. Why not demonstrate immediate progress on 20-50 of the most critical and connected services? It’s a great way to show the rest of the business that Zero Trust Segmentation works, is easy to use, and makes an immediate difference. Tightening micro-segmentation controls on core and management services strengthens administrative separation and simultaneously reduces ransomware risk, making it an ideal early target for Zero Trust Segmentation.