In part 1 of this blog series, we looked at how discovery methods can be used in an initial compromise. The second part showed an example of identity theft using pass-the-hash techniques combined with remote access tools for lateral movement. In this final part, we look at how to mitigate lateral movement. We have already seen practical examples of the two complementary levels at which lateral movement operates: the application level and the network level.
To be clear, by network level we mean host-to-host communication over the network, not necessarily network devices like switches or routers. A host can be any workload, such as a domain controller or database server, whether physical, virtual, or even containerised. The application level refers to what happens inside the host itself: binaries on disk, processes in memory, registry actions, and so on.
During the lateral movement discussed in the previous blog, the Mimikatz-enabled pass-the-hash technique was used inside the operating system at the application level by retrieving hashed credentials from the Windows LSASS process memory.
The elevated access token derived from that attack was then used to gain remote access with the PAExec tool, which leverages the Windows Service Control Manager (SCM).
Analysing this specific attack sequence against the two levels described above: at the application level, the system would first have to prevent the use of Mimikatz, or failing that PAExec, based on default-deny using an application allowlist. Alternatively, we would have to detect the process launch in memory by, for example, monitoring loaded DLLs or API calls. At the network level, micro-segmentation would have to be enforced on the host itself to prevent movement between systems, even if they are on the same subnet or VLAN. Traffic baselining also makes it possible to detect anomalies such as data exfiltration.
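The default-deny allowlist idea can be sketched as follows. This is a minimal illustration, not a real endpoint control: the allowlist entry is a placeholder hash, and a production solution would also verify signatures and enforce this in the kernel rather than in user-space Python.

```python
import hashlib

# Illustrative allowlist: SHA-256 hashes of approved binaries.
# The entry below is a placeholder, not a real hash.
ALLOWLIST = {
    "c1d5f7e2placeholderhash": "approved-tool.exe",
}

def is_execution_allowed(binary_path: str) -> bool:
    """Default-deny: only binaries whose hash appears in the allowlist may run.
    Mimikatz or PAExec dropped to disk would simply not match any entry."""
    with open(binary_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWLIST
```

Note that hash-based allowlisting alone does not catch the in-memory-only variants mentioned above, which is why DLL-load and API-call monitoring complements it.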
The image below shows static analysis of the Mimikatz and PAExec binaries and some of their system dependencies, such as imported DLLs.
The running process in memory shows us the process tree of cmd, which was used for both the Mimikatz pass-the-hash and the subsequent PAExec connection to the domain controller.
Forensics on the destination system, in this case the domain controller, will also reveal the binary used there to facilitate remote management and, here, lateral movement.
And the image below also shows the associated service.
By default, PAExec uses a standard naming convention for both the binary and the resulting process and service names. A threat actor can, of course, change this.
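That default naming convention gives defenders a cheap detection heuristic. The sketch below scans exported Windows System-log records for event 7045 (a new service was installed) whose service name matches PAExec's assumed default pattern ("PAExec-&lt;pid&gt;-&lt;hostname&gt;"); the event field names are simplified, and, as noted, a renamed binary evades this check entirely.

```python
import re
from typing import Iterable

# Assumed default pattern: PAExec installs a service named "PAExec-<pid>-<host>".
# Treat this as a low-cost heuristic, not a guarantee - attackers can rename it.
DEFAULT_PAEXEC_SERVICE = re.compile(r"^PAExec-\d+-", re.IGNORECASE)

def flag_paexec_installs(events: Iterable[dict]) -> list:
    """Flag System-log event 7045 (new service installed) records whose
    service name matches PAExec's default naming convention."""
    return [
        e for e in events
        if e.get("EventID") == 7045
        and DEFAULT_PAEXEC_SERVICE.match(e.get("ServiceName", ""))
    ]
```

In practice this logic would live in a SIEM correlation rule rather than a standalone script.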
Therefore, it is important that mitigation approaches take both levels of attack into consideration: protections that address application-level threats, and protections that address network-level security with a focus on host-to-host communication. Security then stays and moves with the host or workload being protected (e.g., the domain controller machine as a workload and the endpoint machines that access it).
Applying this two-level concept to the domain and associated servers and clients, the infographic below shows some of the important security considerations to protect against threats to the domain controllers and other domain systems as discussed in this blog series.
A good starting point is Microsoft's Best Practices for Securing Active Directory, which details common-sense approaches, such as never logging on to unsecured computers with privileged accounts, and never browsing the Internet with a highly privileged account or directly from a domain controller. Effective privilege management and application allowlists can also automate the restriction of privileged account usage across the domain and prevent the use of unsanctioned applications.
At the application level, Endpoint Detection and Response (EDR), combined with Identity and Privilege Management solutions based on Zero Trust, can help deal with application-level threats on domain systems, such as credential theft and LSASS memory manipulation using tools like Mimikatz or Rubeus, even when these run in memory only and never touch disk.
The example below shows an EDR solution, CrowdStrike Falcon, detecting a series of malicious application- and system-level behaviours.
At the network level, host-based micro-segmentation solutions like Illumio Core can provide Zero Trust security and micro-segmentation for the domain controller and other domain systems, while Illumio Edge extends this protection to endpoints inside and outside the domain. This is especially valuable against zero-day vulnerabilities and in cases where threats are missed by application- and system-level endpoint security.
Most modern networks are heterogeneous, complex, and extended, especially in this era of remote working. As a result, it is not easy to ensure security without first having a focused strategy. In large networks especially, effective security may seem almost impossible to achieve due to the sheer number of disparate, complex systems with varied security policies. Therefore, it is important to go back to basics:
- Know what you have
- Know what they do
- Secure them
This is especially true where workloads are hosted in the data center or cloud. The easiest and most effective way to know what you have is to first group systems by specific attributes such as location, environment, and application. This makes it easy to identify critical systems and application groups, core services used across groups, and other, less crucial systems and applications. Naturally, the primary focus will then be on the most critical assets: crown jewel applications and core services.
Effective security does not exist in isolation, so any approach must take these key considerations into account:
- Visibility
- Integration with existing security investments
- Performance at scale
The first important point is visibility. As shown in the example below from Illumio Core, a Zero Trust workload protection solution, application dependency mapping reveals the different application groups and their connections, paving the way for informed policy definition and provisioning directly on host systems such as domain controllers, database servers, and other critical workloads in the domain, whether physical, virtual, containerised, or in the cloud. This means that micro-segmentation can be applied even in a flat brownfield environment whose systems span different geographic locations and platforms, and it helps you know what your systems actually do on the network. The example below shows the application groups and their traffic relationships in an application dependency map.
Such useful system-level information can also be integrated with existing security investments such as SIEMs, vulnerability scanners, and CMDBs. In the examples below, information from the host-based micro-segmentation solution is fed into a SIEM or security analytics solution. The first shows the integration with Splunk:
And in this second example, with QRadar:
This means that new solutions can be combined with existing security investments to protect the domain systems as a whole. Performance and effectiveness at scale should also be an important consideration, so that security can scale up or down in both fixed and agile environments, such as containers or cloud migrations.
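As a concrete illustration of such an integration, flow records can be forwarded to Splunk's HTTP Event Collector (HEC), which accepts JSON events wrapped in a small envelope. The host, token, and `sourcetype` value below are assumptions for the sketch; only the endpoint path and `Authorization: Splunk <token>` header are standard HEC conventions.

```python
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # hypothetical host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

def build_hec_payload(flow: dict) -> bytes:
    """Wrap a host-to-host flow record in Splunk HEC's JSON envelope.
    The sourcetype name here is an illustrative choice, not a vendor standard."""
    return json.dumps({"event": flow, "sourcetype": "hostflow:example"}).encode()

def send_flow(flow: dict) -> None:
    """POST a single flow event to the collector (requires a live HEC endpoint)."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(flow),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)
```

In a real deployment, the micro-segmentation platform's own SIEM connector would handle this forwarding; the sketch only shows the shape of the data exchange.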
Once these are all in place, it is easy to define consistent security policies across all systems to monitor, detect, and prevent anomalous behaviour. The image below shows the different types of micro-segmentation policies that can be applied based on real-time host-to-host traffic patterns, which is the network level.
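A micro-segmentation policy of this kind reduces, at its core, to an explicit allow-list of host-to-host flows with everything else denied. The roles, ports, and rules below are hypothetical examples, not a real rule set.

```python
# Illustrative default-deny segmentation policy: explicit
# (source role, destination role, destination port) allow rules.
# Anything not listed is denied, even within the same subnet or VLAN.
ALLOWED_FLOWS = {
    ("web", "app", 8443),
    ("app", "db", 5432),
    ("endpoint", "domain-controller", 445),  # e.g. SMB to the DC from endpoints only
}

def is_flow_allowed(src_role: str, dst_role: str, dst_port: int) -> bool:
    """Default-deny: a flow passes only if an explicit rule allows it."""
    return (src_role, dst_role, dst_port) in ALLOWED_FLOWS
```

Under such a policy, the PAExec hop from a compromised workstation to the domain controller in the earlier example would be blocked unless that exact flow had been explicitly allowed.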
In newer versions of Windows, such as Windows 10 and Server 2016, auditing for event 4768 (a Kerberos authentication ticket, or TGT, was requested) and event 4769 (a Kerberos service ticket was requested), followed by correlation, can point to the beginning of golden or silver ticket attacks. Microsoft has also implemented new protections like Credential Guard, which aims to protect against credential dumping. And should an incident occur despite all this, it is important that a cyber incident response strategy is already in place.
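The 4768/4769 correlation can be sketched as a simple heuristic: a forged TGT (golden ticket) never triggers a 4768 on the domain controller, so service ticket requests (4769) for an account with no corresponding TGT request in the same log window are suspicious. This is a deliberately simplified model; the field names are shortened (real events use fields such as TargetUserName), and a production rule would also bound the time window and handle renewals.

```python
from typing import Iterable

def suspicious_service_tickets(events: Iterable[dict]) -> list:
    """Flag 4769 (service ticket requested) events for accounts with no
    preceding 4768 (TGT requested) in the same log window - a simplified
    golden-ticket heuristic, since a forged TGT never generates a 4768."""
    events = list(events)
    tgt_accounts = {e.get("Account") for e in events if e.get("EventID") == 4768}
    return [
        e for e in events
        if e.get("EventID") == 4769 and e.get("Account") not in tgt_accounts
    ]
```

As with the earlier sketches, in practice this correlation would run as a SIEM rule over the collected domain controller logs rather than as standalone code.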
Both the application-level and network-level protections should be underpinned by an ‘assume breach’ strategy, so that, overall, active threat hunting supported by analytics, continuous monitoring, and detection is always in force in an automated and structured manner.