
Codecov Takeaways — What We Know So Far

Not that anyone needed reminding so soon, but the recently announced breach of Codecov’s toolset, which supports Continuous Integration (CI) processes across many organisations, again highlights the vast density of interdependencies that exist today and the resultant exposure to supply chain attacks.

Further, the often implicit faith in software provided by trusted suppliers means that customers do not adequately test what an update is doing, both in terms of what it does locally on the host and in terms of network connectivity and data transfer. The result is that, more often than not, the software running in organisations is not fully understood.

How is this relevant to what happened with Codecov?

Codecov provides reporting capabilities that can be integrated into customers’ CI pipelines. Specifically, their tools measure how much of the codebase is executed during testing (code coverage), helping identify gaps in the validation process. These metrics are then uploaded to their SaaS platform for reporting and analytics.
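
For readers unfamiliar with the metric, a simplified line-coverage calculation looks like the sketch below. This is an illustration of the concept only, not Codecov’s actual computation.

```python
def line_coverage(executed_lines: int, total_lines: int) -> float:
    """Fraction of lines exercised by the test suite (simplified line coverage)."""
    if total_lines == 0:
        return 0.0
    return executed_lines / total_lines

# Example: 1,730 of 2,000 lines executed during the test run -> 86.5% coverage.
print(f"{line_coverage(1730, 2000):.1%}")
```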

The data upload is facilitated by scripts known collectively as “Bash Uploaders,” which are executed as part of the CI process. Should these Uploader scripts be tampered with, they give an attacker the opportunity not just to redirect the upload to a server of their choice, but also to specify what data is included, making anything available to the CI process potentially accessible.
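
One mitigation against this class of tampering is to verify the uploader script against a known-good checksum before executing it. The sketch below shows the idea in Python, assuming a trusted digest is published out-of-band; the URL and digest values are placeholders, not Codecov’s real endpoints or values.

```python
import hashlib
import sys
import urllib.request

# Placeholder values: substitute the real script URL and the checksum
# published out-of-band by the vendor (e.g. in a signed release note).
SCRIPT_URL = "https://example.com/uploader.sh"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download the uploader script and refuse to use it if the digest differs."""
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
    actual = hashlib.sha256(payload).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(
            f"Checksum mismatch: expected {expected_sha256}, got {actual}"
        )
    return payload

if __name__ == "__main__":
    try:
        script = fetch_and_verify(SCRIPT_URL, EXPECTED_SHA256)
    except RuntimeError as err:
        print(f"Refusing to run uploader: {err}", file=sys.stderr)
        sys.exit(1)
    # Only at this point would the CI job write the script to disk and execute it.
    print(f"Verified uploader ({len(script)} bytes)")
```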

In the case of Codecov, the initial compromise came from a leak of Google Cloud Storage credentials, exposed by an error in Codecov’s Docker image creation process. These credentials allowed the attacker to publish a modified version of the Bash Uploader script, altered to harvest the contents of all environment variables available to the CI process in which it ran and to upload them to a server under the attacker’s control.

Given that this modified script was available directly from Codecov’s site, it was presumably trusted by customers and downloaded and integrated with little validation. Environment variables in the CI process are typically used to insert relevant secrets into code to access the resources it needs at run time. The malicious Uploader script would thus have access to these and be able to transfer them to the attacker, providing them with a healthy harvest of credentials and other sensitive information.
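
To make the exposure concrete: any script a CI job runs inherits that job’s full environment. The modified uploader harvested every environment variable wholesale; the Python sketch below (illustrative only, with an assumed, non-exhaustive list of name patterns) simply shows how little effort it takes for a process to collect variables that look like credentials.

```python
import os

# Name fragments that commonly indicate credentials in CI environments.
# This list is illustrative, not exhaustive.
SUSPECT_FRAGMENTS = ("TOKEN", "SECRET", "KEY", "PASSWORD", "CREDENTIAL")

def collect_candidate_secrets() -> dict[str, str]:
    """Return every environment variable whose name suggests it holds a secret."""
    return {
        name: value
        for name, value in os.environ.items()
        if any(fragment in name.upper() for fragment in SUSPECT_FRAGMENTS)
    }

if __name__ == "__main__":
    candidates = collect_candidate_secrets()
    # A malicious uploader would POST these to a server it controls;
    # here we only print the names to show what is within reach.
    for name in sorted(candidates):
        print(name)
```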

Even without further persistent network access – and there is no indication yet as to whether the attackers established any – these secrets are a treasure chest: they provide access to a range of internet-accessible resources, such as storage buckets, databases, and other cloud services used by the associated applications. Those resources, too, should now be considered compromised.

At a bare minimum, any organisation that believes it may have incorporated the compromised Bash Uploader script into its CI pipeline should, as a matter of urgency, reset every secret that pipeline had access to. This will almost certainly require a redeployment of affected applications to ensure they have received the updated secrets, but that overhead is preferable to prolonging the exposure of stolen credentials.
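
One way to scope that rotation effort is to enumerate which secrets the pipeline configuration actually references. The sketch below assumes GitHub Actions-style workflows, where secrets appear as ${{ secrets.NAME }}; both the pattern and the .github/workflows path are assumptions that will differ for other CI systems.

```python
import re
from pathlib import Path

# Pattern for GitHub Actions-style secret references: ${{ secrets.MY_TOKEN }}.
# Other CI systems will need their own patterns; this is only a starting point.
SECRET_REF = re.compile(r"\$\{\{\s*secrets\.([A-Za-z0-9_]+)\s*\}\}")

def referenced_secrets(workflow_dir: str = ".github/workflows") -> set[str]:
    """Collect the names of all secrets referenced by workflow files."""
    names: set[str] = set()
    for path in Path(workflow_dir).glob("*.y*ml"):
        names.update(SECRET_REF.findall(path.read_text(encoding="utf-8")))
    return names

if __name__ == "__main__":
    for name in sorted(referenced_secrets()):
        print(name)  # each of these is a candidate for rotation
```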

Beyond the exfiltration of secrets, the persistent privileged access afforded to the CI pipeline is ripe for abuse. By definition, the pipeline’s responsibility is to generate new software builds and publish them to repositories. Compromise of the pipeline infrastructure itself gives the attacker the ability to alter what those builds contain, potentially inserting backdoors for later use, either on downstream systems or within generated artifacts. So how much rebuilding needs to be done to be confident that all traces of the attacker and their tentacles are removed?

Next, it’s worth looking at the network access available to the CI infrastructure. Typically, it has direct access to other key components, including but not limited to the version control system, monitoring infrastructure, automated testing, the build process, configuration management, and continuous deployment. Also, given the heavy use of FOSS components, the CI pipeline will often need internet access to public repositories to pull in relevant dependencies.

With the dense connectivity requirements that the CI pipeline demands, can segmentation still serve as a valuable security control?

When considering the value of micro-segmentation, it can be presented in two ways:

  1. Micro-segmentation allows for high value assets to be ringfenced, ensuring their exposure to and from the rest of the network is limited, thus making it more difficult for an attacker to reach and exploit them.
  2. Micro-segmentation allows every workload to have its own microperimeter of least-privilege access rules encircling it. This ensures that, if the workload is compromised, the attack surface exposed to it is limited to only what it is authorised to connect to – ultimately limiting what an attacker could do next.

One approach for making this relevant to the CI pipeline looks something like this:

  • The CI pipeline is definitely a high value asset and one that should be ring-fenced.
  • While it may have a number of internal and external dependencies from a connectivity perspective, these should be well understood and well defined.
  • Using this information, an allowlist-based micro-segmentation policy could be built to ensure that the CI pipeline only has access to and from these well-understood dependencies.
  • If internet access is required, it should be allowed only to a list of approved repositories, not as unfettered internet access. Limiting egress to an explicitly defined list of destination sites is an effective and often simple control against command-and-control (C&C) traffic and data exfiltration attempts; a minimal sketch of such an allowlist follows this list.
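
To illustrate the allowlist idea in the simplest possible terms, the sketch below models approved egress destinations as data plus a check. The hostnames are placeholders, and the check runs in-process purely for demonstration; in practice, enforcement would live in the segmentation or firewall layer rather than in the pipeline’s own code.

```python
from urllib.parse import urlparse

# Illustrative allowlist of destinations the CI pipeline is permitted to reach.
# These entries are placeholders; a real policy would be derived from the
# pipeline's observed and documented dependencies.
ALLOWED_EGRESS = {
    ("registry.npmjs.org", 443),
    ("pypi.org", 443),
    ("files.pythonhosted.org", 443),
    ("git.internal.example.com", 443),        # hypothetical internal VCS
    ("artifacts.internal.example.com", 443),  # hypothetical artifact store
}

def egress_allowed(url: str) -> bool:
    """Return True only if the URL targets an explicitly approved host and port."""
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    return (parsed.hostname, port) in ALLOWED_EGRESS

if __name__ == "__main__":
    print(egress_allowed("https://pypi.org/simple/requests/"))    # True
    print(egress_allowed("https://attacker.example.net/upload"))  # False
```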

This protects the CI pipeline from being accessed by unauthorised devices or over unauthorised ports and protocols. It also ensures that, if the CI pipeline is compromised, its exposure to the rest of the network is limited. Moreover, improved segmentation controls often go hand in hand with significantly better visibility into system interactions, providing richer data for incident detection and response teams to work with when investigating potential breaches.

At Illumio, we help customers leverage micro-segmentation to establish this containment and monitoring. Reach out to your account team to find out how we can support you.

