A novel attack method on GitHub illustrates yet again why application security (AppSec) teams should be implementing defense in depth — beyond what legacy application security testing tools can provide.
The attack, discovered by Praetorian security researcher Adnan Khan, involves GitHub-hosted runners, the virtual machines that execute jobs in a GitHub Actions workflow. GitHub Actions is one of the biggest continuous integration/continuous delivery (CI/CD) services on the market, largely because it's free for public repositories, and it supports two kinds of runners.
Hosted runners are maintained by GitHub for Windows, macOS, and Linux. There are also self-hosted runners, which are build agents hosted by users. Since they're hosted by users, they're also secured by users, which is why GitHub advises against using self-hosted runners on public repositories.
Khan described in a deep-dive blog post how he discovered a critical misconfiguration vulnerability that provided access to GitHub's internal infrastructure, as well as secrets — and how that access could have been used to inject malicious code into all of GitHub's runner base images, allowing an attacker to conduct a supply chain attack against every GitHub customer that used hosted runners.
Here are the key takeaways from the threat research — and why you need to evolve your AppSec approach with complex binary analysis and reproducible builds.
How attackers could leverage runners for mischief
By default, when a self-hosted runner is attached to a repository, any workflow running in that repository’s context can use that runner. As long as the runs-on field is set to self-hosted, the runner will pick up the workflow and start processing it, Khan said.
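A practical first step for defenders, not drawn from Khan's research but a minimal sketch of a routine audit, is simply knowing which repositories have self-hosted runners attached and will therefore accept runs-on: self-hosted jobs. The Python snippet below queries GitHub's REST API for a repository's registered runners; the owner and repository names are placeholders, and it assumes a personal access token with administrative access to the repository.

```python
import os
import requests

# Hypothetical defensive check: list the self-hosted runners attached to a
# repository, so you know which repos will accept "runs-on: self-hosted" jobs.
# Assumes GITHUB_TOKEN holds a token with admin-level access to the repo.
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
OWNER, REPO = "example-org", "example-repo"  # placeholder values

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runners",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()

# Each entry describes one registered self-hosted runner.
for runner in resp.json().get("runners", []):
    labels = [label["name"] for label in runner.get("labels", [])]
    print(f"{runner['name']}: os={runner['os']}, "
          f"status={runner['status']}, labels={labels}")
```

Any public repository that shows up with runners registered here deserves a close look at who can trigger workflows on it.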
Khan noted that for workflows running with default settings on branches within the repository, this isn't an issue: users must have write access to update branches within a repository. The problem is that the same behavior also applies to workflows from fork pull requests.
By changing a workflow file within their fork and then creating a pull request, anyone with a GitHub account can run arbitrary code on a self-hosted runner.
There is one roadblock, but not a very formidable one: GitHub requires a user to be a previous contributor to a repository before their workflows from pull request forks will run without approval. Becoming a contributor, however, is as simple as correcting a typo or making a small code change.
Acceptance of contributors to these repositories may be too lax, but any criticism here must be tempered by reality. Many of these projects are maintained by only a handful of people, usually a small number of key individuals spearheading the work. They're trying to do things the right way, but there are bad actors out there seeking to exploit capabilities that are intended for good.
An unforeseen attack vector?
One other condition must be met before an attacker can work their mischief. The self-hosted runner must be non-ephemeral, which means that it is possible to start a process in the background that will continue to run after the job is completed. By default, self-hosted runners are configured to be non-ephemeral.
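To make that concrete, here is a hypothetical, minimal sketch (not taken from Khan's write-up) of what a malicious workflow step could do on a non-ephemeral runner: launch a detached process that keeps running after the job reports success, because the runner machine is reused rather than destroyed. The payload name is a placeholder.

```python
import subprocess

# Hypothetical illustration only: on a non-ephemeral self-hosted runner, a job
# step can launch a detached process that survives the job, because the runner
# host is reused for later builds rather than torn down after each run.
subprocess.Popen(
    ["python3", "implant.py"],   # placeholder payload name
    start_new_session=True,      # detach from the job's process group
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
# The workflow job exits normally here, but the detached process keeps running
# on the runner host, where it can observe and tamper with subsequent builds.
```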
With the runner-images repository on its default approval setting and attached to a non-ephemeral self-hosted runner, and with a contributor account in hand, the attacker had everything necessary to conduct a public Poisoned Pipeline Execution attack against the runner-images repository's CI/CD workflows, Khan wrote.
According to a SecurityWeek report, the researchers limited their investigation to repositories belonging to organizations that pay out high rewards through their bug bounty programs, and they submitted more than 20 bug bounty reports, earning hundreds of thousands of dollars in bounties.
This discovery shows how software supply chain attacks are constantly changing. People using these runners never thought they could be used in this way. The organizations that paid the bug bounties after being told about the risk had no idea they were vulnerable. If you're having trouble fixing the flaws that you know about, how are you going to protect against the unknowns? If you don't know what to look for, you won't find it.
Software complexity demands better AppSec tooling
Developing software these days is very complex. There are many moving parts — first-party code, open-source code, and third-party code — and it's hard to think of every potential lever and dial that needs to be appropriately configured. That's why, in addition to traditional application security testing, you need a sort of final exam before the software is released: a comprehensive vetting of the complete software package.
This compromise of GitHub Actions illustrates why organizations need defense in depth. You can't just look at one or two things and feel secure. You have to look at everything. That's why software versions need to be compared to one another. You can't just look at the pieces of the puzzle because, even if the pieces fit together, the final picture may not look like what's on the cover of the box. Only by comparing one version or one build to another can you see what's changed and if those changes are acceptable and correct. That kind of differential analysis is needed to discover the fingerprints of tampering, compromise, or the insertion of malware.
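As a deliberately simplified illustration of that kind of differential analysis (a sketch, not a substitute for a commercial analysis tool), the snippet below hashes every file in two build directories and reports what was added, removed, or changed between them; the directory names are placeholders.

```python
import hashlib
from pathlib import Path

def digest_tree(root: Path) -> dict:
    """Map each file's relative path under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

# Placeholder paths for two consecutive builds of the same software package.
old = digest_tree(Path("build-1.4.2"))
new = digest_tree(Path("build-1.4.3"))

added = sorted(set(new) - set(old))
removed = sorted(set(old) - set(new))
changed = sorted(f for f in set(old) & set(new) if old[f] != new[f])

print("Added:", added)
print("Removed:", removed)
print("Changed:", changed)
# Unexpected entries here (for example, a new binary, or a changed file that
# no source change explains) are the kind of fingerprints worth investigating.
```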
Evolve your AppSec approach
There's no such thing as a 100% secure software application. You never know when something is going to be manipulated in a way you never thought about. There are always risks that are either unknown or not yet addressed.
Security teams try to do the best they can with what they have. AppSec tools have evolved — from static application security testing (SAST) to dynamic application security testing (DAST), and then to software composition analysis (SCA). With the pace and complexity of modern software development, AppSec tools that enable defense in depth at many stages of the process, including the final stage — post-compilation, pre-deployment — are now a requirement.
The Enduring Security Framework, a public-private working group led by the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA), recently stepped up its software supply chain security guidance with a call for binary analysis and reproducible builds to manage risk.
Complex binary analysis, which focuses on malware, can help organizations evaluate and verify the security of not just internally developed software, but also third-party commercial software in their environment, providing that much-needed final exam.
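Reproducible builds complement that analysis: if an independent rebuild of the same tagged source produces a bit-for-bit identical artifact, its digest will match the published one, and any divergence is a signal to investigate. A minimal sketch, with placeholder file names:

```python
import hashlib
from pathlib import Path

def sha256(path: str) -> str:
    """SHA-256 digest of a file on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Placeholder artifact names: the vendor-published package versus an
# independent rebuild from the same tagged source.
published = sha256("app-2.1.0-published.tar.gz")
rebuilt = sha256("app-2.1.0-rebuilt.tar.gz")

if published == rebuilt:
    print("Build reproduces: digests match.")
else:
    print("Digest mismatch: the published artifact differs from the rebuild.")
```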
Keep learning
- Get up to speed on securing AI/ML systems and software with our Special Report. Plus: See the Webinar: The MLephant in the Room.
- Learn how you can go beyond the SBOM with deep visibility and new controls for the software you build or buy. Learn more in our Special Report — and take a deep dive with our white paper.
- Upgrade your software security posture with RL's new guide, Software Supply Chain Security for Dummies.
- Commercial software risk is under-addressed. Get key insights with our Special Report, download the related white paper — and see our related Webinar for more insights.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.