Cache Poisoning Caper Turns TanStack NPM Toxic

The Six-Minute Attack That Exposed a Dangerous Supply Chain Gap

On May 11, between 19:20 and 19:26 UTC, a single attacker published 84 malicious versions of official TanStack npm packages. The entire operation took just six minutes. By the time the clock had moved past that narrow window, the damage was already spreading through automated CI/CD pipelines around the world. The payload inside those packages could steal cloud credentials, extract SSH keys, drain crypto wallet configurations, and even wipe an infected machine’s entire disk if certain conditions were met. This was not a theoretical proof of concept. It was a live, automated supply chain attack that exploited a known GitHub Actions vulnerability first documented in 2024.

The incident, which security firms are tracking as part of the Mini Shai-Hulud campaign, represents a worrying evolution in how attackers target open source ecosystems. The tanstack npm cache poisoning technique used here allowed the attacker to inject malicious code into the build pipeline without compromising a single TanStack maintainer account. That distinction matters. It means that even projects with strong access controls and vigilant maintainers can still fall victim to this class of attack.

The Anatomy of a Six-Minute Supply Chain Blitz

Understanding the timeline helps illustrate just how fast these attacks can unfold. The attacker created a fork of the TanStack repository and inserted a malicious commit. That commit was then used to open a pull request against the main TanStack repository. Because the project had automated CI/CD workflows configured through GitHub Actions, the pull request triggered a build process. That process ran the malicious code embedded in the commit.

The malware’s first objective was to poison the GitHub Actions cache. Once the cache was compromised, subsequent builds that relied on cached dependencies would unknowingly execute the attacker’s code. This is the core mechanism of the tanstack npm cache poisoning attack. The poisoned cache then extracted the npm OpenID Connect (OIDC) token from the runner’s memory. With that token in hand, the attacker could publish malicious packages directly to the npm registry under the TanStack namespace.

Security firm StepSecurity detected the anomalous activity within 30 minutes and reported it, triggering incident response and npm deprecation procedures. GitHub published a security advisory at 21:30 UTC on the same day, listing the affected packages. TanStack founder Tanner Linsley published a postmortem explaining the technical details of the breach. The speed of detection was impressive, but the six-minute window of the attack itself highlights an uncomfortable reality: automated supply chain attacks can execute faster than most human response teams can react.

How the Cache Poisoning Technique Works

Cache poisoning in the context of GitHub Actions is not a new vulnerability. Security researchers first documented this class of issue in 2024. The technique exploits the way GitHub Actions caches dependencies and build artifacts. When a pull request is submitted from a forked repository, the CI/CD pipeline may still have access to cached data from previous builds. If an attacker can manipulate that cache, they can inject malicious code that persists across multiple build runs.
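One known risky pattern can be sketched in workflow YAML. This is illustrative only, not necessarily TanStack's exact configuration: a privileged `pull_request_target` trigger runs with the base repository's permissions, checking out the PR head executes untrusted code in that context, and a shared cache key lets whatever that code writes persist into later trusted builds.

```yaml
# Illustrative risky combination — do not use as-is.
name: build
on: pull_request_target   # runs with base-repo privileges on fork PRs
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # untrusted code
      - uses: actions/cache@v4
        with:
          path: node_modules
          key: deps-${{ hashFiles('**/package-lock.json') }}  # key shared with trusted builds
      - run: npm ci   # runs attacker-controlled install scripts in a privileged context
```

The individual pieces each have legitimate uses; it is the combination of a privileged trigger, untrusted code, and a writable shared cache that opens the poisoning path.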

In the TanStack incident, the attacker used a fork-and-PR approach to trigger a build. The malicious commit contained code that specifically targeted the GitHub Actions cache. Once the cache was poisoned, the attacker could extract the OIDC token that npm uses for trusted publishing. This token allowed the attacker to publish packages under the TanStack name without needing to compromise any maintainer’s password, two-factor authentication, or personal access token.

The tanstack npm cache poisoning attack reused the same OIDC token extraction code that was used in the tj-actions breach last year. This suggests that the technique has become standardized among certain threat actors. Once a method for extracting tokens from runner memory is proven effective, it gets repurposed across multiple targets. The security community has known about this risk for some time, but the TanStack incident demonstrates that practical defenses remain insufficient across many open source projects.

Inside the Malicious Payload

The payload embedded in those 84 malicious packages was not subtle. According to StepSecurity’s detailed analysis, the malware reads files from over 100 hardcoded paths on the infected system. These paths include locations where cloud provider credentials are stored, SSH key directories, developer tool configuration files, crypto wallet data, VPN configuration files, messaging application credentials, and shell history files. Shell history is particularly valuable because developers often paste tokens, API keys, and passwords directly into terminal sessions.

The breadth of data targeted by this payload is striking. It is not limited to a single cloud provider or a single type of credential. It casts a wide net across the entire developer toolchain. Anyone who ran npm install, pnpm install, or yarn install against an affected version on May 11 should consider their system compromised. GitHub’s advisory was explicit on this point. The advisory recommended treating any CI environment or developer machine that installed those packages as fully compromised.

The Dead-Man’s Switch Complicates Remediation

Security researcher Nicholas Carlini highlighted one of the most concerning aspects of the payload. The malware installs a dead-man’s switch as a system user service. This service periodically checks whether the stolen GitHub token has been revoked. If it detects that the token is no longer valid, it executes a command to wipe the local disk completely. This mechanism makes remediation significantly more dangerous for anyone who discovers the infection after the fact.

Imagine a developer who realizes days later that their system was compromised. They revoke the stolen token as part of their incident response. The dead-man’s switch detects the revocation and triggers a full disk wipe. Critical project files, local databases, uncommitted work, and personal data could all be destroyed in seconds. This design forces victims to choose between leaving a stolen token active or risking data loss. Neither option is acceptable, and the presence of this mechanism complicates every remediation plan.

What to Do If Your Project Uses TanStack Packages

If your project depends on TanStack packages, the first step is to determine whether you installed any of the affected versions during the attack window. GitHub’s security advisory includes a complete list of the 84 malicious package versions. Cross-reference your lock files and dependency trees against that list. Any match means your system should be treated as compromised.
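Cross-referencing can be scripted. The sketch below parses an npm v2/v3 `package-lock.json` and flags entries matching an advisory list; the package name and version here are placeholders, not real entries from GitHub's advisory.

```python
import json

# Placeholder advisory data: package -> set of malicious versions.
# In practice, populate this from GitHub's security advisory.
AFFECTED = {
    "@tanstack/example-pkg": {"9.9.9"},  # hypothetical name and version
}

def find_compromised(lockfile_text: str) -> list[tuple[str, str]]:
    """Return (package, version) pairs in an npm v2/v3 lockfile
    that match the advisory list."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, info in lock.get("packages", {}).items():
        # Entry keys look like "node_modules/@scope/name"; the root
        # entry has an empty key.
        name = path.split("node_modules/")[-1] if path else lock.get("name", "")
        if name in AFFECTED and info.get("version") in AFFECTED[name]:
            hits.append((name, info["version"]))
    return hits

# Minimal demo lockfile containing one flagged entry.
sample = json.dumps({
    "name": "demo", "lockfileVersion": 3,
    "packages": {
        "": {"name": "demo", "version": "1.0.0"},
        "node_modules/@tanstack/example-pkg": {"version": "9.9.9"},
    },
})
print(find_compromised(sample))
```

Running the same check in CI against every repository's lock file turns a one-off audit into a repeatable control.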

Rotate every secret that could have been exposed. This includes cloud provider credentials, SSH keys, API tokens, database passwords, and any other sensitive values stored on the affected system. Do not assume that only the paths listed in the malware’s 100-target list are at risk. The malware had access to the entire filesystem of the build runner, so any file readable by that runner could have been exfiltrated.

For organizations managing multiple projects, audit all npm packages installed on May 11 across every CI/CD pipeline. The tanstack npm cache poisoning attack was part of a broader campaign. Other compromised packages include the OpenSearch client, Mistral AI, UiPath, and Guardrails AI. The Mistral AI project has been quarantined on PyPI. Your dependency scanning tools should be updated to flag any of these compromised packages.

Securing GitHub Actions Workflows Against Cache Poisoning

The attack exploited a specific vulnerability in how GitHub Actions handles cached data from forked pull requests. There are several practical steps that teams can take to reduce their exposure. First, configure GitHub Actions workflows to use read-only tokens for pull requests from forked repositories. This limits what a malicious commit can do even if it triggers a build.
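In workflow terms, that first step is a small change. A sketch: the plain `pull_request` trigger already gives fork PRs a read-only `GITHUB_TOKEN`, and an explicit `permissions` block makes that default visible and guards against accidental widening.

```yaml
# Test workflow for fork PRs: read-only token, stated explicitly.
name: test
on: pull_request
permissions:
  contents: read   # no write scopes, no id-token
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```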

Second, avoid using the GitHub Actions cache for build artifacts that originate from untrusted code. If your workflow caches node_modules or other dependency directories, consider using ephemeral build environments that start from a clean state every time. Tools like Docker containers, GitHub Actions ephemeral runners, or dedicated CI/CD services that sandbox each build can prevent cache poisoning from persisting across runs.
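A sketch of the ephemeral approach for a Node project: run the job inside a stock container image and install dependencies fresh from the registry rather than restoring a cache.

```yaml
# Each run starts from a clean container filesystem; nothing written
# by a previous build (including a poisoned cache) survives into it.
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    container: node:20   # fresh image every job
    steps:
      - uses: actions/checkout@v4
      - run: npm ci      # full install from the registry, no cache restore
```

The trade-off is slower builds, which is exactly the cost the cache was saving; teams have to decide how much speed a poisoning-resistant pipeline is worth.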

Third, audit your OIDC token configuration. The npm OIDC token used for trusted publishing should be scoped as narrowly as possible. If your workflow only needs to publish packages from the main branch, do not allow OIDC token access from pull request triggers. This simple restriction would have prevented the TanStack attack entirely, because the malicious commit arrived through a pull request from a fork.
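A publish workflow scoped this way might look like the sketch below (package details are placeholders). The key points: it fires only on pushes to main, and `id-token: write`, the permission that mints the OIDC token, exists only in this workflow, never in anything triggered by a pull request.

```yaml
name: publish
on:
  push:
    branches: [main]   # never pull_request, so fork PRs cannot reach the token
permissions:
  contents: read
  id-token: write      # OIDC minting allowed only here
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm publish --provenance   # trusted publishing via OIDC
```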

Ephemeral Environments Are No Longer Optional

One of the uncomfortable truths that this incident reinforces is that running everyday commands like npm install is no longer safe in traditional development environments. The attack surface is simply too large. A single npm install command can execute arbitrary code from dozens of transitive dependencies. If any one of those dependencies has been compromised, the developer’s machine and all connected systems are at risk.

Security professionals have been advocating for isolated, ephemeral development environments for years. The tanstack npm cache poisoning attack makes that recommendation feel urgent rather than aspirational. Running builds inside disposable containers that are destroyed after each session eliminates the risk of persistent malware. It also prevents the dead-man’s switch from having any real impact, because there is no persistent filesystem to wipe.

For individual developers, tools like Dev Containers in VS Code, GitHub Codespaces, or local containerized development setups provide a practical middle ground. These environments can be configured to start fresh from a known-good image, reducing the risk of cache-based attacks. For CI/CD pipelines, ephemeral runners that spin up for a single job and then terminate are becoming a standard best practice.

The Broader Mini Shai-Hulud Campaign

The TanStack attack did not occur in isolation. It is part of a larger wave of supply chain attacks that security researchers have grouped under the Mini Shai-Hulud campaign. This campaign has targeted both npm and PyPI registries, compromising packages across multiple ecosystems. The OpenSearch client, Mistral AI, UiPath, and Guardrails AI packages were all affected in the same wave.

The fact that the same OIDC token extraction technique was used against both TanStack and tj-actions suggests a coordinated playbook. Attackers are investing in reusable tooling that can be deployed against multiple targets with minimal modification. This reduces the cost of each individual attack and increases the overall threat surface for the open source ecosystem.

PyPI has quarantined the Mistral AI project, and GitHub has published advisories for the affected packages. But the response is necessarily reactive. By the time a malicious package is identified and removed, it may have already been installed by thousands of developers. The speed of automated publishing combined with the latency of human-in-the-loop detection creates a window that attackers are learning to exploit.

Lessons for Open Source Maintainers

No TanStack maintainers were compromised in this attack. That fact is worth emphasizing because it changes how we should think about responsibility for supply chain security. The breach did not happen because someone clicked a phishing link or reused a weak password. It happened because the automated build pipeline had a vulnerability that allowed cache poisoning from a forked pull request.

For maintainers of popular open source projects, this incident raises difficult questions. How much access should a pull request from a fork have to cached build artifacts? Should OIDC tokens ever be accessible from pull request triggers? What monitoring and alerting is in place to detect anomalous publishing activity? These are not hypothetical questions anymore. They are operational requirements for any project that uses automated publishing workflows.

TanStack founder Tanner Linsley’s postmortem was transparent about the technical details of the breach. That level of transparency is valuable for the broader community because it allows other projects to audit their own workflows for similar vulnerabilities. The tanstack npm cache poisoning attack will likely serve as a case study in supply chain security training for years to come.

Practical Steps for Individual Developers

If you are an individual developer who uses npm regularly, there are several habits that can reduce your risk. First, consider using npm audit and npm query commands to inspect your dependency tree before installation. These tools can flag known malicious packages, though they cannot catch zero-day attacks. Second, avoid running npm install with elevated privileges. The malware in this attack required user-level access to install the dead-man’s switch service, but running as a limited user can still reduce the blast radius.

Third, review your shell history periodically. The malware in this attack specifically targeted shell history files because developers frequently paste sensitive values into terminal sessions. If you have tokens or passwords in your shell history, clear them. Consider using a password manager or secret management tool that does not expose secrets to the terminal history.
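The review step can be partially automated. The patterns below are illustrative rather than a real secret scanner, and as the missed Bearer token in the demo shows, simple regexes are only a first pass.

```python
import re

# Illustrative patterns only; real secret scanners use far richer rules.
SECRET_RE = re.compile(r"(token|api[_-]?key|password|secret)\s*[=:]\s*\S+", re.I)

def flag_history_lines(lines):
    """Return (line_number, line) pairs that look like pasted secrets."""
    return [(i, ln) for i, ln in enumerate(lines, 1) if SECRET_RE.search(ln)]

history = [
    "ls -la",
    "export API_KEY=abc123",                 # flagged
    'curl -H "Authorization: Bearer xyz"',   # missed by this simple pattern
]
print(flag_history_lines(history))
```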

Fourth, treat every npm install command as a potential security event. This mindset shift is difficult because npm install is so routine. But the TanStack attack, along with dozens of similar incidents over the past few years, demonstrates that routine commands can have catastrophic consequences when the supply chain is compromised.

The Future of Package Registry Security

Major package registries including npm and PyPI have made significant investments in security over the past several years. Two-factor authentication requirements, package signing, and automated malware scanning have all improved the baseline security posture. But the TanStack incident demonstrates that these measures are not sufficient. The attack exploited a vulnerability in the CI/CD pipeline itself, not in the registry’s authentication mechanisms.

Registry-level improvements alone cannot prevent attacks that target the build process. The OIDC token used for trusted publishing was valid. The attacker did not need to bypass npm’s authentication. They needed to steal a token that was already authorized. This shifts the security burden from the registry to the CI/CD pipeline configuration.

For the long term, the industry may need to rethink how trusted publishing works. Short-lived tokens that are scoped to specific branches, specific workflows, and specific package names could limit the damage of a token theft. Hardware-backed key storage for CI/CD runners could make token extraction from memory more difficult. These are active areas of research and development, but they are not yet widely deployed.

The tanstack npm cache poisoning attack is a reminder that supply chain security is a moving target. Every improvement in registry security prompts attackers to shift their focus to the next weakest link. Today, that weakest link appears to be the CI/CD pipeline itself. Tomorrow, it could be something else. The only sustainable approach is to build systems that assume compromise and limit the blast radius when it occurs.
