LiteLLM and the Canary (Token) in the Coal Mine
A supply chain attack on LiteLLM swept up cloud credentials, SSH keys, and crypto wallets. Most victims never knew. Canary tokens exist for moments like this.
Coal miners once carried canaries underground. Not as pets, but as early warning systems. The birds were highly sensitive to carbon monoxide, and their distress (or silence) bought miners precious seconds to evacuate before a gas pocket turned lethal. The whole system worked on a deceptively simple principle - place a known sensitive object in your environment, and watch what happens to it.

On March 24, 2026, the AI developer community got its own canary moment. But without the canary. The attack on LiteLLM exposed thousands of unsuspecting victims to credential theft and potential financial losses. This post isn't intended to go into the technical details (see References below); rather, it focuses on how the canary approach could have helped.
What Happened?
LiteLLM is one of the most widely used Python packages in the AI ecosystem. It acts as a unified proxy layer for calling large language models (LLMs) from OpenAI, Anthropic, Google, and others - sitting at the intersection of application workflows and sensitive API credentials. That positioning makes it enormously useful for developers, but equally attractive for attackers.
So, what happened? Two versions (1.82.7 and 1.82.8) of the litellm Python package on PyPI were found to contain malicious code. These packages were published by a threat actor known as TeamPCP after obtaining the maintainer's PyPI credentials, and were available for roughly three hours before PyPI quarantined the package. LiteLLM is downloaded 3.4 million times per day - at that rate, even a three-hour window works out to over 400,000 potentially poisoned downloads.
Interestingly, instead of targeting LiteLLM directly, TeamPCP first compromised Trivy, the open-source vulnerability scanner that LiteLLM used in its own build pipeline. By poisoning a trusted security tool, TeamPCP obtained the credentials needed to publish packages as a trusted LiteLLM maintainer, bypassed the CI/CD workflow, and uploaded the malicious code directly to PyPI. A classic supply chain attack! Both compromised versions included a backdoored file that decoded and executed a hidden payload when the file was imported. Once triggered, the payload could harvest credentials, attempt lateral movement across Kubernetes clusters, install persistent backdoors, and exfiltrate encrypted data to an attacker-controlled domain.
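To make the "decoded and executed on import" pattern concrete, here is a harmless sketch of its general shape - an encoded blob that evades naive string searches, run the moment the module loads. This is illustrative only and not the actual LiteLLM payload; the stand-in "script" just sets a variable.

```python
import base64

# Attackers hide the payload as an encoded blob so that grepping the
# source for suspicious strings turns up nothing obvious. This blob is
# a benign stand-in; the real one carried a credential stealer.
hidden = base64.b64encode(b"result = 2 + 2")

# At import time, the backdoored module runs something equivalent to:
namespace = {}
exec(base64.b64decode(hidden), namespace)
# namespace["result"] now exists - the smuggled code ran silently,
# as a side effect of a plain `import`.
```

Because the trigger is the import itself, simply installing and using the package - with no call to any backdoored function - is enough to execute the payload.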

This is Exactly the Problem Canary Tokens Solve
Here's the uncomfortable truth - for most victims, the first indication of a breach may have been a security researcher's GitHub issue or a spike in their AWS bill, not their own monitoring systems. This is where canary tokens change the equation.
A canary token is a deliberately placed, fake-but-plausible credential or resource that does nothing except alert you the moment it's accessed. You generate a few, drop them in places where an attacker would look - an environment file, an AWS credentials config, a Kubernetes secret - and then... just wait. If one fires, you know that your environment has been compromised and, worse, that the credentials are being actively used.

Applying this to the LiteLLM attack, any developer with a canary AWS access key in their environment, alongside real credentials, would have received an instant alert the moment the malware attempted to use it. The three-hour exposure window would have collapsed to minutes. While canary tokens are not difficult to set up yourself, Thinkst has made it really easy by running a self-service portal based on their open-source canarytokens project. You can create a variety of tokens, and have the alert sent to an email address of your choice. If you want to self-host this project on DigitalOcean, or your preferred cloud provider, check out this post.
The LiteLLM incident is not a one-off. TeamPCP has a documented history of targeting developer tooling, and the rapidly expanding AI ecosystem is an easy target. The packages that AI developers rely on - proxies/gateways, build systems, orchestration frameworks, agent toolkits - are all sitting in the same lucrative position as LiteLLM: loaded with credentials, trusted by default, and largely unmonitored. The canary in the coal mine worked because it was always watching, even when nobody else was. It's time we do the same for our credentials.
References



