For many developers building AI apps, LiteLLM is one of those dependencies that simply works in the background. Install it once, and your code can talk to models from OpenAI, Anthropic, Google, and dozens of other providers through one interface.
That convenience, though, is exactly why the latest security incident feels unsettling. Two recent versions of the popular Python library were pushed to PyPI containing malicious code designed to steal sensitive data from developers’ machines. Because LiteLLM has more than 40,000 stars on GitHub and sits inside a large number of AI development stacks, the blast radius could be wide.
The compromised versions, 1.82.7 and 1.82.8, included a hidden .pth file named litellm_init.pth. Python processes any .pth file in site-packages automatically at interpreter startup, and any line in such a file that begins with an import statement is executed as code. That means the malware could activate even if a developer never directly imported the library.
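The autorun mechanism is easy to demonstrate without going anywhere near the malware itself. The sketch below writes a harmless .pth file into a temporary directory and asks Python's site machinery to process it, the same step that runs against site-packages at every interpreter start (the file name and environment variable here are invented for the demo):

```python
import os
import site
import tempfile

# Demo of the .pth autorun mechanism. Lines in a .pth file that start
# with "import" are executed by Python's site machinery when the
# directory is processed, which happens for site-packages at startup.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    f.write('import os; os.environ["PTH_DEMO_RAN"] = "1"\n')

site.addsitedir(demo_dir)  # the same processing startup applies to site-packages
print(os.environ.get("PTH_DEMO_RAN"))  # → 1
```

Because the import line runs during interpreter startup, no `import litellm` statement is ever needed for a payload hidden this way to execute.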
In other words, simply having the package installed could trigger the attack.

The issue surfaced almost by accident. Researchers at FutureSearch noticed something strange while testing a plugin inside Cursor: the machine ran out of memory after a runaway process forked itself exponentially, a classic sign that something deeper had gone wrong.
Once the package was inspected, the scope became clearer. The malicious code behaves like a multi-stage credential stealer. First, it scans the system for anything valuable: SSH keys, cloud credentials, API tokens, database passwords, and configuration files. It even checks for cryptocurrency wallet directories and shell history files.
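The exact target list has not been published, but the categories named in the report map onto well-known locations, which makes it possible to see what a stealer on a given machine would have found. The paths below are standard defaults for those categories, not confirmed indicators from the malware:

```python
import glob
import os

# Standard locations for the categories the report describes (SSH keys,
# cloud credentials, shell history, wallet directories). These are
# illustrative defaults, not a confirmed list from the malware.
CANDIDATE_PATTERNS = [
    "~/.ssh/id_*",             # SSH key pairs
    "~/.aws/credentials",      # AWS CLI credentials
    "~/.config/gcloud/*",      # gcloud config and tokens
    "~/.bash_history",
    "~/.zsh_history",
    "~/.bitcoin/wallet.dat",   # one common wallet location
]

exposed = []
for pattern in CANDIDATE_PATTERNS:
    exposed.extend(glob.glob(os.path.expanduser(pattern)))

print(f"{len(exposed)} sensitive file(s) present on this machine")
```

Anything this sweep turns up on an affected machine should be treated as potentially stolen.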
Developers working in cloud environments face an additional risk. The malware also queries cloud metadata services, meaning it could extract temporary credentials from platforms such as Amazon Web Services, Google Cloud, or Microsoft Azure if the machine runs inside those environments.
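The metadata endpoint needs no credentials to query; any process on the instance can reach it over a link-local address. A minimal sketch, assuming the standard AWS path (GCP and Azure use the same address with different paths and required headers):

```python
import urllib.request

# 169.254.169.254 is the standard link-local metadata address; the path
# below is AWS's credentials endpoint.
METADATA_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch_instance_role_names(timeout: float = 1.0):
    """Return the metadata response body, or None when not on a cloud VM."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
            return resp.read().decode()
    except OSError:
        return None  # connection refused / no route / timeout off-cloud

print(fetch_instance_role_names())
```

On a cloud VM, following that path one level deeper returns temporary keys for the instance's role, which is exactly why metadata access from a compromised process is so valuable to an attacker.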
Once collected, the data is encrypted using RSA-4096 and AES-256, packed into an archive, and sent to a remote server. Researchers say the malware also attempts to move further inside systems by creating privileged pods in Kubernetes clusters and installing a persistent backdoor.
A short message left in a compromised repository, “teampcp owns BerriAI,” raised suspicions about TeamPCP, a threat actor linked to recent supply-chain attacks targeting tools such as Trivy and KICS. Attribution remains uncertain, though the techniques look strikingly similar.
The episode highlights a growing risk in modern software development. Many developers depend on open-source libraries pulled automatically through package managers. When one of those packages gets compromised, the malware travels the same path as legitimate updates.
For teams using LiteLLM, the first step is checking installed versions immediately. Anyone running 1.82.7 or 1.82.8 should remove the package, clear their Python package cache, and assume credentials on that system may be exposed.
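The version check is easy to script across many machines or CI environments. A minimal sketch using only the standard library (the version set below is the pair named in the report; extend it if advisories add more):

```python
from importlib.metadata import PackageNotFoundError, version

# The two versions named in the report.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_is_compromised() -> bool:
    """True if an affected LiteLLM build is installed in this environment."""
    try:
        return version("litellm") in COMPROMISED
    except PackageNotFoundError:
        return False  # litellm is not installed here

print(litellm_is_compromised())
```

A positive result means following the cleanup steps above, including clearing the local pip cache (`pip cache purge`) so the bad wheel cannot be reinstalled from disk.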
Security experts also recommend rotating SSH keys, API tokens, cloud credentials, and database passwords tied to affected machines, and scanning those systems for unfamiliar files such as ~/.config/sysmon/sysmon.py, which can indicate a persistence mechanism left behind.
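Checking for the flagged persistence artifact takes a few lines. The sketch below deliberately checks only the single path named by the researchers; extend the list as advisories publish further indicators:

```python
from pathlib import Path

# Only ~/.config/sysmon/sysmon.py has been named so far; add any
# further published indicators to this list.
SUSPECT_PATHS = [Path.home() / ".config" / "sysmon" / "sysmon.py"]

found = [p for p in SUSPECT_PATHS if p.exists()]
print(f"{len(found)} suspect file(s) present")
```

A hit does not prove compromise on its own, but on a machine that ran an affected version it is a strong signal to rebuild rather than clean.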
The bigger takeaway goes beyond LiteLLM. The AI tooling ecosystem is expanding quickly, with thousands of small libraries connecting models, APIs, and development tools. Each dependency saves time for developers. Each one also expands the potential attack surface. And supply-chain attacks are increasingly targeting that exact layer.
