AI Primer

LiteLLM 1.82.8 ships malicious .pth credential stealer on PyPI

Compromised LiteLLM 1.82.7 and 1.82.8 wheels executed a malicious .pth file at install time to exfiltrate credentials, and PyPI quarantined the releases. Treat fresh-package installs and AI infra dependencies as supply-chain risk, and check startup hooks on affected systems.


TL;DR

  • LiteLLM versions 1.82.7 and 1.82.8 on PyPI were reported as compromised releases; the core issue report says PyPI quarantined them after a malicious wheel with a credential-stealing startup hook was found (issue report).
  • The payload was not just a bad import path: Hacker News discussion around the incident says the exploit lived in a Python .pth file, which can execute arbitrary code at install time or interpreter startup (HN discussion).
  • The affected wheel reportedly targeted high-value secrets, including environment variables, SSH keys, and cloud credentials, which makes this more than a broken package update for teams using LiteLLM as an AI gateway or proxy layer (issue report, HN thread).
  • A separate incident writeup says AI-assisted analysis helped identify the poisoned package, trace the .pth behavior, and reach public disclosure in 72 minutes, but follow-on discussion also flagged that models can still hallucinate about low-level execution details (response transcript, HN reactions).

What was compromised and how did it run?

Hacker News: "[Security]: CRITICAL: Malicious litellm_init.pth in litellm 1.82.8 — credential stealer" (932 points, 495 comments)

The package report describes a malicious litellm_init.pth file inside the LiteLLM 1.82.8 wheel, sized at 34,628 bytes, and says it could steal "environment variables, SSH keys, cloud credentials" and other secrets when the package was installed via pip (issue report). The same report says both 1.82.7 and 1.82.8 were affected and that the releases were quarantined on PyPI (issue report).

Hacker News: "Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised" (932 points, 495 comments)

The important mechanic is the .pth execution path. In the Hacker News thread, one commenter explained that "Python allows arbitrary code to run" from a .pth file, and that malware can drop code like import sys;exec(...) there (HN discussion). That matters because the compromise was not framed as a normal runtime bug inside LiteLLM itself; it was a supply-chain attack embedded in Python packaging behavior, firing before an app even starts serving traffic.
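The mechanic is easy to demonstrate harmlessly. site.py treats any .pth line beginning with "import" as code to exec, and a semicolon lets arbitrary statements ride along on that line. This sketch writes such a file into a temp directory and processes it with site.addsitedir(), the same routine the interpreter runs over site-packages at startup:

```python
import os
import site
import tempfile

# Benign demo: site.py exec()s any .pth line that begins with "import",
# and a semicolon lets arbitrary code ride along on that line.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo.pth"), "w") as f:
    f.write('import os; os.environ["PTH_EXECUTED"] = "yes"\n')

# addsitedir() applies the same .pth processing the interpreter applies
# to site-packages at startup, so the line above runs immediately.
site.addsitedir(demo_dir)
print(os.environ["PTH_EXECUTED"])  # prints: yes
```

Swap the environment-variable write for an exfiltration routine and you have the reported attack shape: code that runs on the next interpreter start after pip install, with no import of the package required.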

That raises the blast radius for AI infra teams. The original HN post explicitly called out how this kind of poisoned dependency can propagate through "proxy packages and agent frameworks" that sit deep in production pipelines (HN thread).

What does this change for AI tooling and incident response?

Hacker News: "My minute-by-minute response to the LiteLLM malware attack" (431 points, 157 comments)

A separate postmortem-style transcript turns the incident into a concrete case study for AI-assisted security response. The writeup says the poisoned upload landed at 10:52 UTC and public disclosure followed at 12:04 UTC, a 72-minute window that included investigating a fork bomb with 11k processes, analyzing the malicious .pth hook, and notifying both PyPI and LiteLLM support (response transcript).

Hacker News: follow-up discussion on "My minute-by-minute response to the LiteLLM malware attack" (431 points, 157 comments)

The follow-on discussion is useful because it is less about the malware payload than about operational trust. Commenters asked whether PyPI "digital attestation" or trusted publishing would have changed the outcome, and one argued that once an LLM is in the loop it is "effectively acting as an operator" influencing what gets run and trusted (HN reactions). Another commenter said Claude Code "repeatedly made the incorrect assertion" that base64 armoring was normal, a reminder that AI can speed triage while still being unreliable on low-level mechanics (HN reactions).

The most concrete mitigation suggestion in the package thread was to slow down adoption of brand-new releases. A commenter noted that "npm/bun/pnpm/uv now all support" minimum package release ages, including an exclude-newer = "7 days"-style control in uv (fresh mitigations). In this incident, that kind of gate maps directly to the failure mode: a freshly published poisoned wheel in a widely used AI dependency.
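As a hedged sketch of what that gate looks like in uv: the documented exclude-newer setting takes a fixed RFC 3339 cutoff rather than a rolling window, so a literal "7 days" policy would need the timestamp recomputed (for example, by CI) on each resolve. The date below is purely illustrative.

```toml
# pyproject.toml — sketch, not a drop-in config. uv's exclude-newer
# takes a fixed RFC 3339 timestamp; a rolling "7 days" window must be
# recomputed externally (e.g. in CI) rather than expressed directly.
[tool.uv]
exclude-newer = "2026-01-01T00:00:00Z"  # ignore releases published after this
```

The effect is the same quarantine-by-delay the commenter described: a wheel uploaded minutes ago, malicious or not, is simply invisible to the resolver until it has aged past the cutoff.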
