On April 30, 2026, a supply chain compromise was identified in versions 2.6.2 and 2.6.3 of the lightning PyPI package. The project’s GitHub account shows signs of compromise: issues reporting the attack were rapidly closed with suspicious responses.
This is a developing story.
On import, a daemon thread silently downloads the Bun JavaScript runtime from GitHub and executes router_runtime.js, an 11 MB heavily obfuscated payload. The malware steals tokens, credentials, environment variables, and cloud secrets; abuses the GitHub API to commit exfiltrated data to repositories using the victim’s own credentials; and infects npm package tarballs on the developer’s machine.
We analyzed both wheels. The last clean release is 2.6.1, published January 30, 2026. Full payload analysis is ongoing and this post will be updated as additional details become available.
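As a first triage step, teams can check whether an affected build is installed. A minimal sketch, using the affected versions named above (the function and variable names are ours, not from the payload or the advisory tooling):

```python
from importlib import metadata

# Affected releases per this advisory; 2.6.1 is the last clean release
COMPROMISED_VERSIONS = {"2.6.2", "2.6.3"}

def is_compromised_version(version: str) -> bool:
    """Return True if the given lightning version string is a known-bad release."""
    return version in COMPROMISED_VERSIONS

def installed_lightning_is_compromised() -> bool:
    """Check the locally installed lightning package, if any."""
    try:
        return is_compromised_version(metadata.version("lightning"))
    except metadata.PackageNotFoundError:
        return False  # lightning is not installed in this environment
```

Pinning to `lightning==2.6.1` (or earlier) in requirements files is the immediate mitigation while the compromised releases remain a risk.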
Background: What Is lightning?
The lightning package (Lightning AI, formerly PyTorch Lightning) is one of the most widely used deep learning frameworks in the Python ecosystem. It provides a high-level interface for training PyTorch models and is a common dependency in research environments, MLOps pipelines, and production AI systems, receiving hundreds of thousands of daily downloads. Environments running lightning routinely hold GPU cluster credentials, cloud IAM tokens, Hugging Face API keys, Weights & Biases tokens, and other high-value secrets tied to model training infrastructure — making it a high-value target for a credential-stealing campaign.
The Entry Point: Hidden _runtime Directory
Both compromised wheels bundle a hidden _runtime/ directory containing two files:
- _runtime/start.py — downloads the Bun JavaScript runtime binary from GitHub
- _runtime/router_runtime.js — an 11 MB heavily obfuscated JavaScript payload
The attack fires the moment the package is imported, via a daemon thread with suppressed stdout and stderr. No user action beyond installation is required. The thread runs silently in the background while the victim’s process continues normally with no errors or visible output.
The Shai-Hulud pattern: Downloading an external runtime at execution time to run a large obfuscated payload is a hallmark of the Shai-Hulud supply chain attack family, first documented in the November 2025 npm compromise that affected 780+ packages. Because the credential-stealing logic lives entirely in an obfuscated JS file fetched at runtime, static analysis of the Python wheel alone cannot reveal the payload’s capabilities.
Payload Capabilities
Static analysis of router_runtime.js reveals 703+ references to process and environment variables, 463+ references to tokens and authentication material, and 336+ references to repositories. The payload’s confirmed capabilities include:
- Credential theft: tokens, API keys, authentication material, environment variables, and cloud secrets
- GitHub API abuse: commits encoded stolen data to repositories using the victim’s own GitHub credentials
- npm tarball poisoning: injects malicious code into npm package tarballs on the developer’s machine, enabling the attack to spread to downstream npm consumers
GitHub Account Compromise
The Lightning AI GitHub repository shows indicators of account compromise coinciding with the malicious releases. Issue #21689, which surfaced the attack, was rapidly closed with suspicious responses. This pattern is consistent with the attacker holding both PyPI publishing credentials and a GitHub token — a token-based takeover rather than a direct build-system compromise.
How StepSecurity Helps
Harden-Runner detects this attack at the process layer. Attacks in this family use a Python child process to scan /proc for the GitHub Actions Runner.Worker process and dump its readable memory via /proc/{pid}/mem, extracting secrets held in the runner’s address space. Harden-Runner’s process monitor flags this unauthorized memory access as a suspicious process event.