On March 24, 2026, thousands of development pipelines ran a routine vulnerability scan. What they actually executed was a credential stealer.
A criminal group called TeamPCP had spent weeks working their way through the open source software supply chain, compromising developer tools one by one and using credentials stolen from each to reach the next. Their final target was LiteLLM, a widely used Python package that routes API requests across large language model providers. The group compromised Trivy, a popular open source vulnerability scanner, stole a privileged CI/CD token from its build pipeline, and used that token to publish two backdoored versions of LiteLLM to PyPI. The malicious code deployed a three-stage payload: harvesting SSH keys, cloud credentials, Kubernetes secrets, and environment variables; moving laterally across infrastructure; and installing a persistent backdoor. Data was encrypted and exfiltrated before the packages were quarantined.
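One defense that blunts this specific publishing vector is hash-pinned dependency installation: if a lockfile records the digest of each vetted release, a freshly backdoored version fails verification even when it carries a legitimate version number. The sketch below assumes a pinned digest supplied out of band, for example from a lockfile generated by pip's hash-checking mode; the helper name is illustrative, not part of any real tool.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the downloaded package artifact matches the
    digest pinned when the dependency was originally vetted."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

In practice this check runs inside CI before installation, so a package swapped out upstream is rejected rather than executed.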
Mercor confirmed it was among the thousands of affected organizations. The $10 billion AI recruiting startup works with companies including OpenAI and Anthropic, and extortion group Lapsus$ subsequently claimed it had accessed Mercor's data, sharing samples that included Slack messages, ticketing data, and recordings of internal conversations between the company's AI systems and contractors.
The breach wasn't loud. It was methodical and silent, and it worked by exploiting the trust that development pipelines automatically extend to the tools inside them.
Why AI Infrastructure Is the New Target
LiteLLM is an AI gateway: it routes requests to over 100 large language model providers through a single interface and, by necessity, holds the API keys for all of them. As SANS Institute noted, a typical LiteLLM deployment concentrates credentials for every LLM provider an organization uses in one place. That's not a design flaw. It's how the tool works. But it also makes it exactly the kind of high-value target attackers will keep pursuing.
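The concentration is easy to see from the deployment itself: a gateway process typically holds one credential per upstream provider in its environment. This sketch enumerates them the way an attacker with read access to the process environment would; the variable names are illustrative examples of common provider conventions, not an exhaustive or official list.

```python
import re

# Illustrative provider-credential naming pattern; real deployments vary.
PROVIDER_KEY_PATTERN = re.compile(
    r"^(OPENAI|ANTHROPIC|AZURE|COHERE|MISTRAL|GEMINI)_API_KEY$"
)

def gateway_credentials(env: dict) -> list:
    """Names of provider credentials visible in a gateway's environment."""
    return sorted(k for k in env if PROVIDER_KEY_PATTERN.match(k))
```

One compromised process reading its own environment yields every name this returns, which is why the gateway is the highest-leverage place to steal from.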
The campaign followed a deliberate logic. TeamPCP started with Trivy because security scanners run with broad read access to the environments they scan, including environment variables, configuration files, and CI/CD runner memory. Compromising one hands an attacker every credential that tool was trusted to touch. From Trivy, they moved to Checkmarx KICS. From KICS, they reached LiteLLM. Each breach funded the next. Each pivot used legitimate access stolen from a trusted tool to reach a more valuable one.
This is what supply chain risk actually looks like at scale in 2026: not a single exploit, but a campaign that moves horizontally through developer infrastructure, compounding access with each step. SANS documented the attack crossing five ecosystems in six days. One stolen token became a multi-ecosystem compromise.
The Data That Gets Out Without a Signature
Across an incident affecting thousands of organizations, it's a safe assumption that most had some form of data loss prevention in place. The tools failed anyway, and not because they were misconfigured.
Among the data reportedly exposed in this campaign were Slack messages and recordings of internal conversations. Not a database of PII. Internal communications, operational context, the kind of material that has no signature to match and no label to trigger a rule. Source code with embedded credentials. Unreleased roadmaps. Proprietary research. This is the data that defines real organizational risk, and it's exactly what pattern-based detection was never designed to find.
The deeper issue is architectural. AI coding agents have direct access to internal repositories. Analysts connect AI tools to production systems. Employees route work through dozens of SaaS applications. The perimeter-based model doesn't just have gaps in this environment. It has no meaningful surface left to defend.
Three Things a Modern Security Program Has to Do
The LiteLLM breach clarifies what a DLP program actually needs to deliver in this environment. Not a compliance checkbox. Answers to three concrete questions.
Do you know where your sensitive data lives?
Not in theory. In practice, across every SaaS tool, AI application, and collaborative platform in your environment. The assets at risk in this campaign (source code, internal communications, credentials in commit history) don't carry labels. Continuous, context-aware classification is the prerequisite for everything else.
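A minimal sketch of what pattern-based scanning over repository content might look like. The signatures are simplified illustrations: the AKIA prefix for AWS access key IDs is real, the others are generic, and production classifiers layer on entropy checks and context-aware models.

```python
import re

# Simplified credential signatures; illustrative, not production-grade.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list:
    """Return (finding_type, line_number) pairs for credential-like strings."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

The point of running this continuously, rather than at commit time only, is that credentials surface in places no pre-commit hook ever sees: wikis, tickets, chat exports.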
Can you detect and respond before damage is done?
You need real-time identification of sensitive data in motion (across endpoints, SaaS, AI apps, and agentic workflows) combined with the forensic depth to make investigations conclusive: session replay, data lineage, file provenance. If your current program can alert but not explain, that's the gap.
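As a sketch of the in-motion half, the function below redacts credential-like substrings from a payload before it leaves a trusted boundary. The patterns are illustrative assumptions, not a complete rule set, and a real enforcement point would also emit the forensic record described above.

```python
import re

# Illustrative redaction rules applied at an egress point.
REDACT_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),      # AWS access key ID format
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),   # common "sk-" style API keys (illustrative)
]

def redact_outbound(payload: str, mask: str = "[REDACTED]") -> str:
    """Replace credential-like substrings before the payload leaves the host."""
    for pattern in REDACT_PATTERNS:
        payload = pattern.sub(mask, payload)
    return payload
```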
Can you govern what your AI agents access and expose?
As Dark Reading reported, once TeamPCP had valid credentials from an AI infrastructure tool, they moved directly into AWS and Azure environments, harvesting from S3 buckets, Kubernetes clusters, and Secrets Manager. The blast radius was cloud-wide. Most security teams still have no inventory of which MCP servers are running in their environment, no visibility into what those agents are accessing, and no audit trail of what's being sent where. Every tool call is a potential exfiltration event.
What This Looks Like in Practice
Closing these gaps requires capabilities that work together, not point solutions that address each problem in isolation.
Data discovery and classification has to be continuous and context-aware. The assets at risk in incidents like this one rarely carry labels legacy tools recognize. Without ongoing visibility into where sensitive data lives and how it's moving, response is always after the fact.
Detection and response needs to operate in real time across every channel data can exit, with AI-powered classification that understands context, not just signatures, and forensic depth that makes investigations conclusive.
For MCP and agentic workflows, the minimum bar is knowing what agents are running, what data they can reach, and what they're doing with it. Automatic redaction and blocking when agents handle data they shouldn't, combined with continuous monitoring for supply chain changes to trusted servers, is what enforcement actually looks like in this environment.
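That audit trail can start small. The wrapper below is a hypothetical sketch (it uses no real MCP SDK API) showing the minimum record per tool call: which tool ran, a preview of its arguments, and when.

```python
import json
import time
from typing import Any, Callable

# In-memory audit log; a real deployment would ship these records
# to durable, tamper-evident storage.
AUDIT_LOG: list = []

def audited(tool_name: str, fn: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool function so every invocation leaves an audit record."""
    def wrapper(*args, **kwargs):
        AUDIT_LOG.append({
            "tool": tool_name,
            "args_preview": json.dumps([repr(a)[:80] for a in args]),
            "ts": time.time(),
        })
        return fn(*args, **kwargs)
    return wrapper
```

In practice the record would also capture the agent's identity and the data's destination, and feed a policy engine that can block or redact before the call completes.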
Nyx brings autonomous investigation and response across all of the above, closing the gap between detection and action without requiring a fully staffed SOC to triage every alert.
The Pattern Will Continue
At RSA Conference 2026, incident responders warned publicly that widespread breach disclosures and follow-on attacks from this campaign would continue unfolding for months. That's consistent with how TeamPCP operates: stolen credentials don't expire immediately, and the group has demonstrated it knows how to convert access into leverage systematically.
The harder truth is structural. The tools developers trust with the broadest access (security scanners, AI gateways, package managers) are the tools attackers will continue targeting. Compromising one hands them everything downstream. That's not going to change as AI infrastructure becomes more deeply embedded in how organizations build and operate.
The organizations that weather this well aren't the ones that stop using AI tools. They're the ones that build security programs capable of seeing what those tools are doing, protecting the data they touch, and responding before the access compounds into a breach.
That's what modern DLP has to deliver.
Learn more about how Nightfall approaches data exfiltration prevention, MCP and AI agent security, and data detection and response. Schedule a demo to see the platform in action.


