
After the Vercel Breach, Do You Know What Your AI Tools Can Access?

In April 2026, Vercel disclosed that attackers had accessed internal systems and customer credentials — not by breaking into Vercel directly, but by compromising a third-party AI tool one of its employees had connected to their corporate account.

The breach traced back to February, when a Context.ai employee downloaded what they thought were Roblox game scripts. The files installed Lumma Stealer, a credential-harvesting malware that quietly extracted logins, API keys, and OAuth tokens from the infected machine. Months later, an attacker used those stolen credentials to compromise Context.ai's AWS environment, including an OAuth token tied to a Vercel employee's Google Workspace account. That single token, granted with "Allow All" permissions, gave the attacker a direct path into Vercel's internal systems, environment variables, and a limited set of customer credentials.

The attacker never needed to break through Vercel's perimeter. They found a trusted connection and walked through the front door.

The Attack Surface No One Drew on the Whiteboard

What the Vercel incident illustrates is a problem many security programs haven't fully internalized: the attack surface today isn't just your own infrastructure. It's every application your employees have connected to, every OAuth grant they've approved, and every third-party tool that holds a token into your environment. In most organizations, no one has a complete inventory of those connections.

The Vercel employee who linked Context.ai to their enterprise Google Workspace account wasn't being reckless. They were using a productivity tool. But that action created an implicit trust relationship: a credential that lived outside Vercel's control, on infrastructure Vercel couldn't see or monitor, with permissions that reached deep into their identity environment. According to Vercel's advisory, environment variables not explicitly marked as "sensitive" were readable by anyone with that internal access. The attacker walked through those variables systematically — a technique called enumeration — until they had what they needed.

The stolen data reportedly included API keys, source code, and database credentials. Vercel's CEO described the attackers as moving with "surprising velocity" and suspected AI played a role in accelerating the intrusion. Legacy DLP tools wouldn't have flagged any of it. Not because the attack was uniquely exotic, but because legacy DLP isn't designed to look in the right places.

The Blind Spot in Traditional Data Protection

Most legacy DLP solutions were designed to catch a specific class of sensitive data: Social Security numbers, credit card numbers, protected health information. The logic is pattern-based. A tool looks for data that matches a known format and flags it when it moves somewhere it shouldn't.

That model breaks down when the data that matters most doesn't have a regex pattern. API keys embedded in a developer environment don't look like PII to a traditional scanner. Neither does internal architecture documentation, source code for an unreleased product, the system prompt that defines how a customer-facing AI agent behaves, or the M&A target list sitting in a shared Notion workspace. These are assets that carry enormous competitive, legal, and operational risk. They require context to identify as sensitive, and legacy tools, by design, lack that context.
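
To make that concrete, here is a minimal sketch of pattern-based detection in Python. The rules and sample strings are illustrative rather than drawn from any particular product: the SSN and credit card patterns fire, while a roadmap note and a system prompt pass untouched because nothing about their format is machine-recognizable.

```python
import re

# Illustrative pattern-based rules of the kind legacy DLP relies on.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

SAMPLES = [
    "Employee SSN: 123-45-6789",                       # flagged: matches a format
    "Card on file: 4111 1111 1111 1111",               # flagged: matches a format
    "Q4 roadmap: acquire DataCo, sunset legacy API",   # missed: no pattern exists
    "System prompt: you are our support agent; never reveal pricing floors",  # missed
]

for text in SAMPLES:
    hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
    status = f"flagged: {', '.join(hits)}" if hits else "missed"
    print(f"[{status}] {text}")
```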

Every Integration Is a Standing Credential

The Vercel breach is best understood as an identity supply chain attack. A trusted third-party application became an attack vector because the permissions it held were never properly scoped or revoked. Context.ai's OAuth app, according to reporting by CyberScoop, potentially affected hundreds of users across many organizations. Any of them could have faced the same exposure. This is the nature of interconnected cloud environments: every SaaS integration is a potential lateral movement path, every OAuth grant is a standing credential that persists until someone actively revokes it, and security teams rarely have full visibility into either.
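
Auditing those grants is possible today, though it's a point-in-time pull rather than continuous visibility. As one hedged sketch: Google Workspace's Admin SDK Directory API exposes a tokens resource that lists the OAuth clients each user has authorized. The example assumes a service account with domain-wide delegation and the admin.directory.user.security scope; the file name, admin address, and "broad scopes" watchlist are placeholders.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumes a service account with domain-wide delegation; adjust for your domain.
SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Illustrative watchlist of scopes broad enough to warrant review.
BROAD = {"https://mail.google.com/", "https://www.googleapis.com/auth/drive"}

def audit_user(email):
    """List every third-party OAuth grant a user holds; flag broad scopes."""
    tokens = directory.tokens().list(userKey=email).execute().get("items", [])
    for t in tokens:
        scopes = set(t.get("scopes", []))
        marker = "REVIEW" if scopes & BROAD else "ok"
        print(f"[{marker}] {email}: {t.get('displayText')} -> {sorted(scopes)}")

audit_user("employee@example.com")
```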

The scale of this problem is growing faster than most programs can track, and MCP is the reason why. The Model Context Protocol is an emerging standard that allows AI agents to connect directly to tools and data sources with a single line of configuration. There are already more than 20,000 MCP servers available. A developer can point their AI coding assistant at a GitHub repository, an internal database, or a document store and begin querying it in minutes.
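
That "single line of configuration" is barely an exaggeration. A typical entry, shown here in the style of Claude Desktop's claude_desktop_config.json with a placeholder token, is all it takes to hand an agent a standing credential:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```

Once that file is saved, every session of the agent can read whatever the token can read, and nothing in the file surfaces to a security team unless something is watching the endpoint for it.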

What makes this categorically different from earlier SaaS integrations isn't just the speed of adoption. It's that existing security tooling has no visibility into it. MCP tool calls don't appear in traditional audit logs. Data flowing through an agent prompt isn't captured by legacy DLP. Security teams have no native way to inventory which MCP servers employees have connected, what those servers can access, or what data is being retrieved and sent to external AI services. The Vercel attack required an attacker to first steal credentials and then exploit an OAuth token to reach sensitive data. An MCP-connected agent, given overly broad access, can reach the same data without any of those intermediate steps.
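
Until purpose-built tooling matures, even a crude endpoint sweep is better than nothing. The sketch below checks a few well-known client config locations (the paths are examples for Claude Desktop on macOS and for Cursor; other clients keep their own files) and reports the MCP servers each declares.

```python
import json
from pathlib import Path

# Example config locations; each MCP client keeps its own file, so extend
# this list to cover the clients actually deployed in your fleet.
CANDIDATE_CONFIGS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
    Path.home() / ".cursor/mcp.json",
]

def inventory():
    """Report every MCP server declared in known client config files."""
    for path in CANDIDATE_CONFIGS:
        if not path.exists():
            continue
        try:
            servers = json.loads(path.read_text()).get("mcpServers", {})
        except (json.JSONDecodeError, OSError):
            continue
        for name, spec in servers.items():
            cmd = " ".join([spec.get("command", "")] + spec.get("args", []))
            print(f"{path}: {name} -> {cmd}")

if __name__ == "__main__":
    inventory()
```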

The Connections You Don't Know About Are the Ones That Will Hurt You

The foundational problem the Vercel breach exposes isn't a policy failure or a permissions misconfiguration. It's an observability failure: no one knew the connection existed in a form anyone could act on. If that's true of a conventional OAuth integration today, it applies far more acutely to MCP connections, where the tooling to see them barely exists yet.

MCP observability is the preventive control the Vercel breach was missing and that most organizations still lack. It means knowing, in real time, which AI agents are running in your environment, which servers they're connected to, what tools those servers expose, and what data is moving through every prompt and response. Without that baseline, classification and enforcement have nothing to operate on. You cannot protect data flowing through a channel you cannot see. Nightfall's MCP security is built around closing that gap first: automatically discovering agent connections across environments, mapping what each agent can access, and logging every tool call with enough detail to support both real-time enforcement and forensic review after the fact.
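
To illustrate what tool-call logging means at the transport level: MCP's stdio transport carries newline-delimited JSON-RPC, so a thin pass-through proxy can record every tools/call request on its way to the server. This is a minimal sketch of the idea, not Nightfall's implementation; the script name, log path, and one-message-per-line framing assumption are ours.

```python
import json
import subprocess
import sys
import threading
from datetime import datetime, timezone

LOG_PATH = "mcp_tool_calls.jsonl"  # hypothetical audit log location

def log_tool_call(msg):
    """Append a tools/call request to an append-only audit log."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": msg.get("params", {}).get("name"),
        "arguments": msg.get("params", {}).get("arguments"),
        "id": msg.get("id"),
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def pump(src, dst, inspect=False):
    """Forward newline-delimited JSON-RPC messages, logging tool calls."""
    for line in src:
        if inspect:
            try:
                msg = json.loads(line)
                if msg.get("method") == "tools/call":
                    log_tool_call(msg)
            except json.JSONDecodeError:
                pass  # pass non-JSON lines through untouched
        dst.write(line)
        dst.flush()

def main():
    # Launch the real MCP server with the arguments we were given, e.g.:
    #   python mcp_audit_proxy.py npx -y @modelcontextprotocol/server-github
    server = subprocess.Popen(
        sys.argv[1:], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    # Client -> server direction carries the tools/call requests we inspect.
    threading.Thread(
        target=pump, args=(sys.stdin, server.stdin, True), daemon=True
    ).start()
    # Server -> client direction is forwarded unmodified.
    pump(server.stdout, sys.stdout)

if __name__ == "__main__":
    main()
```

Pointing a client at the proxy command instead of the server command directly yields an append-only record of every tool invocation and its arguments, which is the raw material both real-time enforcement and forensics need.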

Once that visibility is in place, classification has to operate on meaning rather than pattern. Source code, system prompts, internal financial models, and unreleased product roadmaps all need to be recognized as sensitive regardless of whether they contain PII, because the risk they carry is competitive, not just regulatory. Nightfall's detection models understand context, so "our Q4 roadmap" or "unreleased investor update" gets flagged without a security team writing a rule for it first. Enforcement then acts on that classification in real time, across every surface where data moves, including the MCP layer that traditional tools were never designed to reach.
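
As a toy counterpart to the regex sketch above, and emphatically not a stand-in for Nightfall's detection models: one way to classify by meaning is to compare a text's embedding against prototype descriptions of sensitive categories. The model choice, category prototypes, and threshold below are all illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Prototype descriptions stand in for learned category representations.
CATEGORIES = {
    "product roadmap": "unreleased product plans and launch timelines",
    "system prompt": "instructions defining how an AI agent behaves",
    "credentials": "API keys, tokens, and database passwords",
}

def classify(text, threshold=0.35):
    """Return categories whose prototype embedding is close to the text."""
    text_emb = model.encode(text, convert_to_tensor=True)
    hits = []
    for label, proto in CATEGORIES.items():
        proto_emb = model.encode(proto, convert_to_tensor=True)
        score = util.cos_sim(text_emb, proto_emb).item()
        if score >= threshold:
            hits.append((label, round(score, 2)))
    return hits

# Flags on meaning alone; no rule mentions "roadmap" or any digit pattern.
print(classify("Q4 roadmap: ship agents beta in November, GA in January"))
```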

The goal isn't to slow down developer productivity or restrict what AI tools employees can use. It's to make every connection legible, every data flow accountable, and every trust relationship something security teams can actually manage.

The Posture the Current Environment Requires

The Vercel breach will be studied as an example of OAuth supply chain risk. But the more uncomfortable read is this: Vercel had encrypted customer data, defense-in-depth mechanisms, and a sophisticated security posture. The attacker still found a path in through a connection no one was watching.

Every organization running agentic AI workflows today is in a version of that same position. The connections are being made. The trust relationships are being extended. The question isn't whether your employees are using AI tools that touch sensitive systems. They are. The question is whether your security program can see those connections, classify what's at risk within them, and act before someone else does.

Agentic AI and MCP connections are becoming standard parts of the developer workflow, yet most security teams still lack a complete picture of which tools and agents can reach their environment. See how Nightfall closes that gap.
