The security landscape is shifting. For the past two years, security teams have focused primarily on what users type into chatbots by monitoring interactions with ChatGPT, Gemini, and Claude. But a new risk vector is emerging, one that operates largely outside traditional security controls: AI agents accessing corporate data autonomously through the Model Context Protocol (MCP).
Understanding the MCP Risk
AI agents need data to be useful. MCP serves as the bridge that makes this happen, allowing developer tools like VS Code, Cursor, and Claude Desktop to connect directly to corporate data sources. This dramatically improves AI capabilities—imagine your coding assistant having full context from your company's repositories.
It also creates a massive security blind spot.
The scale of this problem is growing rapidly. Today there are thousands of MCP servers, and that number could reach over 100,000 within six months.
Nightfall AI CEO & Co-Founder Rohan Sathe walks through the MCP security risk and what Nightfall is doing to prevent it in this video:
Where Legacy DLP Controls Fall Short
MCP connections operate in ways that bypass most existing security infrastructure. The protocol supports two distinct communication patterns, each with different implications for security monitoring:
Local inter-process communication happens entirely within a single machine: with MCP's stdio transport, the client launches the server as a child process and exchanges messages over its standard input and output. When an application accesses your local file system through MCP this way, no network traffic is generated. This completely bypasses network monitoring, proxies, and traditional DLP systems.
Remote MCP connections do traverse the network, but distinguishing MCP traffic from normal application traffic proves difficult for legacy systems. An AI agent could read customer PII from a database and exfiltrate it via direct connection to an external service without triggering alerts or leaving meaningful audit trails.
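To make the local case concrete, here is a minimal sketch of why stdio-transport MCP traffic is invisible to the network: the client spawns the server as a child process and speaks JSON-RPC 2.0 over its stdin/stdout pipes. The stand-in "server" below is just a child Python process that echoes one line back; the protocol version string and client name are illustrative values, not a real deployment.

```python
import json
import subprocess
import sys

def make_initialize_request(request_id=1):
    """Build a JSON-RPC 2.0 'initialize' request of the kind an MCP client sends."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # example version string
            "clientInfo": {"name": "example-client", "version": "0.1"},
            "capabilities": {},
        },
    }

# Stand-in "server": a child process that echoes one JSON line back.
# A real MCP server (e.g. a filesystem server) would answer with its
# capabilities instead.
server = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; line = sys.stdin.readline(); sys.stdout.write(line)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = make_initialize_request()
response_line, _ = server.communicate(json.dumps(request) + "\n")
response = json.loads(response_line)
print(response["method"])
```

The entire round trip happens over in-process pipes; no socket is ever opened, so there is nothing for a network proxy or DLP sensor to inspect.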
The result is what we consistently hear from CISOs and engineering teams: a discovery gap (they don't know what MCP servers exist in their environment) and trust erosion (they can't verify whether an MCP server is legitimate or malicious, or distinguish between enterprise and personal AI usage).
One CISO recently told us they only discover MCP connections through daily Slack confessions from their development team.
The Challenge of Machine-Initiated Data Movement
Up until now, data loss prevention has focused on human-initiated data movement. Security teams built policies, investigations, and governance around what people do with sensitive information. But agentic AI introduces a fundamentally different challenge: machines moving data on our behalf, continuously and autonomously.
This isn't a distant future scenario. Organizations are actively encouraging AI adoption, often with mandates from boards to maximize AI usage across teams. Developers and other employees are connecting these tools to move faster, and that adoption is happening without security oversight.
A Detection-First Approach
Effective MCP security requires visibility before enforcement. The path forward moves from detection to governance, starting with precise identification of which MCP servers are in use within your environment.
This begins with forensic capabilities that capture MCP activity without introducing friction. Security teams need to see which MCP calls are being made, which applications are involved, and which users are leveraging these connections. Critically, this monitoring must work without a man-in-the-middle proxy that could add latency or alert users that their AI tools are being monitored.
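One practical starting point for closing the discovery gap is simply inventorying the MCP servers already configured on endpoints. The sketch below parses an `mcpServers`-style config of the shape used by clients such as Claude Desktop and Cursor; the candidate file paths vary by OS and client version and are listed here as assumptions, not a definitive inventory method.

```python
import json

# Assumed locations only; actual paths vary by OS and client version.
CANDIDATE_PATHS = [
    "~/Library/Application Support/Claude/claude_desktop_config.json",  # macOS
    "~/.cursor/mcp.json",
]

def parse_mcp_config(text):
    """Return (server_name, launch_command) pairs from an mcpServers-style config."""
    config = json.loads(text)
    servers = config.get("mcpServers", {})
    return [(name, " ".join([entry.get("command", "")] + entry.get("args", [])))
            for name, entry in servers.items()]

# Sample config of the kind such a scan would encounter on a developer machine.
sample = """
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/dev"]
    }
  }
}
"""
inventory = parse_mcp_config(sample)
for name, cmd in inventory:
    print(f"{name}: {cmd}")
```

A static scan like this only surfaces declared servers, of course; it is a first pass at visibility, not a substitute for runtime monitoring of the calls those servers actually make.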
Gateway-based approaches to MCP governance have emerged, but they face inherent limitations. Gateways only protect sanctioned MCP usage, leaving rogue or malicious connections undetected. If they introduce latency or friction, developers and other employees will route around them.
Raw usage data becomes actionable through application intelligence that aggregates MCP activity into app-by-app metrics. This transforms scattered connection data into a clear picture of what's happening across the organization and who's using which tools.
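The aggregation step above can be sketched in a few lines: roll raw MCP call events up into per-application metrics such as call volume, distinct users, and servers touched. The event fields here are illustrative assumptions, not a real telemetry schema (though `tools/call` and `resources/read` are genuine MCP method names).

```python
from collections import defaultdict

# Illustrative raw events; field names are assumptions for this sketch.
events = [
    {"app": "Cursor", "user": "dev1", "server": "filesystem", "method": "tools/call"},
    {"app": "Cursor", "user": "dev2", "server": "postgres", "method": "tools/call"},
    {"app": "Claude Desktop", "user": "dev1", "server": "filesystem", "method": "resources/read"},
]

def aggregate_by_app(events):
    """Collapse raw MCP call events into app-by-app usage metrics."""
    stats = defaultdict(lambda: {"calls": 0, "users": set(), "servers": set()})
    for e in events:
        s = stats[e["app"]]
        s["calls"] += 1
        s["users"].add(e["user"])
        s["servers"].add(e["server"])
    return {app: {"calls": s["calls"],
                  "distinct_users": len(s["users"]),
                  "servers": sorted(s["servers"])}
            for app, s in stats.items()}

summary = aggregate_by_app(events)
print(summary["Cursor"])
```

Even this toy rollup turns scattered connection records into the kind of per-app picture a security team can act on: which applications are talking to which servers, and how many people are behind that traffic.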
Building for Both Human and AI Risk
The ultimate goal is comprehensive coverage for both local and remote MCP connections, regardless of who created them. This means protecting against both sanctioned enterprise tools and unsanctioned or malicious servers that appear in your environment.
Organizations need fewer blind spots, faster investigations when MCP usage raises concerns, and ultimately safer AI and SaaS adoption across the company. The security architecture must account for the reality that data movement now includes both human actions and autonomous agent behavior.
Moving Forward
MCP represents a fundamental shift in how applications access and move data. As the protocol continues to gain adoption and the number of MCP servers grows exponentially, the window for establishing security controls is narrowing.
The organizations that move quickly to gain visibility into MCP usage will be better positioned to govern it effectively. Those that wait risk discovering their exposure only after an incident, through those daily Slack confessions—or worse.
View the full session on MCP security from our recent webinar here.
Nightfall is building industry-first visibility and control capabilities for MCP and agentic AI security. To learn more about protecting your organization against this emerging vector, reach out to our team at sales@nightfall.ai, or schedule a personalized demo.