A discussion with Rohan Sathe, Co-founder of Nightfall AI, and Unni Patel, Senior Product Manager at Vanta
How should data security teams walk the fine line between enabling AI innovation, safeguarding sensitive data, and ensuring compliance? That question drives everything we build at Nightfall. It’s also an excellent jumping-off point for an in-depth discussion among security experts.
Nightfall Co-founder Rohan Sathe sat down with Vanta Senior Product Manager Unni Patel to examine the complexities of AI risk, the importance of robust data loss prevention (DLP) strategies, and how a proactive approach to security and compliance can accelerate safe AI adoption.
Understanding the AI Risk Landscape
Rohan Sathe: We're seeing that the excitement around AI is palpable, but so are the risks. Take the Samsung incident, where an engineer pasted proprietary code into ChatGPT and that code ended up in OpenAI's training data. That's a textbook DLP failure.
We've also seen cases like the hospital researcher who uploaded patient X-rays to an online AI model, violating HIPAA. That one ended up costing the hospital $750,000 in legal settlements.
Just recently, xAI had a leak in which a developer accidentally published an API key for SpaceX and Tesla's internal AI models on GitHub, leaving sensitive systems exposed for nearly two months.
These incidents show how easily confidential data can leak out of internal systems through AI tools when modern DLP controls aren't in place.
Unni Patel: I think one of the biggest pain points that we see at Vanta is regulatory uncertainty. Companies are trying to make sense of evolving rules like the EU AI Act or ISO 42001 while trying to comply with existing frameworks like GDPR and HIPAA.
At the same time, AI is becoming embedded in everything, and models can memorize whatever data they're given. You can imagine how easily a single customer record or a snippet of code could be exposed and cause a security incident.
Combining these two makes it really tough to move fast with confidence. On top of that, there’s a trust factor. Security leaders are asking themselves, “How do I know the AI's output is accurate, unbiased, and safe to use?” And what if something does go wrong, like the data leaks you mentioned earlier? This uncertainty keeps a lot of CISOs up at night. Many are approaching AI adoption with caution, which can slow the pace of innovation within their organizations.
Building Guardrails: The Role of DLP and Governance
Rohan: The good news is you don't have to ban AI outright or miss out on its benefits. With DLP solutions designed for AI and clear governance and continuous monitoring, organizations can tackle these risks head on.
Once you have these policies in place, technology can operationalize them. That's where a DLP solution becomes crucial. Policies alone are great, but mistakes happen. DLP solutions like Nightfall AI can detect sensitive data, like personally identifiable information (PII) or API keys, in real time.
For example, if somebody tries to input a credit card number into ChatGPT, our systems can block and redact it. This approach protects the organization from accidental leaks.
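To make that concrete, here is a deliberately minimal sketch of the kind of check a DLP tool performs before a prompt leaves the organization's boundary. It is illustrative only, not Nightfall's actual detection logic (which relies on ML-based classifiers rather than a single regex): a pattern match finds candidate card numbers, a Luhn checksum filters out random digit strings, and matches are redacted in place.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: weeds out random digit strings that aren't card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens, ending on a digit.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def redact_card_numbers(prompt: str) -> str:
    """Replace likely credit card numbers with a redaction marker."""
    def _redact(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        # Only redact values that pass the checksum; leave other numbers alone.
        return "[REDACTED-CC]" if luhn_valid(digits) else match.group()
    return CARD_PATTERN.sub(_redact, prompt)

print(redact_card_numbers("Charge card 4111 1111 1111 1111 for this order"))
# → Charge card [REDACTED-CC] for this order
```

In a real deployment this check would sit inline, between the user and the AI service, so the redaction happens before the data ever reaches the model.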
Unni: A safe path that we recommend at Vanta is to build guardrails early so you can understand and monitor what's going in and coming out. Not just for regulatory changes, but also when you're shipping new features.
One thing we recommend is to start simple with policies that are written in plain English. Under ISO 42001, your AI use policy could take the form of an AI management plan, which defines your AI purpose, your data boundaries, and your stakeholder roles.
Keep it concise, add some FAQs, and attach it to new-hire onboarding or quarterly security trainings. That helps your organization stay aware of not only the policy itself but also the expectations around using AI vendors and tools.
The Synergy of Data Protection and Compliance
Rohan: I want to dive deeper into how data protection and compliance requirements often complement each other. Often organizations treat them as separate streams. You have security on one side and compliance on the other, but they should work in tandem.
From my perspective, it's essential to see compliance not as a check box, but as a continuous process. For example, if you're monitoring data exfiltration with a DLP solution, you'd want to feed those logs into a compliance platform to provide strong evidence of control effectiveness. This is going to simplify your audits, and it reassures your regulators or customers that you're actively managing any AI related risk.
Unni: Compliance frameworks,like the ones we mentioned above, outline requirements on how you should handle your data, maintain access controls, and manage risk. A DLP solution helps you practically enforce those rules in real time.
Your compliance framework provides a blueprint: for example, you might outline the processes and best practices needed to enforce policies that meet the company’s needs. A platform like Nightfall ties those things together. That becomes especially important with AI data flows, which tend to be complex and sensitive. Data can be entered into prompts all too easily, so catching it early and being proactive is key.
Operationalizing AI Security: Practical Steps
Rohan: Nightfall is an AI-native DLP platform that integrates with popular SaaS apps and provides data loss prevention capabilities on endpoints. Recently, we've extended that to generative AI interfaces.
Our approach is to automatically scan, classify, and protect data across channels. If you're worried about employees leaking credentials or PII or corporate IP when using generative AI, Nightfall has you covered. We can help track where that data originates from and its ultimate destination through a concept called data lineage.
Another important best practice is what we call data minimization. If you're building internal AI models, don't feed them sensitive data unless it's absolutely necessary. Techniques like anonymization or tokenization or synthetic data can help maintain data privacy.
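As a hedged sketch of what tokenization might look like in practice (the class, field names, and record are illustrative, not a specific product API), sensitive values can be swapped for opaque tokens before records reach a model, with the mapping kept in a separate, access-controlled store:

```python
import uuid

class TokenVault:
    """Illustrative token vault: maps sensitive values to opaque tokens so
    training data never contains the raw values. In production this mapping
    would live in a secured, access-controlled store, not in memory."""

    def __init__(self) -> None:
        self._forward: dict[str, str] = {}
        self._reverse: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        # Reuse the same token for repeated values so joins still work.
        if value not in self._forward:
            token = f"tok_{uuid.uuid4().hex[:12]}"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

# Hypothetical record with one sensitive field (names are examples only).
vault = TokenVault()
record = {"email": "jane@example.com", "note": "renewal due in March"}
safe_record = {
    "email": vault.tokenize(record["email"]),  # opaque token replaces the email
    "note": record["note"],                    # non-sensitive field passes through
}
# safe_record can now be fed to a model; only the vault can reverse the mapping.
```

The design choice here is that the model pipeline and the vault have separate trust boundaries: even if training data leaks, the tokens carry no sensitive information on their own.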
Unni: On the compliance side, Vanta can automate the collection of security evidence to help you build that policy. Combining this with a DLP solution like Nightfall can help you stay compliant.
We also encourage organizations to run risk assessments when they are introducing a new AI tool or process. That way you're providing early documentation of your data flows: who has access to what and whether that vendor meets your security and compliance standards.
You definitely don't have to start from scratch. Use the frameworks that you already have and can trust. We talk to a lot of teams who are trying to figure out how to show they're being responsible with AI even before they're fully certified.
That's why we built our lightweight AI security assessment. It's a simple way to evaluate your internal practices and vendors so you can get a clear view of your current AI posture while you're working towards those bigger compliance standards.
Enabling AI Adoption Securely
Rohan: We're all feeling pressure from our boards to adopt AI as quickly as we can. We see it from our customers, too, when they ask how we can help them enable safe AI usage, and how we use AI internally at Nightfall to build our product while making sure all our data is safeguarded.
I think when leadership sees that you've accounted for data protection and regulatory requirements, they're more confident in rolling out AI solutions at scale. That's how we accelerate adoption, rather than stifle it, which is the ultimate goal.
Unni: Let's not forget that internal executive buy-in gets easier when you can show the investments that make AI adoption safer. Board members and stakeholders are eager to see AI-driven initiatives and innovation unlock a competitive advantage. So when security and compliance teams position themselves as enablers, paving the way for safe, responsible AI use, it becomes much easier for internal teams to get support for the tools and processes they need, ultimately reducing risk and helping the business move forward.
Innovate with Confidence
The journey to secure AI adoption requires a multi-faceted approach that combines robust technology, clear governance, and a commitment to continuous compliance. By understanding the risks, implementing effective DLP solutions, and fostering a culture of security awareness, organizations can confidently harness the power of AI while protecting their most valuable asset: their data.
We hope this discussion has inspired you to accelerate AI adoption by building a robust security and compliance framework. Talk to us about how we can help you unlock the full potential of AI without the fear of regulatory or reputational risk.