It’s been a year since the announcement of the devastating Capital One data breach, which affected over 100 million customers across the US and Canada. The hack, carried out by former Amazon Web Services engineer Paige Thompson, is one of the largest and most high-profile security incidents of 2019 and perhaps among the most notable financial breaches in recent memory. Though the breach is over a year old, the saga is far from over: Capital One faces ongoing litigation through active class action suits in both the US and Canada. Most recently, Capital One was compelled to turn over the results of its third-party forensic investigation as part of the discovery process, to help plaintiffs better understand the damages resulting from the incident. The fallout from the breach continues to put into perspective the consequences of unmitigated cloud security risks and poor visibility for organizations managing sensitive data in the cloud. The circumstances surrounding the breach are also a primer on the most common mistakes and threats facing organizations after migrating to the cloud, making it a useful case study on the basics of cloud security.
What caused the Capital One breach?
The details of the breach have been covered in depth elsewhere (see, for instance, Krebs on Security). Still, it’s important to review the key details in order to contextualize its impacts and lessons. First, the hack which resulted in the breach was carried out by former AWS engineer Paige Thompson, who last held her role in 2016. The hack did not require insider knowledge or resources, though her extensive knowledge of the AWS platform, or potential knowledge about AWS customers, might have aided her to some degree, as the hack targeted S3 buckets within Capital One’s AWS environment. The actual breach occurred in March of 2019, although it wasn’t discovered until July of 2019.

Given the sheer size and scope of the breach, initial speculation assumed that the hack was the work of a sophisticated group relying on some as-yet-unknown zero-day exploit. Instead, it was revealed that the breach leveraged a misconfigured web application firewall on Capital One’s servers. Web application firewalls (WAFs) monitor traffic between web applications and the internet to detect and prevent attacks leveraging code injections (SQL injection, cross-site scripting) or vulnerabilities involving mail servers, user authentication, and session management. In the case of the Capital One hack, an open source WAF called ModSecurity was deployed alongside an Apache web server.

Though the circumstances weren’t exactly the same as those in the massive 2017 Equifax breach, experts couldn’t help but draw attention to some striking similarities. The Capital One breach centered on a poorly implemented security control; the Equifax breach centered on unpatched infrastructure. In both cases, the breach was effectively preventable. There’s even evidence that Capital One, like Equifax, had ample warning that its environment was poorly secured.
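To make the WAF’s role concrete, here is a minimal sketch of what a ModSecurity rule looks like. This is a generic illustrative example, not Capital One’s actual configuration (which has not been published); the rule ID and message are made up, and the `@detectSQLi` operator is a standard ModSecurity libinjection check:

```
# Turn the rule engine on and block request arguments that look like SQL injection
SecRuleEngine On
SecRule ARGS "@detectSQLi" \
    "id:1001,phase:2,deny,status:403,msg:'SQL injection attempt blocked'"
```

The point of the example is that a WAF is only as good as its rules and deployment settings; a rule set that is incomplete, disabled, or misconfigured provides little real protection.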
The similarities are made even more surreal by the fact that Capital One sought legal counsel from the attorney who represented Equifax. At the crux of the Capital One breach was the aforementioned WAF and the EC2 instance it ran on. The misconfigured WAF was set up in a way that allowed requests from the internet to be relayed to internal resources, an attack known as server-side request forgery (SSRF). The EC2 instance itself had read access to the files in any of Capital One’s S3 buckets through its very broad, and likely unintended, IAM role permissions. These two critical mistakes were all that was needed to access the S3 buckets containing Capital One’s customer data.
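To illustrate the SSRF pattern in general terms: the vulnerable shape is a server that fetches a URL on behalf of a client without checking where that URL points, letting an attacker reach internal-only addresses (on AWS, SSRF attacks commonly target the EC2 instance metadata endpoint at 169.254.169.254). The sketch below, with hypothetical function names, shows one common mitigation: rejecting literal IPs in private, loopback, or link-local ranges. This is a simplified illustration, not a reconstruction of Capital One’s systems:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host is a literal IP in a private, loopback,
    or link-local range (e.g. the EC2 metadata address 169.254.169.254).

    A production check would also resolve hostnames via DNS and validate
    every resulting address the same way; this sketch only handles
    literal IPs for brevity.
    """
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP; assume a hostname and defer to DNS-based checks.
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

def fetch_for_client(url: str) -> None:
    """Server-side fetch that refuses internal targets instead of
    blindly proxying whatever URL the client supplied (the SSRF bug)."""
    if not is_safe_url(url):
        raise ValueError(f"refusing to fetch internal address: {url}")
    # ...perform the outbound HTTP request here...
```

Note that allow-listing known-good destinations is generally stronger than deny-listing internal ranges, but the deny-list version is the shortest way to show the idea.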
The three most important lessons from the Capital One breach
Keep the shared responsibility model in mind to avoid negligence
When it comes to public cloud and even SaaS platforms, all users need to keep the shared responsibility model top of mind. While cloud providers like AWS, and even SaaS platforms like Slack, give users essential uptime and security guarantees (like encryption in transit), specific aspects of cloud security remain the responsibility of platform users. Maintaining the appropriate configuration of security controls and access permissions falls squarely on users. It’s not clear how well Capital One understood this. Some have argued that AWS might be partly to blame for the breach, since both Google Cloud and Microsoft Azure had built-in protections against SSRF while AWS did not at the time. Even so, there were fundamental information security practices that Capital One failed to implement in its environment. A review of the WAF’s configuration would have determined whether or not it was a sufficient defense for Capital One’s data. Similarly, had the principle of least privilege been applied within Capital One’s AWS environment, no single EC2 instance would have had full read access to Capital One’s sensitive data. A provider’s environment may afford users more security by default and more granular controls, but no matter how well implemented these features are, it will never be the provider’s responsibility to ensure those controls and defaults are optimized for your specific security needs. As we’ve stated in our ITProPortal article, it’s the user’s job to understand how their security requirements will translate to the cloud once they’ve migrated. This includes understanding how your specific security controls fit within your environment so that you can configure them properly.
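In IAM terms, least privilege means granting a role only the actions and resources it actually needs, rather than broad wildcards. The following is a hypothetical example of a narrowly scoped policy (the bucket name is made up for illustration), in contrast to a role that can read every object in every bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-app-assets/*"
    }
  ]
}
```

A role like the one attached to the compromised EC2 instance would instead have had permissions closer to `s3:*` (or broad list/read actions) against `*`, which is exactly what turned a single misconfigured WAF into a full data breach.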
Always evaluate third party risk
While Paige Thompson no longer worked for Amazon Web Services at the time of the hack, the reason her former role came up during the investigation was because of the possibility that she leveraged insider knowledge in her attack. Although this possibility is speculative—Thompson’s execution of the SSRF wouldn’t have necessarily required extensive knowledge of AWS systems or customers—it’s not outside the bounds of possibility. Regardless of how relevant Thompson’s status as an insider is to the hack, it highlights the fact that insider risks can come from third parties, former or current.
Never leave your data unattended in the cloud
Although the Capital One breach took place between March 22 and March 23, Capital One didn’t learn about it until July, revealing that the company had limited visibility into its cloud environment. Poor visibility means being unable to monitor how robust your security configurations actually are, as well as being unable to monitor who is accessing your data. Perhaps the biggest takeaway from incidents like this breach is that more organizations need to focus on data-centric security. This means properly identifying the level of sensitivity of every bit of data within your environments and restricting access as appropriate with controls. That’s where tools like Nightfall come in. As a data discovery platform, Nightfall discovers, classifies, and protects data in the cloud with automated security policy enforcement. Alongside security best practices, Nightfall helps teams ensure that their data is safe from threats within environments like AWS. If you’re interested in learning more about Nightfall for AWS, read our Guide to Data Loss Prevention (DLP) in AWS S3.
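To give a flavor of what data classification means in practice, here is a deliberately toy sketch that flags text containing patterns shaped like US Social Security numbers or payment card numbers. This is purely illustrative and is not how Nightfall works; production detectors use far more robust, machine-learning-based techniques with validation and context:

```python
import re

# Toy patterns for illustration only. Real detectors must handle many
# formats, validate checksums (e.g. Luhn for card numbers), and weigh
# surrounding context to avoid false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data labels detected in a string."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}
```

Once data is labeled this way, access controls and alerting can be driven by sensitivity rather than by where the data happens to live.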
Nightfall is the industry’s first cloud-native data loss prevention solution. Designed to integrate seamlessly with a variety of corporate SaaS and IaaS platforms like Slack and AWS, Nightfall is a solution to data sprawl and the data visibility issues that come with rapid cloud migration. You can learn more about our platform by scheduling a demo below.