In the latest major cloud security foul-up, Capital One suffered a data breach that affected 100 million people in the United States and 6 million in Canada. And it wasn’t just Capital One caught with its security pants down. We now know hacker Paige A. Thompson stole terabytes of data from more than thirty other companies, educational institutions, and other entities.
Just another day of companies fouling up their security? No, this one’s different. First, we already know a lot about what happened. We know Capital One relies heavily on Amazon Web Services (AWS), and the attack was made on data kept in Amazon Simple Storage Service (S3). But instead of an attack on an unsecured S3 bucket, this one worked thanks to a firewall configuration blunder.
In short, these breaches weren’t the result of criminally stupid security mistakes. They appear to have happened because companies simply did a poor job of maintaining their security.
Let’s look closer. The misconfiguration of Capital One’s ModSecurity Web Application Firewall (WAF) enabled the attacker, a former AWS employee, to trick the firewall into relaying requests to a key AWS back-end resource. That technique, a Server Side Request Forgery (SSRF) attack, is what let her in.
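Public write-ups of the breach, including Krebs’s, describe that back-end resource as the EC2 instance metadata service, which hands out the temporary credentials attached to an instance’s IAM role. Here is a minimal, illustrative Python sketch of why that matters: the same two requests an administrator might make from inside an instance are exactly what an SSRF coaxes a misconfigured proxy or WAF into making on an outsider’s behalf.

```python
# Illustrative sketch: what the EC2 instance metadata service returns when asked
# for IAM role credentials. Run from inside an instance, this is a routine query;
# the SSRF problem is that a misconfigured proxy or WAF can be tricked into making
# the same request for an outsider and relaying the answer back out.
import json
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def role_credentials():
    # First request lists the IAM role name attached to the instance.
    role = urllib.request.urlopen(METADATA, timeout=2).read().decode().strip()
    # Second request returns the temporary AccessKeyId, SecretAccessKey, and Token.
    creds = urllib.request.urlopen(METADATA + role, timeout=2).read().decode()
    return json.loads(creds)

if __name__ == "__main__":
    print(role_credentials())  # anyone holding these can act as the instance's role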
We’ll be seeing a lot more of this kind of attack. As Evan Johnson, a Cloudflare product security team manager, wrote, “The problem is common and well-known, but hard to prevent and does not have any mitigations built in to the AWS platform.”
So, clearly some of the blame can be laid at AWS’s door. But, as the alleged attacker herself said of AWS configurations, “Dude, so many people are doing it wrong.” She said this after virtually trying the doors of hundreds of companies to find the ones that were unlocked. Maybe Gartner was on to something when it predicted, “95% of cloud security failures will be the customer’s fault.”
Some people, such as Sen. Ron Wyden, however, are putting much of the onus for this breach on AWS. Not so fast.
Yes, AWS has some explaining to do, but the real problem is that if you have poor security practices, you will get burned. The bigger the cloud, the bigger the burn.
This breach was not, as security maven Brian Krebs pointed out, caused by a “previously unknown ‘zero-day’ flaw, or an ‘insider’ attack,” but by well-known attacks exploiting well-known mistakes.
But who’s really at fault in this set of security disasters: the cloud provider or the company that uses the cloud? The answer is both of them.
Customers and cloud providers are each in charge of different parts of the cloud stack. This concept is called the Shared Responsibility Model (SRM). A quick way of thinking about this model is that cloud providers are responsible for the security of the cloud, and cloud users are responsible for security in the cloud.
Both AWS and Microsoft Azure explicitly endorse this model. But, all public clouds use it to one extent or another. It’s the foundation for both the technological and contractual ways we currently deal with cloud security.
At the most basic level, it means you’re in charge of everything above the hypervisor level. That includes the guest operating system, your application software, the cloud instance’s firewall, and encrypting data both in transit and at rest. The cloud provider takes care of the host operating system, the virtualization layer, and its facilities’ physical security.
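In practice, the customer’s side of “encrypting data at rest” often comes down to a single parameter you either set or forget. A minimal, hypothetical boto3 sketch (the bucket name and file are made up, and the SDK needs valid credentials configured):

```python
# Minimal sketch of the customer's half of "encryption at rest": asking S3 to
# encrypt an object on upload. Bucket name and file are hypothetical.
import boto3

s3 = boto3.client("s3")
with open("q3-report.csv", "rb") as data:
    s3.put_object(
        Bucket="example-customer-bucket",   # hypothetical bucket
        Key="reports/q3-report.csv",
        Body=data,
        ServerSideEncryption="aws:kms",     # server-side encryption at rest via KMS
    )
```

Encryption in transit largely comes for free because the SDK talks to the S3 endpoint over HTTPS by default; at rest, it’s on you to ask for it.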
Of course, in the real world, it’s never that simple.
AWS states, “Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall.”
It’s that last bit where things went awry for Capital One. Yes, it appears they hadn’t set up their firewall correctly. But AWS makes it easy for an SSRF attack against a misconfigured firewall to harvest the AWS Identity and Access Management (IAM) role’s temporary credentials, and armed with those temporary credentials, it’s relatively easy to get at whatever data the role can reach.
Johnson claims there are several ways to blunt the use of temporary credentials. Netflix has also shown you can spot temporary security credential use in your AWS clouds. So, yes, AWS can do a better job of locking down its platform, but, again, it was Capital One’s job to set up the firewall correctly in the first place. In short, it’s all rather messy.
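None of this requires exotic tooling, either. Here’s a rough, illustrative boto3 sketch of the kind of routine self-audit that sits squarely on the customer’s side of the line: flag any AWS security group rule that leaves a port open to the whole internet. It assumes default credentials and region are configured, and real policies need more nuance, since some public ingress is intentional.

```python
# Rough sketch of a customer-side audit: flag security group rules open to 0.0.0.0/0.
# Illustrative only; pagination is omitted and some public ingress is deliberate.
import boto3

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} ({sg['GroupName']}): "
                      f"port {rule.get('FromPort', 'all')} open to the internet")
```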
That’s not surprising. As KirkpatrickPrice points out, think of cloud “security requirements as a spectrum. Cloud service customers add together all of the regulatory, industry, and business requirements (GDPR, PCI DSS, contracts, etc.) that apply to their organization and the sum equals all of that organization’s specific security requirements. These security requirements will help ensure that data is confidential, has integrity, and is available.
“On one end of the security requirement spectrum is cloud service providers and on the other is cloud service customers. The provider is responsible for some of these security requirements, and the customer is responsible for the rest, but some should be met by both parties. Cloud service providers and cloud service customers both have an obligation to protect data.”
But where do you draw the line between who’s in charge of what? That’s not easy either. There is no one-size-fits-all cloud security solution. For example, if you use a Software-as-a-Service (SaaS) office suite, such as Google’s G Suite, clearly Google, and not you, is in charge of the software. If you’re running your own application on a Platform-as-a-Service (PaaS), you, however, take both the credit and the blame for how that program runs.
If you look closely, you’ll see AWS has three different SRMs. These are infrastructure services, container services, and abstracted services. Azure and other public cloud services have similar security policy setups.
Infrastructure services include compute services such as EC2 and supporting services like Elastic Block Store (EBS), Auto Scaling, and Virtual Private Cloud (VPC). With this model, you install and configure your operating systems and platforms in the AWS cloud just as you would on premises or in your own data center. On top of this you install your applications. Ultimately, your data resides in and is managed by your own applications.
Despite the name, container services in this context have little to do with Docker and similar technologies that spring to mind when you think of containers. Instead, these are services that typically run on separate Amazon EC2 or other infrastructure instances, but where you sometimes don’t manage the operating system or the platform layer.
AWS provides managed services, but you’re responsible for setting up and managing network controls, such as firewall rules, and for managing platform-level identity and access management separately from IAM. Examples of container services include Amazon Relational Database Service (Amazon RDS), Amazon Elastic MapReduce (Amazon EMR), and AWS Elastic Beanstalk.
Here, AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. For example, with Amazon RDS for Oracle, AWS manages all the layers of the container, up to and including the Oracle database platform. But while the AWS platform provides data backup and recovery tools, it’s your job to take care of your own business continuity and disaster recovery policy. You’re also responsible for the data and for firewall rules. So while Amazon RDS provides the firewall software, it’s your job to manage the firewall.
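What “managing the firewall” around an RDS instance usually boils down to is a security group rule that admits only your application tier, never the open internet. A hypothetical boto3 sketch (the group IDs and port below are made up):

```python
# Hypothetical sketch: let only the application tier's security group reach the
# database's security group on the MySQL port. The group IDs are made up.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000example",              # security group on the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0app000000example"}  # the app servers' security group
        ],
    }],
)
```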
Abstracted services are high-level storage, database, and messaging services. They include Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and Amazon Simple Email Service. These services abstract the platform or management layer on which you build and operate cloud applications. You access them using AWS APIs, and AWS manages the underlying service components or the operating system on which they reside.
Here, your security job is to manage your data by using IAM tools to apply Access-Control List (ACL) style permissions to individual resources at the platform level, or user identity or user responsibility permissions at the IAM user/group level.
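With S3 specifically, the most consequential version of that responsibility is making sure nothing is public that shouldn’t be. A hedged boto3 sketch (the bucket name is hypothetical):

```python
# Hypothetical sketch: turn on S3 Block Public Access for a bucket so stray ACLs
# and bucket policies can't accidentally expose it. The bucket name is made up.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-customer-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```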
Let’s look at a simple specific example. Amazon categorizes Amazon Elastic Compute Cloud (Amazon EC2) as an Infrastructure as a Service (IaaS) cloud. With it, you’re responsible for managing the guest operating system (including updates and security patches), any application software or utilities you’ve installed on the instances, and the configuration of each instance’s AWS-provided firewall, aka a security group. But, with Amazon S3 “AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.”
Get the point? Both are everyday AWS services, but they come with different responsibility rules.
The moral of the story is you must look carefully at every, and I mean every, cloud SRM service agreement. Still, while you must look at exactly what’s what with every service you use and who’s responsible for each one, the basic concept isn’t too complicated. The cloud provider is responsible for the security of the cloud, and you’re responsible for security in the cloud.
Cloud-native computing has muddied what’s what in SRMs. For instance, AWS now offers AWS Lambda. This is a serverless cloud approach that lets you run code without provisioning or managing servers. So, without a server per se, who takes responsibility for the, well, server?
According to Amazon, with Lambda “AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. You are responsible for the security of your code, the storage and accessibility of sensitive data, and identity and access management (IAM) to the Lambda service and within your function.”
This leaves questions open. For example, since you’re using Lambda to run your code, where does the responsibility for your code end and Lambda’s begin?
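There’s no crisp line, but in practice the customer’s slice of a Lambda deployment is ordinary application hygiene: validate what comes in, don’t leak secrets, and give the function’s execution role only the permissions it needs. A hypothetical sketch of a handler written with that in mind (the bucket and field names are made up):

```python
# Hypothetical Lambda handler: the customer-owned part of serverless security is
# mostly input validation, careful handling of data, and a narrowly scoped
# execution role. The bucket and field names below are made up.
import json
import boto3

s3 = boto3.client("s3")
ALLOWED_REPORTS = {"daily", "weekly", "monthly"}   # whitelist instead of trusting input

def lambda_handler(event, context):
    report = event.get("report_type", "")
    if report not in ALLOWED_REPORTS:
        # Reject anything unexpected rather than interpolating it into a key or query.
        return {"statusCode": 400, "body": json.dumps({"error": "invalid report_type"})}

    obj = s3.get_object(Bucket="example-reports-bucket", Key=f"{report}.json")
    # The execution role should grant s3:GetObject on this one bucket and nothing more.
    return {"statusCode": 200, "body": obj["Body"].read().decode()}
```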
As Gadi Naor, CTO and co-founder of Alcide, a full-stack cloud-native security platform company, observed, “using a serverless architecture means that organizations have new blind spots, simply because they no longer have access to the architecture’s operating system, preventing them from adding firewalls, host-based intrusion prevention or workload protection tools in these workloads.”
This is only the beginning. For example, Kubernetes is enabling hybrid clouds that run across multiple public clouds at once. So, if you’re running a program that spans the new Red Hat-based IBM cloud and AWS, who’s in charge of securing the entire project? Who takes the blame when something goes wrong? And, last but never least, who pays when the end users sue?
Good questions, aren’t they? We still don’t have good answers for the complex new cloud world we’re entering.
So, what can you do? First, make sure you understand your cloud security needs. You can’t choose a cloud service provider and work out a cloud security agreement until you know what works for you. These are not just technology issues. They’re legal issues to be concerned with as well.
Armed with this information, you’re ready to work out your security agreements with your cloud provider. These should be nailed down in your service-level agreement (SLA).
Finally, no matter what’s in the contracts, you and your security staff must make sure your cloud-based data and services are as secure as possible. After all, it’s your data, it’s your job, and it’s you that your customers and stockholders will look at first if something goes wrong.