Another day, another data breach, thanks to misconfigured cloud-based systems. This summer’s infamous Capital One breach is the most prominent recent example. The breach resulted from a misconfigured open-source web application firewall (WAF), which the financial services company used in its operations hosted on Amazon Web Services (AWS).
The misconfigured WAF was apparently permitted to list all the files in any AWS data buckets and read the contents of each file. The misconfiguration allowed the intruder to trick the firewall into relaying requests to a key back-end resource on AWS, according to the Krebs On Security blog. The resource “is responsible for handing out temporary information to a cloud server, including current credentials sent from a security service to access any resource in the cloud to which that server has access,” the blog explained.
The breach impacted about 100 million US citizens, with about 140,000 Social Security numbers and 80,000 bank account numbers compromised, and eventually could cost Capital One up to $150 million.
Here’s a look at why misconfiguration continues to be a common challenge with cloud services, followed by seven cloud security controls you should be using to minimize the risks.
So, how bad is the problem of misconfigured cloud systems? Consider this: Through 2022, at least 95% of cloud security failures will be the customer’s fault, Gartner estimates, citing misconfigurations and mismanagement.
“The challenge exists not in the security of the cloud itself, but in the policies and technologies for security and control of the technology,” according to Gartner. “In nearly all cases, it is the user, not the cloud provider, who fails to manage the controls used to protect an organization’s data,” the firm adds, urging CIOs to change their line of questioning from “Is the cloud secure?” to “Am I using the cloud securely?”
A number of factors are at play in creating and exacerbating the misconfiguration problem. The following seven cloud security controls will help you minimize the risks.
All cloud services aren’t the same, and the level of responsibility varies. Software-as-a-service (SaaS) providers make sure their applications are protected and that data is transmitted and stored securely, but that’s not always the case with infrastructure-as-a-service (IaaS) environments. For example, an enterprise has complete responsibility for its Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), and Amazon Virtual Private Cloud (VPC) instances, including configuring the operating system, managing applications, and protecting data.
In contrast, Amazon maintains the operating system and applications for S3, and the enterprise is responsible for managing the data, access control, and identity policies. Amazon provides the tools for encrypting data stored in S3, but it’s up to the organization to enable that protection as data enters and leaves the service.
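As a minimal sketch of what that customer-side responsibility can look like in practice, the boto3 snippet below turns on default server-side encryption for a bucket. The bucket name is a placeholder, and credentials and region are assumed to come from your standard AWS configuration.

```python
import boto3

s3 = boto3.client("s3")

# Apply AES-256 server-side encryption to every new object by default.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Confirm the setting took effect.
print(s3.get_bucket_encryption(Bucket="example-data-bucket"))
```

The provider supplies the mechanism; actually switching it on, and verifying it stays on, is the customer's job.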
Double-check with your IaaS providers to understand who’s in charge of each cloud security control.
Enterprises are struggling to control who has access to their cloud services. For example, over half (51%) of organizations have accidentally exposed at least one cloud storage service, such as an AWS S3 bucket, to the public, according to May 2018 research from RedLock’s Cloud Security Intelligence (CSI) team. (RedLock is now part of Palo Alto Networks.) This is despite warnings from Amazon and other cloud providers to avoid making storage contents accessible to anyone with an internet connection.
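One low-effort guardrail against accidental exposure is to enforce S3 Block Public Access across the whole account, so a single misconfigured bucket policy or ACL can't open data to the internet. A minimal boto3 sketch, assuming you have permissions to administer the account:

```python
import boto3

# Look up the current account ID from the calling credentials.
account_id = boto3.client("sts").get_caller_identity()["Account"]

# Turn on S3 Block Public Access for the entire account.
boto3.client("s3control").put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```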
Generally speaking, only load balancers and bastion hosts should be exposed to the internet. Many administrators mistakenly enable global permissions on servers by using 0.0.0.0/0 in the public subnets. The connection is left wide open, giving every machine the ability to connect.
Another common mistake is allowing Secure Shell (SSH) connections directly from the internet, which means anyone who can figure out the server location can bypass the firewall and directly access the data. In 2019, Palo Alto Networks’ Unit 42 threat research team searched for exposed services in the public cloud. Of the exposed hosts and services it found, 32% had open SSH services. “Although SSH is one of the most secure protocols, it is still too risky to expose this powerful service to the entire internet,” the report states. “Any misconfiguration or weak/leaked credentials can lead to host compromise.”
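Finding that kind of exposure doesn't require a commercial scanner to get started. The sketch below uses boto3 to flag security groups in the current region that allow SSH (or all traffic) from 0.0.0.0/0; it's an illustration, not a substitute for a full audit across regions and accounts.

```python
import boto3

ec2 = boto3.client("ec2")

def allows_ssh_from_anywhere(permission):
    """True if a rule opens port 22 (or all traffic) to 0.0.0.0/0."""
    open_to_world = any(
        r.get("CidrIp") == "0.0.0.0/0" for r in permission.get("IpRanges", [])
    )
    if not open_to_world:
        return False
    if permission.get("IpProtocol") == "-1":  # "all traffic" rules
        return True
    return permission.get("FromPort", 0) <= 22 <= permission.get("ToPort", 65535)

# Walk every security group in the region and report offenders.
for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        if any(allows_ssh_from_anywhere(p) for p in sg["IpPermissions"]):
            print(f"{sg['GroupId']} ({sg['GroupName']}) exposes SSH to the internet")
```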
Major cloud providers all offer identity and access control tools; use them. Know who has access to what data and when. When creating identity and access control policies, grant the minimum set of privileges needed and temporarily grant additional permissions as needed. Configure security groups to have the narrowest focus possible; use reference security group IDs where possible. Consider tools such as CloudKnox that let you set access controls based on user activity data.
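To make "minimum set of privileges" concrete, here is a hedged boto3 example that creates a narrowly scoped, customer-managed IAM policy: read-only access to one hypothetical bucket rather than blanket s3:* across the account. The policy and bucket names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Grant only the read actions needed, and only on one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-data-bucket",
                "arn:aws:s3:::example-data-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="ReadOnlyExampleDataBucket",
    PolicyDocument=json.dumps(policy_document),
)
```

The policy can then be attached to a group or role, and temporarily supplemented when someone genuinely needs more.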
Another common mistake is to leave data unencrypted in the cloud. Voter information and sensitive Pentagon files have been exposed because the data wasn’t encrypted and the servers were accessible to unauthorized parties. Storing sensitive data in the cloud without appropriate controls to prevent server access and protect the data is irresponsible and dangerous.
Where possible, maintain control of the encryption keys. While it’s possible to give cloud service providers access to the keys, responsibility for the data still lies with the organization.
Even when cloud providers offer encryption tools and management services, too many companies don’t use them. Encryption is a fail-safe: even if a security configuration fails and the data falls into the hands of an unauthorized party, it can’t be read.
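A minimal sketch of keeping key control in-house on AWS, assuming hypothetical key and bucket names: create a customer-managed KMS key and point the bucket's default encryption at it, so the organization rather than the provider's default key governs access and rotation.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer-managed key that the organization controls.
key_arn = kms.create_key(
    Description="Key for encrypting objects in example-data-bucket"
)["KeyMetadata"]["Arn"]

# Use that key for the bucket's default encryption (SSE-KMS).
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                }
            }
        ]
    },
)
```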
As the 2017 OneLogin breach showed, it’s not uncommon for AWS access keys to be exposed. Keys turn up on public websites, in source code repositories, on unprotected Kubernetes dashboards, and in other such places. Treat AWS access keys as your most sensitive crown jewels, and educate developers to avoid leaking them in public forums.
Create unique keys for each external service and restrict access following the principle of least privilege. Make sure the keys don’t have broad permissions. In the wrong hands, they can be used to access sensitive resources and data. Create IAM roles to assign specific privileges, such as making API calls.
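Where a workload runs on AWS itself, roles remove the need for long-lived keys altogether. The sketch below, with hypothetical role, policy, and bucket names, creates a role that only EC2 instances can assume and grants it a single API action.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only EC2 may assume this role, so applications pick up
# short-lived credentials instead of embedding access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="AppServerRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant the role only the specific API call the application needs.
iam.put_role_policy(
    RoleName="AppServerRole",
    PolicyName="PutObjectsOnly",
    PolicyDocument=json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "s3:PutObject",
                    "Resource": "arn:aws:s3:::example-data-bucket/*",
                }
            ],
        }
    ),
)
# The role still needs an instance profile before it can be attached to EC2.
```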
Make sure to regularly rotate the keys, to avoid giving attackers time to intercept compromised keys and infiltrate cloud environments as privileged users.
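Rotation is easier to enforce if you know which keys have aged out. A small boto3 sketch that lists active access keys older than an example 90-day window (the threshold is an assumption, not an AWS default):

```python
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
MAX_AGE_DAYS = 90  # example rotation window

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age} days old")
```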
Don’t use the root user account, not even for administrative tasks. Use the root user to create a new user with assigned privileges. Lock down the root account (perhaps by adding multi-factor authentication [MFA]) and use it only for specific account and service management tasks. For everything else, provision users with the appropriate permissions.
Check user accounts to find those that aren’t being used and then disable them. If no one is using those accounts, there’s no reason to give attackers potential paths to compromise.
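The IAM credential report is one way to spot such accounts. The sketch below, a rough illustration rather than a complete audit, flags users who have console access but have never signed in.

```python
import csv
import io
import time
import boto3

iam = boto3.client("iam")

# The credential report covers every IAM user's password and key usage.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)
report = iam.get_credential_report()["Content"].decode("utf-8")

for row in csv.DictReader(io.StringIO(report)):
    never_used = row["password_last_used"] in ("N/A", "no_information")
    if row["password_enabled"] == "true" and never_used:
        print(f"{row['user']} has console access but has never signed in")
```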
Defense-in-depth is particularly important when securing cloud environments because it ensures that even if one control fails, other security features can keep the application, network, and data safe.
MFA provides an extra layer of protection on top of the username and password, making it harder for attackers to break in. MFA should be enabled to restrict access to the management consoles, dashboards, and privileged accounts.
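Enforcing that is partly a matter of finding the gaps. A minimal boto3 sketch that lists IAM users who can sign in to the console but have no MFA device enrolled:

```python
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)  # raises if no console password
        except iam.exceptions.NoSuchEntityException:
            continue  # no console access, so console MFA doesn't apply
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"{name} can sign in to the console without MFA")
```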
Major cloud providers all offer some level of logging tools, so make sure to turn on security logging and monitoring to see unauthorized access attempts and other issues. For example, Amazon provides CloudTrail for auditing AWS environments, but too many organizations don’t turn on this service. When enabled, CloudTrail maintains a history of all AWS API calls, including the identity of the API caller, the time of the call, the caller’s source IP address, the request parameters, and the response elements returned by the AWS service. It can also be used for change tracking, resource management, security analysis and compliance audits.
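Enabling CloudTrail is a short piece of work. A hedged boto3 sketch, with hypothetical trail and bucket names; the destination bucket must already exist and carry a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that records API activity in every region.
cloudtrail.create_trail(
    Name="org-audit-trail",                  # hypothetical trail name
    S3BucketName="example-cloudtrail-logs",  # hypothetical, pre-existing bucket
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)

# Creating a trail doesn't start it; logging must be switched on explicitly.
cloudtrail.start_logging(Name="org-audit-trail")
```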
The shift-left movement advocates incorporating security considerations early into the development process versus adding security in the final stages of development. “Not only should enterprises monitor what they have in IaaS platforms, they should be checking all their code that’s going into the platform before it goes live,” says McAfee’s Flaherty. “With shift-left, you’re auditing for and catching potential misconfigurations before they become an issue.” Look for security tools that integrate with Jenkins, Kubernetes and others to automate the auditing and correction process.
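As a toy illustration of the idea, the script below scans a JSON-format CloudFormation template for security groups open to 0.0.0.0/0 and fails the build if it finds any. File names and resource assumptions are hypothetical, and a real pipeline would lean on purpose-built infrastructure-as-code scanners rather than a hand-rolled check.

```python
import json
import sys

# Usage: python check_template.py template.json
with open(sys.argv[1]) as f:
    template = json.load(f)

findings = []
for name, resource in template.get("Resources", {}).items():
    if resource.get("Type") != "AWS::EC2::SecurityGroup":
        continue
    for rule in resource.get("Properties", {}).get("SecurityGroupIngress", []):
        if rule.get("CidrIp") == "0.0.0.0/0":
            findings.append(
                f"{name}: ingress open to 0.0.0.0/0 on port {rule.get('FromPort')}"
            )

if findings:
    print("\n".join(findings))
    sys.exit(1)  # non-zero exit fails the pipeline stage before deployment
```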
Shifting left isn’t enough, however, notes Sam Bisbee, CSO for Threat Stack. “Yes, you should scan code and perform configuration checks before going into production, but too often, people forget to check that the workloads are compliant once they’re put into production,” Bisbee says. “If I scan and then deploy my code, it may be OK based on what I knew at the time. But workloads stay in production for months and years, new vulnerabilities are discovered, and over time, the risk in your code increases. If you’re not continuously monitoring, you won’t be protected.”
Rather than always looking for known threats, as many cybersecurity professionals have been trained to do, you should also strive to understand your enterprise’s complete infrastructure and what’s running on it, Bisbee advises.
Admittedly, that can be challenging in today’s increasingly complex multi-cloud environments. “But it’s far easier to understand how something should behave and then see when it changes than it is to constantly play Whack-a-Mole with intruders. If you have a complete picture of your environment and you know what to expect, you can more effectively detect threats such as misconfigurations and proactively remediate the risks. Ultimately, security is about visibility, not control.”