7 cloud security controls you should be using

Human error is one of the top causes of data breaches in the cloud, often because administrators simply forget to turn on basic security controls. Whether you run on Amazon Web Services, Microsoft Azure, or Google Cloud Platform, keep these rules in mind to secure your cloud workloads.

Another day, another data breach, thanks to misconfigured cloud-based systems. The 2019 Capital One breach is the most prominent recent example. It resulted from a misconfigured open-source web application firewall (WAF) that the financial services company used in operations hosted on Amazon Web Services (AWS).

The misconfigured WAF was apparently permitted to list all the files in any AWS data buckets and read the contents of each file. The misconfiguration allowed the intruder to trick the firewall into relaying requests to a key back-end resource on AWS, according to the Krebs On Security blog. The resource “is responsible for handing out temporary information to a cloud server, including current credentials sent from a security service to access any resource in the cloud to which that server has access,” the blog explained.

The breach impacted about 100 million people in the US, compromising about 140,000 Social Security numbers and 80,000 bank account numbers, and could eventually cost Capital One up to $150 million.

Here’s a look at why misconfiguration continues to be a common challenge with cloud services, followed by seven cloud security controls you should be using to minimize the risks.

Misconfiguration is a serious problem likely to get worse

So, how bad is the problem of misconfigured cloud systems? Consider this: By 2022, at least 95% of cloud security failures will be the customer’s fault, Gartner estimates, citing misconfigurations and mismanagement.

“The challenge exists not in the security of the cloud itself, but in the policies and technologies for security and control of the technology,” according to Gartner. “In nearly all cases, it is the user, not the cloud provider, who fails to manage the controls used to protect an organization’s data.” The firm adds that “CIOs must change their line of questioning from ‘Is the cloud secure?’ to ‘Am I using the cloud securely?’”

A number of factors are at play in creating, and exacerbating, the misconfiguration problem.

  • Misconceptions and assumptions. It’s too often assumed that the cloud service provider is in charge of securing the cloud environment. That’s only part of the story. Infrastructure as a service (IaaS) providers such as Amazon, Microsoft and Google take care of security for their physical data centers and the server hardware the virtual machines run on. The customer is in charge of protecting its virtual machines and applications. Cloud providers offer security services and tools to secure customer workloads, but the administrator has to actually implement the necessary defenses. It doesn’t matter what kind of security defenses the cloud provider offers if customers don’t protect their own networks, users and applications.
  • A disconnect between perception and reality. Many breaches have occurred in IaaS environments that don’t fit the familiar “infiltrate with malware” method, a September 2019 McAfee survey of 1,000 enterprises in 11 countries finds. In most cases, the breach “is an opportunistic attack on data left open by errors in how the cloud environment was configured.”
    Along with its survey, McAfee examined its customers’ anonymized, aggregated event data across millions of cloud users and billions of events. The data shows a worrisome disconnect between the misconfigurations that companies using IaaS environments are aware of and those that escape their attention. Survey respondents say they are aware of 37 misconfiguration incidents on average per month, but McAfee’s customer data shows that those enterprises actually experienced about 3,500 misconfiguration incidents per month — a year-over-year increase of 54%. In other words, 99% of misconfigurations in enterprise IaaS environments go unnoticed, according to McAfee.
  • Plenty of tools to identify and exploit misconfigured cloud services. According to Symantec’s 2019 Internet Threat Report, in 2018 “(AWS) S3 buckets emerged as an Achilles heel for organizations, with more than 70 million records stolen or leaked as a result of poor configuration. There are numerous tools widely available which allow potential attackers to identify misconfigured cloud resources on the internet. Unless organizations take action to properly secure their cloud resources, such as following the advice provided by Amazon for securing S3 buckets, they are leaving themselves open to attack.”
  • Increasingly complex enterprise IT environments. The growing adoption of multi-cloud environments among enterprises, coupled with a lack of complete awareness of all the cloud services in use at an enterprise, is exacerbating the misconfiguration problem, according to McAfee. In its recent study, 76% of enterprises reported having a multi-cloud environment, but an examination of customer data found that actually 92% of those environments are multi-cloud, an increase of 18% year over year.
    While multi-cloud environments have advantages, they can also become complicated to administer, manage and control. “Security practitioners responsible for securing data in IaaS platforms are constantly playing catch up, and they don’t have an automated way to monitor and automatically correct misconfigurations across all the cloud services,” says Dan Flaherty, McAfee director of product marketing.
    What’s more, the heated competition in the growing IaaS market means Amazon, Microsoft and Google are furiously adding new features to their respective offerings. “AWS alone has added about 1,800 features this year, compared to about 28 features the first year it launched,” notes John Yeoh, global vice president of research for the Cloud Security Alliance. Thus, it’s challenging for security practitioners to keep up with the rapid pace of new features and functions, which in turn can lead to misconfigurations. “In a complex multi-cloud environment, you need an expert for every single platform or service you’re using to ensure that the appropriate security measures are in place,” Yeoh says.
    In addition, recent cloud advances such as serverless applications and architectures, Kubernetes containerized workloads and services, and the increased use of application programming interfaces (APIs) linking various cloud services can increase the potential for misconfigurations if precautions aren’t taken and access privileges aren’t constantly monitored and adjusted, notes Balaji Parimi, CEO of CloudKnox Security. “People are just beginning to understand the dangers of these newer cloud technologies and trends,” he adds. “Too often, they’re applying to these new technologies decades-old security methodologies based on static roles and assumptions about access privileges.”
    The bottom line: Increasingly complex IT environments are making it more difficult to implement simple security controls across the environment that could help identify and prevent misconfigurations, says Yeoh.

The following are seven cloud security controls you should be using.

1. Know what you’re responsible for

Not all cloud services are the same, and the level of responsibility varies. Software-as-a-service (SaaS) providers make sure their applications are protected and that data is transmitted and stored securely, but that’s not always the case with IaaS environments. For example, an enterprise has complete responsibility over its Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS) and Amazon Virtual Private Cloud (VPC) instances, including configuring the operating system, managing applications and protecting data.

In contrast, Amazon maintains the operating system and applications for S3, and the enterprise is responsible for managing its data, access control and identity policies. Amazon provides tools for encrypting data in S3, but it’s up to the organization to enable that protection as data enters and leaves the server.
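
For instance, turning on default server-side encryption for a bucket is a one-call operation with boto3, AWS’s Python SDK. Here’s a minimal sketch, where the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Require server-side encryption (SSE-S3, AES-256) by default for
# every new object written to the bucket.
s3.put_bucket_encryption(
    Bucket="example-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```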

Double-check with your IaaS providers to understand who’s in charge of each cloud security control.

2. Control who has access 

Enterprises are struggling to control who has access to their cloud services. For example, over half (51%) of organizations have accidentally exposed at least one cloud storage service, such as an AWS S3 bucket, to the public, according to May 2018 research from RedLock’s Cloud Security Intelligence (CSI) team. (RedLock is now part of Palo Alto Networks.) This is despite warnings from Amazon and other cloud providers to avoid making storage contents accessible to anyone with an internet connection.

Generally speaking, only load balancers and bastion hosts should be exposed to the internet. Yet many administrators mistakenly open servers to the world by allowing inbound traffic from 0.0.0.0/0 in public subnets. The connection is left wide open, giving every machine on the internet the ability to connect.

Another common mistake is allowing Secure Shell (SSH) connections directly from the internet, which means anyone who can figure out the server location can bypass the firewall and directly access the data. In 2019, Palo Alto Networks’ Unit 42 threat research team searched for exposed services in the public cloud. Of the exposed hosts and services it found, 32% had open SSH services. “Although SSH is one of the most secure protocols, it is still too risky to expose this powerful service to the entire internet,” the report states. “Any misconfiguration or weak/leaked credentials can lead to host compromise.”
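
A quick audit can catch this class of mistake. Below is a minimal sketch using boto3 that flags security groups allowing SSH from anywhere; it checks only the simplest case (IPv4 rules covering port 22) and is not a full audit:

```python
import boto3

ec2 = boto3.client("ec2")

# Flag security groups with SSH (port 22) reachable from 0.0.0.0/0.
for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
            # Rules with no port range (protocol -1) cover all ports.
            covers_ssh = from_port is None or from_port <= 22 <= to_port
            wide_open = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in rule.get("IpRanges", []))
            if covers_ssh and wide_open:
                print(f"{sg['GroupId']} ({sg['GroupName']}): SSH open to the internet")
```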

Major cloud providers all offer identity and access control tools; use them. Know who has access to what data and when. When creating identity and access control policies, grant the minimum set of privileges needed and temporarily grant additional permissions as needed. Configure security groups to have the narrowest focus possible; use reference security group IDs where possible. Consider tools such as CloudKnox that let you set access controls based on user activity data.

3. Protect the data 

Another common mistake is leaving data unencrypted in the cloud. Voter information and sensitive Pentagon files have been exposed because the data wasn’t encrypted and the servers were accessible to unauthorized parties. Storing sensitive data in the cloud without appropriate controls to prevent server access and protect the data is irresponsible and dangerous.

Where possible, maintain control of the encryption keys. While it’s possible to give cloud service providers access to the keys, responsibility for the data ultimately lies with the organization.
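
On AWS, for example, a customer-managed key in KMS means the organization, not the provider’s default key, governs who can decrypt. A minimal boto3 sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Encrypt the object under a customer-managed KMS key, so access to
# the data also requires access to a key the organization controls.
with open("quarterly.csv", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",               # placeholder bucket
        Key="reports/quarterly.csv",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/example-data-key",  # placeholder key alias
    )
```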

Even when cloud providers offer encryption tools and management services, too many companies fail to use them. Encryption is a fail-safe: even if a security configuration fails and the data falls into the hands of an unauthorized party, the data can’t be used.

4. Secure the credentials 

As the 2017 OneLogin breach showed, it’s not uncommon for AWS access keys to be exposed. Keys leak through public websites, source code repositories, unprotected Kubernetes dashboards and other such channels. Treat AWS access keys as crown jewels, and educate developers to avoid leaking them in public forums.

Create unique keys for each external service and restrict access following the principle of least privilege. Make sure the keys don’t have broad permissions. In the wrong hands, they can be used to access sensitive resources and data. Create IAM roles to assign specific privileges, such as making API calls.
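
As a sketch of what that looks like in practice, the snippet below (the account ID, role name and bucket name are all hypothetical) creates an IAM role that a single service account can assume, with an inline policy that permits nothing beyond reading one bucket:

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only the named service account may assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/reporting-svc"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="reporting-read-only",
                AssumeRolePolicyDocument=json.dumps(trust))

# Inline policy: read a single bucket, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-reports",
                     "arn:aws:s3:::example-reports/*"],
    }],
}
iam.put_role_policy(RoleName="reporting-read-only",
                    PolicyName="read-reports-bucket",
                    PolicyDocument=json.dumps(policy))
```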

Make sure to regularly rotate the keys, to avoid giving attackers time to intercept compromised keys and infiltrate cloud environments as privileged users.
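
A scheduled check makes rotation enforceable. This boto3 sketch (the 90-day threshold is just an example policy, not an AWS default) reports active access keys older than the rotation window:

```python
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
MAX_KEY_AGE_DAYS = 90  # example rotation policy

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_KEY_AGE_DAYS:
                print(f"{user['UserName']}: rotate {key['AccessKeyId']} ({age} days old)")
```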

Don’t use the root user account, not even for administrative tasks. Use the root user to create a new user with assigned privileges. Lock down the root account (perhaps by adding multi-factor authentication [MFA]) and use it only for specific account and service management tasks. For everything else, provision users with the appropriate permissions.

Check user accounts to find those that aren’t being used and then disable them. If no one is using those accounts, there’s no reason to give attackers potential paths to compromise.
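
The same inventory loop can surface dormant accounts. This sketch flags users whose console password has never been used or hasn’t been used in 90 days (a hypothetical threshold); it doesn’t cover access-key activity, which needs a separate check:

```python
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
STALE_DAYS = 90  # hypothetical threshold

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        # PasswordLastUsed is absent if the console password was never used.
        last_used = user.get("PasswordLastUsed")
        if last_used is None or (datetime.now(timezone.utc) - last_used).days > STALE_DAYS:
            print(f"Candidate for disabling: {user['UserName']}")
```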

5. Security hygiene still matters 

Defense-in-depth is particularly important when securing cloud environments because it ensures that even if one control fails, other security features can keep the application, network, and data safe.

MFA provides an extra layer of protection on top of the username and password, making it harder for attackers to break in. MFA should be enabled to restrict access to the management consoles, dashboards, and privileged accounts.
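
Enforcement starts with knowing who hasn’t enrolled. A minimal boto3 sketch that lists IAM users with no MFA device attached:

```python
import boto3

iam = boto3.client("iam")

# Report IAM users that have no MFA device enrolled.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"No MFA enrolled: {user['UserName']}")
```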

6. Improve visibility 

Major cloud providers all offer some level of logging tools, so make sure to turn on security logging and monitoring to see unauthorized access attempts and other issues. For example, Amazon provides CloudTrail for auditing AWS environments, but too many organizations don’t turn on this service. When enabled, CloudTrail maintains a history of all AWS API calls, including the identity of the API caller, the time of the call, the caller’s source IP address, the request parameters, and the response elements returned by the AWS service. It can also be used for change tracking, resource management, security analysis and compliance audits.
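
Turning it on is a small amount of work. Here’s a minimal boto3 sketch that creates a multi-region trail and starts logging; the trail and bucket names are placeholders, and the bucket needs a policy that lets CloudTrail write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record API activity across all regions into one S3 bucket.
cloudtrail.create_trail(
    Name="org-audit-trail",                  # placeholder trail name
    S3BucketName="example-cloudtrail-logs",  # placeholder bucket
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```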

7. Adopt a shift-left approach to security 

The shift-left movement advocates incorporating security considerations early into the development process versus adding security in the final stages of development. “Not only should enterprises monitor what they have in IaaS platforms, they should be checking all their code that’s going into the platform before it goes live,” says McAfee’s Flaherty. “With shift-left, you’re auditing for and catching potential misconfigurations before they become an issue.” Look for security tools that integrate with Jenkins, Kubernetes and others to automate the auditing and correction process.
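
Purpose-built policy-as-code tools do this at scale, but the core idea fits in a few lines. The sketch below (assuming a CloudFormation template written in plain YAML, without intrinsic tags like !Ref, and using the PyYAML library) fails a CI job if any security group rule is open to the internet:

```python
import sys

import yaml  # PyYAML


def open_ingress(template_path: str) -> list:
    """Return names of security groups with ingress from 0.0.0.0/0."""
    with open(template_path) as f:
        template = yaml.safe_load(f)
    findings = []
    for name, resource in (template.get("Resources") or {}).items():
        if resource.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in resource.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append(name)
    return findings


if __name__ == "__main__":
    bad = open_ingress(sys.argv[1])
    if bad:
        print("Internet-open ingress in:", ", ".join(bad))
        sys.exit(1)  # non-zero exit fails the CI job
```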

Shifting left isn’t enough, however, notes Sam Bisbee, CSO for Threat Stack. “Yes, you should scan code and perform configuration checks before going into production, but too often, people forget to check that the workloads are compliant once they’re put into production,” Bisbee says. “If I scan and then deploy my code, it may be OK based on what I knew at the time. But workloads stay in production for months and years, new vulnerabilities are discovered, and over time, the risk in your code increases. If you’re not continuously monitoring, you won’t be protected.”

Understand your infrastructure 

Rather than always looking for known threats, as many cybersecurity professionals have been trained to do, you should also strive to understand your enterprise’s complete infrastructure and what’s running on it, Bisbee advises.

Admittedly, that can be challenging in today’s increasingly complex multi-cloud environments. “But it’s far easier to understand how something should behave and then see when it changes than it is to constantly play Whack-a-Mole with intruders. If you have a complete picture of your environment and you know what to expect, you can more effectively detect threats such as misconfigurations and proactively remediate the risks. Ultimately, security is about visibility, not control.”