As more organizations shift data and workloads to the cloud, many are relying on containers—units of software that package code and its dependencies so that applications run reliably when moving from one computing environment to another. Containerization is heralded as a robust technology for deploying applications and services in a secure manner, says Cole McKnight, cloud architect in the Genetics and Biochemistry Department at Clemson University.
Container engines such as Docker and Singularity provide a way to implement and distribute the best-practice security policies for a given application, in lieu of relying on individual users to configure a secure installation, McKnight says. “Container orchestration platforms such as Kubernetes, Mesos or Docker Swarm have integrated security mechanisms that are specific to deploying and executing containers,” McKnight says. “The result is an easily configurable ecosystem for developing and deploying containers.”
While these technologies abstract away much of the complexity traditionally involved in delivering secure applications and services, some development teams mistake that potential for security for a guarantee, McKnight says. The problem is that container implementation is not foolproof, and the mistakes teams make when using containers can create security issues rather than address them.
“The most common mistake when implementing secure containers is to focus solely on the container itself,” McKnight says. Maintaining best practices for the security of an image is important, he says, but developers commonly focus heavily on the security of an image without considering the execution environment.
“No amount of security inside a container can protect it from the exploitation of its host,” McKnight says. “Each machine that is hosting a container engine must be secured at each layer from any traditionally exploitable vulnerabilities.”
The container engine and container orchestration platform, if applicable, must be configured to correctly use the integrated container security mechanisms, McKnight says. “So, container security starts with the operating system and network of the host,” he says.
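For Docker specifically, some of that host-level engine hardening can be expressed in the daemon's own configuration file. The sketch below is illustrative, not a complete policy; which settings are appropriate depends on the host and workload, and `userns-remap` in particular requires subordinate UID/GID ranges to be configured on the host.

```json
{
  "userns-remap": "default",
  "no-new-privileges": true,
  "icc": false,
  "live-restore": true
}
```

Placed in `/etc/docker/daemon.json`, this remaps container root to an unprivileged host user, prevents processes from gaining new privileges, disables unrestricted inter-container communication on the default bridge, and keeps containers running across daemon restarts.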
When deploying containers, some organizations make the mistake of including code libraries and assuming they are safe, says Tony Asher, an independent cybersecurity consultant. “This includes libraries [in] the development suite,” Asher says. “And even more critical are third-party libraries [that are] often imported to accelerate development.”
The security issue is that vulnerabilities are potentially within these application code libraries, Asher says. “Compiling applications and launching them into production containers can introduce serious risks through vulnerability exploits.”
To address this, Asher advises companies to limit libraries to what the application container requires to meet its criteria for success, scan code for vulnerabilities, and apply a security review process before importing third-party libraries.
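One common way to keep a production image down to only the required libraries is a multi-stage build, where the build toolchain never reaches the final image. The Dockerfile below is a minimal sketch; the file names (`requirements.txt`, `app.py`) and base image are hypothetical stand-ins.

```dockerfile
# Build stage: has the full toolchain needed to install dependencies
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
# Install only the pinned libraries the application actually needs
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: carries only what is required to run the app
FROM python:3.12-slim
WORKDIR /app
COPY --from=build /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```

Because the runtime stage starts from a fresh base and copies in only the installed dependencies and application code, anything pulled in solely for the build is left behind, shrinking both the image and its attack surface.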
Organizations also need to develop a formal secure-architecture review process. “This process should include having containers that meet risk criteria reviewed by a group of people,” Asher says. This provides accountability and helps ensure risks have been considered.
It’s common to give containers too much privilege, which attackers can abuse to leverage resources that a container shouldn’t have access to but does, says Jay Leek, managing partner at venture capital firm ClearSky. “Apply the principle of least privilege here, but do runtime behavioral monitoring to help ensure that abuses of any necessary application privileges are detected,” Leek says.
A common practice is to run containers as privileged within the execution environment, McKnight says. “Depending on the software stack of the host, this can mean different things,” he says. “But giving containers unnecessary privileges within the host environment can lead to escalations that not only result in the container being compromised, but also the host machine.”
Just as no amount of security inside a container can protect it from the exploitation of its host, no amount of security inside a host can protect it from the exploitation of a privileged container. “A container should be designed to run in a way that does not provide it with unnecessary privileges in the host environment,” McKnight says.
When privileges are needed, they should be given out sparingly with a fine granularity, McKnight says. “The best practice is to avoid provisioning containers with sweeping permissions within the host environment.”
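In Kubernetes, that fine-grained approach maps directly onto a pod's `securityContext`: drop every Linux capability by default and add back only the ones the workload demonstrably needs. The pod name, image, and added capability below are hypothetical examples.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                           # hypothetical name
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.4.2   # hypothetical image
      securityContext:
        runAsNonRoot: true
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                 # start from nothing...
          add: ["NET_BIND_SERVICE"]     # ...and grant back only what is required
```

Starting from `drop: ["ALL"]` forces each privilege to be justified individually, rather than inheriting the broad default capability set.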
Similarly, containers that need to be exposed to public networks when they are executed need to be designed with the same mindset. “Instead of sweeping policies that expose the container to potential attacks, only absolutely necessary channels should be opened,” McKnight says.
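In a Kubernetes cluster, that "only absolutely necessary channels" principle can be enforced with a NetworkPolicy that whitelists a single port and source rather than exposing the pod broadly. The names and labels below are hypothetical, and a network plugin that enforces NetworkPolicy must be installed for this to take effect.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-ingress-443     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web                    # applies only to the web pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: edge          # traffic allowed only from the edge tier
      ports:
        - protocol: TCP
          port: 443               # and only on this port
```

Because a selected pod denies any ingress traffic not explicitly matched, every additional channel has to be opened deliberately.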
Numerous factors need to be considered when implementing the container itself. “Containers are built through a series of commands that are defined in the image specification and run with root permissions when the image is built,” McKnight says. “Developers commonly make the mistake of leaving these permissions intact when the container is deployed and executed.”
If a process running with root permissions inside a container is exploited at runtime, the data and software inside that container can be compromised. To address this, the commands run inside a container should, where possible, be executed by a non-root user without elevated permissions to avoid privilege escalations within the container.
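In a Dockerfile, this typically means creating an unprivileged account during the build and switching to it before the runtime command. The base image, user name, and application path below are illustrative assumptions.

```dockerfile
FROM debian:bookworm-slim
# Build-time commands above the USER directive still run as root,
# which is expected; the goal is to not run as root at runtime.
RUN useradd --create-home --shell /usr/sbin/nologin appuser
COPY --chown=appuser:appuser ./app /home/appuser/app
# Everything from here on, including CMD, runs as the unprivileged user
USER appuser
CMD ["/home/appuser/app/run.sh"]
```

With the `USER` directive in place, an attacker who compromises the running process lands in an unprivileged account instead of root, closing off the easiest escalation paths inside the container.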
On the networking side, the ways the data and processes of a container are exposed to other entities need to be carefully considered. “Once again, container security begins with traditional operating system and network security,” McKnight says. “Any interaction between the container and exterior volumes, networks and processes must be reviewed.”
Yet another factor that organizations commonly overlook when deploying containers is the image they’re based on. “Teams routinely make the mistake of not properly vetting an image developed by another party before integrating it into their solution,” McKnight says.
Before deploying a container from a public registry or using it as a base image, scan it for malware and vulnerabilities. In addition, organizations should have an experienced developer thoroughly review the image for unnecessary vulnerabilities, McKnight says.
“Assuming that images pushed to a public registry are secure can be very dangerous, especially when building additional images off of them,” McKnight says.
An immutable image is one that doesn’t change, Asher notes. “This is a principle of Docker, Kubernetes and other container solutions,” he says. “When deploying systems and data over the internet, which is an untrusted medium, you need to create a process that ensures integrity.”
Immutable images offer several benefits: they are predictable, scalable, and enable automatic recovery. They also provide integrity, Asher says, which is one of the core purposes of security.
“When production containers do not follow the immutable principle, application support can connect to them and make changes,” Asher says. “This behavior raises multiple security red flags. Specifically, it removes the integrity of the container.”
One of the most concerning risks is a malicious actor modifying the container to include malicious code. This can cause a material impact on a company, Asher says. Monitoring the integrity of containers can greatly reduce this risk.
“Improve and correct the deployment pipeline to prevent changes to production containers,” Asher says. “Ensure changes are being made in [quality assurance] and test environments, [that] they are being approved, and then new immutable images are deployed that replace the old ones.”
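Kubernetes can help enforce the immutable principle at runtime by mounting the container's root filesystem read-only, so even someone who connects to a production container cannot modify it in place. The pod name and image below are hypothetical; the writable `emptyDir` mount is an assumption about where the application needs scratch space.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-immutable                         # hypothetical name
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.4.2   # pin an exact tag or digest, never :latest
      securityContext:
        readOnlyRootFilesystem: true          # the running container cannot be modified
      volumeMounts:
        - name: tmp
          mountPath: /tmp                     # scratch space the app legitimately needs
  volumes:
    - name: tmp
      emptyDir: {}
```

Any change then has to flow through the pipeline as a new image that replaces the old one, which is exactly the process Asher describes.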