Containers are powerful, but also a challenge to secure. Here's how to protect your containers and their underlying infrastructure throughout the development pipeline.
Like many new technologies, containers weren't built with security risks in mind. Since the inception of containers, security has been in catch-up mode, which hasn't been easy. As Red Hat notes, "Container security involves defining and adhering to build, deployment, and runtime practices that protect a Linux container—from the applications they support to the infrastructure they rely on."
Protecting containers can be challenging. Not only do containers present attackers with an enormous attack surface; the stakes are also higher than in many other attacks, because breaching a container is the equivalent of an operating-system breach on a virtual machine. With attacks today going beyond vulnerabilities alone to include malware payloads, compromised software signing, and exposed secrets, it's important to think holistically about your container security.
Here are seven best practices to help you keep your containers secure across the entire software development lifecycle.
[ Get key takeaways from a survey of 300+ security professionals on software security. Plus: Download the report: Flying Blind: Firms Struggle to Detect Software Supply Chain Attacks ]
1. Scan your container images
The layers of files that make up a container image can include software that contains vulnerabilities, and scanning the image can reveal them. If you're using Docker, you can scan an image with the docker scan command. You'll also want to identify vulnerabilities as early as possible in the development lifecycle, so it's wise to embed scanning into the CI/CD pipeline. Scanning for operating system vulnerabilities is also important in securing a container.
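As a minimal sketch of embedding a scan in the pipeline (the image tag, CI variable, and choice of scanner are illustrative, not prescribed here):

```bash
#!/bin/sh
# Illustrative CI step: build the image, then scan it before it can be pushed.
docker build -t "myapp:${CI_COMMIT_SHA}" .

# Docker's built-in scanner, where available:
docker scan "myapp:${CI_COMMIT_SHA}"

# Or an open-source scanner such as Trivy, failing the job on serious findings:
trivy image --exit-code 1 --severity HIGH,CRITICAL "myapp:${CI_COMMIT_SHA}"
```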
Lisa Azevedo, CEO of the container security company Containn, explained in an interview that scanning images is a reactive approach: it identifies vulnerabilities but doesn't fix the underlying problem. If you use intelligence to build secure images from the start, you don't have to scan manually built, outdated images for vulnerabilities, malware, compromised signing, and exposed secrets, she said.
"The challenge with manual images is the complexity, the ever-changing compliance, and security requirements. Using intelligence always allows you to have the most up-to-date images that are addressing critical updates, security vulnerabilities, best practices and compliance requirements."
—Lisa Azevedo
2. Secure your registries
A container registry is a place where developers can save, access, and share their container images. Registries can also store API paths and access-control parameters for container-to-container communication. Secure your registries with access controls that limit who can access and publish images and that keep unauthorized parties out of the registry altogether. Requiring images to be signed is the first step toward guaranteeing integrity, because signatures make it difficult to substitute a compromised image for an authentic one.
But if a container image is compromised before it is signed, or the signing process itself is compromised, you could unknowingly be distributing malware. That's why you need to be able to verify that your container images and your signing process are behaving as they should and have not been compromised.
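What signed pushes and verification can look like, as a hedged sketch assuming Docker Content Trust or a Sigstore-style tool such as cosign is in use (registry, image, and key names are placeholders):

```bash
# Enable Docker Content Trust so pushes are signed and pulls verify signatures.
export DOCKER_CONTENT_TRUST=1
docker push registry.example.com/team/myapp:1.4.2

# Alternatively, sign and verify the image with cosign.
cosign sign --key cosign.key registry.example.com/team/myapp:1.4.2
cosign verify --key cosign.pub registry.example.com/team/myapp:1.4.2
```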
Shaun O'Meara, a field CTO for Mirantis, an open cloud infrastructure company, added that the scalability of containers can also be used to counter DDoS attacks on containerized applications.
"Containers support massive scalability. As such, it is theoretically possible to scale a containerized application to the point that it is able to absorb the flood of requests generated by the attack, while also continuing to service legitimate requests."
—Shaun O'Meara
While Brien Posey, writing for Sweetcode, acknowledged that containers could be used to fend off a DDoS attack, he warned that the approach could be risky. "[R]elying solely on container scalability is more likely to touch off something of an arms race," he wrote.
"Attackers know that containers are ridiculously scalable, and therefore build bigger botnets in an effort to overcome the limits of scalability."
—Brien Posey
3. Secure your container deployment
When securing your container deployment, make sure to secure your target environment. That can be done in a number of ways, such as hardening the operating system of your underlying host, setting up firewall and VPC rules, and setting up special limited-access accounts. Using an orchestration program can also be a wise move. Those programs often provide secure API endpoints and role-based access control, which can prevent unauthorized access to your deployment.
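Role-based access control is one of those guardrails. A minimal sketch, assuming Kubernetes RBAC (the namespace, service account, and role names are illustrative):

```bash
# Create a read-only role for a CI service account, scoped to one namespace.
kubectl create namespace payments
kubectl create serviceaccount deploy-bot -n payments

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-viewer
  namespace: payments
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-bot-viewer
  namespace: payments
subjects:
- kind: ServiceAccount
  name: deploy-bot
  namespace: payments
roleRef:
  kind: Role
  name: deployment-viewer
  apiGroup: rbac.authorization.k8s.io
EOF
```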
"It's important to continuously scan for vulnerabilities and misconfigurations in software before deployment, and block deployments that fail to meet security requirements," said Ratan Tipirneni, president and CEO of Tigera, a provider of security and observability for containers, Kubernetes and cloud.
Tipirneni recommended scanning first- and third-party container and registry images for vulnerabilities and misconfigurations, and using a tool that scans multiple registries to identify vulnerabilities from databases such as the National Vulnerability Database (NVD).
"You also need to continuously monitor images, workloads, and infrastructure against common configuration security standards, such CIS Benchmarks. This enables you to meet internal and external compliance standards, and also quickly detect and remediate misconfigurations in your environment, thereby eliminating potential attack vectors."
—Ratan Tipirneni
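As a hedged illustration (neither tool is named above; kube-bench and Trivy are common open-source options), checks like these can be wired into a pipeline or a scheduled job:

```bash
# Check cluster nodes against the CIS Kubernetes Benchmark.
kube-bench run

# Re-scan an image already in the registry against NVD-backed vulnerability data.
trivy image registry.example.com/team/myapp:1.4.2
```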
4. Secure your containers during runtime
During runtime, as at other times, it's wise to apply the principle of least privilege to your container scheme. Communication should be conducted only between containers that need it. Expose only the ports that serve the application, and avoid exposing SSH. That applies to containers as well as to any underlying virtual machines. It's also wise to use TLS to secure communication between services and the Docker image policy plugin to prevent unauthorized access to images, Tipirneni recommended.
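A minimal sketch of least privilege at the container level, using Docker run flags (the image name and port are placeholders):

```bash
# Run with a read-only filesystem, no added Linux capabilities, no privilege
# escalation, and only the single port the application actually serves.
docker run -d \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -p 8443:8443 \
  myapp:1.4.2
```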
"To secure containers at runtime, you should monitor and detect anomalies in network traffic, file activity, process behavior, and system calls across your workloads for broad visibility into runtime threats."
—Tipirneni
He also advised organizations to assess their containerized workloads against indicators of compromise (IOCs) and indicators of attack (IOAs) for known malicious activity, and to use machine learning to implement a behavioral-based approach to protect against zero-day threats.
Use a tool that allows you to create security policies to block or quarantine compromised workloads in addition to sending security alerts to your security operations center (SOC) for further analysis, he said.
"You also need visibility across the stack from the network layer to the application layer, so use a tool that gives a runtime view of the workloads in your environment with context on how they are operating and communicating. This will also allow for faster troubleshooting of performance hotspots and connectivity issues."
—Tipirneni
5. Reduce your attack surface
Remember that containers are designed to be ephemeral and lightweight. They're not servers. You shouldn't continually add files to a container, nor should you keep one running and patch it intermittently over a long period of time; if you do, you'll be creating a large attack surface for threat actors. Strive to minimize the number of components in each container and to keep each one as thin as possible. In addition, vulnerabilities identified in images should be corrected quickly, and new images should be deployed in a new, clean container. Finally, a software bill of materials (SBOM) can ensure that all components are spelled out and therefore observable.
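Generating an SBOM is one way to make an image's components observable. A sketch, assuming an open-source SBOM generator such as Syft (one illustrative option, not named above; the image name is a placeholder):

```bash
# Produce a CycloneDX SBOM so every component in the image is enumerated.
syft myapp:1.4.2 -o cyclonedx-json > myapp-1.4.2.sbom.json
```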
Another move that can shrink the attack surface of your container setup is to create separate virtual networks for your containers, which provides a valuable level of isolation.
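A minimal sketch of that isolation with Docker user-defined networks (network, container, and image names are illustrative):

```bash
# Create an internal network with no route to the outside world, plus a
# separate front-end network that is exposed.
docker network create --internal backend-net
docker network create frontend-net

# The database is reachable only from containers on backend-net.
docker run -d --name db --network backend-net postgres:16

# The web tier sits on both networks and is the only thing exposed publicly.
docker run -d --name web --network frontend-net -p 443:8443 myapp:1.4.2
docker network connect backend-net web
```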
"Microsegmentation can be used to isolate workloads based on environment, application tier, compliance needs, user access, and individual workload requirements."
—Tipirneni
He added that organizations should also focus on threat prevention using zero-trust controls. For example, implement granular, zero-trust workload access controls, such as DNS policies or NetworkSets, to control the flow of data between workloads and external resources.
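A hedged sketch of that kind of control, using a plain Kubernetes NetworkPolicy (Calico's DNS policies and NetworkSets offer richer, FQDN- and IP-set-based versions of the same idea; the namespace, labels, and CIDR here are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  # Allow DNS lookups inside the cluster.
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
  # Allow traffic only to an approved external service range.
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
EOF
```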
6. Use modern container security tools
There are native security management tools in container orchestration programs, like Kubernetes, that can be helpful in keeping containers secure, but they alone aren't sufficient to ensure the health of containerized applications and third-party software components.
For deployments in Kubernetes environments, O'Meara noted that useful scanners include Checkov and Kubesec. Checkov prevents cloud misconfigurations at build time for Kubernetes, Terraform, and other infrastructure-as-code languages. Kubesec validates the configuration and manifest files used for Kubernetes cluster deployment and operations. Other tools include Anchore Engine, for scanning container images, and Dockle, for making sure a Dockerfile has been written according to security best practices.
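As a rough sketch of how these tools are typically invoked (paths and image names are placeholders; check each project's documentation for current flags):

```bash
# Checkov: scan infrastructure-as-code (Kubernetes manifests, Terraform, etc.).
checkov -d ./deploy

# Kubesec: score a Kubernetes manifest against security best practices.
kubesec scan deploy/deployment.yaml

# Dockle: lint an image against Dockerfile and image best practices.
dockle myapp:1.4.2
```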
This is another key area where software security teams need to think beyond vulnerabilities, monitoring for malware injection and ensuring the security of software signing and secrets.
7. Monitor your container activity
Containerized workloads require a granular level of monitoring to provide IT and security teams visibility into elements running inside their environments. "Because containerized workloads are highly dynamic, issues can quickly propagate across multiple containers and applications—so it is critical to swiftly identify and mitigate each issue at the source through the creation of security policies to isolate compromised containers," Tipirneni explained.
"You need to monitor container activity. Monitoring tools can help you identify anomalous behavior and respond to events in a timely fashion."
—Tipirneni
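Dedicated monitoring tools go much further, but even the built-in telemetry from Docker and Kubernetes is a starting point; a sketch with illustrative namespace and workload names:

```bash
# Stream container lifecycle events from the Docker daemon.
docker events --filter 'type=container'

# Watch Kubernetes events and follow logs for a suspect workload.
kubectl get events --watch -n payments
kubectl logs -f deploy/myapp -n payments --all-containers
```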
Best practices and modern tools are key
Containers create a challenging environment for security experts and administrators. Containerized workloads are more complex than those found in traditional environments, and production environments can include massive numbers of containers.
By adhering to a set of best practices, using modern software supply chain security tools, and taking your security regimen beyond vulnerabilities to the other ways your software can be compromised, you can protect your containers and their underlying infrastructure throughout the development pipeline.
Keep learning
- Download the free report: Flying Blind: Firms Struggle to Detect Software Supply Chain Attacks
- See interactive sample reports to help your team stop software supply chain attacks
- Get up to speed on securing AI/ML systems and software with our Special Report. Plus: See the Webinar: The MLephant in the Room.
- Find the best building blocks for your next app with RL's Spectra Assure Community, where you can quickly search the latest safe packages on npm, PyPI and RubyGems.
- Learn how you can go beyond the SBOM with deep visibility and new controls for the software you build or buy. Learn more in our Special Report — and take a deep dive with our white paper.
- Commercial software risk is under-addressed. Get key insights with our Special Report, download the related white paper — and see our related Webinar for more insights.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.