The use of container technologies for software development and release has proliferated over the past year, heightening the need for organizations to implement updated security controls and processes to mitigate risk.
Researchers at NetRise recently found that the average container contains a startling 604 known vulnerabilities in its underlying software components. Nearly half (45%) of the vulnerabilities that NetRise uncovered were more than 10 years old, and attackers had developed working exploits for at least 4% of the most critical vulnerabilities found.
Misconfigurations and easily avoidable security mistakes were compounding issues. NetRise found an average of at least 4.8 misconfigurations per container, such as overly permissive access controls and globally readable and writable directories outside the /tmp folder. "While containers have changed how many modern applications are designed, deployed, and managed, they appear to be among the weakest cybersecurity links in the software supply chain," NetRise CEO Thomas Pace cautioned in a statement on the report.
Of particular concern is the opaque nature of the software supply chains for organizations globally, NetRise said. "As a starting point, organizations need comprehensive visibility into their software to understand the scope, scale, and related risks."
Many organizations have incorporated a variety of mechanisms to manage container-related cybersecurity risk. Common practices include employing role-based access controls, using zero-trust models, isolating critical workloads, and putting controls in place to protect against lateral movement and privilege escalation.
In addition to traditional controls, experts stress four essential practices that organizations must implement to protect their container workloads — and the need for new controls to provide a final test for all software running in an organization.
[ See Special Report: Why Complex Binary Analysis Is Critical to SSCS ]
1. Implement new controls to address open-source AI risks
Organizations looking to integrate artificial intelligence capabilities into their software have increasingly begun incorporating open-source AI components into their codebases, mimicking what they have done for years with other open-source software. In addition to popular open-source AI frameworks such as TensorFlow, PyTorch, OpenCV, and Keras, a rapidly growing number of unvetted open-source AI libraries and projects are now available on Hugging Face, GitHub, and other platforms, and developers are using them in their projects. This rising use of open-source AI components has introduced significant new security and compliance risks for many organizations.
Researchers from Wiz, for instance, found multiple issues with free AI components on Hugging Face earlier this year, including one that gave attackers the ability to write to internal container registries. In February, JFrog reported finding as many as 100 AI models on Hugging Face with backdoors that could have enabled attackers to access the development environments of organizations using those models.
"Security risks associated with open-source AI components remain a critical concern for organizations, presenting challenges of varying severity," security vendors Anaconda and ETR found in a recent survey of 100 IT decision makers. Almost one-third (32%) of respondents in the survey experienced data leaks — many of them significant in nature — tied to their use of open-source AI components. Nearly as many (30%) reported situations where their AI software generated incorrect and flawed information.
"Open-source AI has risks just as any open-source software has risks, by being community-driven and -developed," said Anthony Tam, manager of security engineering at Tigera. With the new risks that generative AI applications bring to the security community, new frameworks such as SAIF (Secure AI Framework) by Google are a great starting point for providing developers and users of AI models an understanding of how they can be best secured, he said.
A growing array of tools and approaches is also becoming available to help organizations mitigate open-source AI risks. More than six in 10 respondents in the Anaconda and ETR survey, for instance, reported using third-party scanning tools to look for risk in open-source AI components; 57% used open-source models and tools only from reputable, trusted communities; and 53% did manual code reviews before integrating open-source AI components into their codebases.
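To make the scanning step concrete, here is a minimal sketch of one check that such tools commonly perform: flagging opcodes in a pickle-serialized model file that can execute arbitrary code when the pickle is loaded. It uses only Python's standard library; the model.pkl path is a hypothetical placeholder, and a production scanner would cover many more file formats and cases.

```python
import pickletools

# Pickle opcodes that can trigger arbitrary code execution on load.
DANGEROUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return any dangerous opcodes found in a pickle-serialized model file."""
    findings = []
    with open(path, "rb") as f:
        # genops walks the pickle's opcode stream without executing it.
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in DANGEROUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos} (arg: {arg!r})")
    return findings

if __name__ == "__main__":
    # "model.pkl" is a placeholder for a model artifact downloaded from a hub.
    for finding in scan_pickle("model.pkl"):
        print("suspicious:", finding)
```

The key point is that the file is inspected statically rather than loaded, since simply deserializing an untrusted model is itself the attack vector.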
2. Use just-in-time controls to manage container access
Many organizations have implemented role-based access control mechanisms to gate access to their container and development environments. But it's also important to deploy just-in-time mechanisms to ensure that all permissions to access container environments — role-based or otherwise — are temporary and exist only for the duration and scope required for a specific task.
This approach enables quick access to cloud and container environments for employees, contractors, and other third parties while also ensuring that the access expires in a timely manner and is no longer available once a specific task has been completed, said Rom Carmel, co-founder and CEO of Apono. The goal is to eliminate the risk of standing access to sensitive resources and to minimize the window of time for potential abuse, Carmel said.
It's best to automate the process using predefined policies to determine who can access which resources, what they can do with that access, and how long they have that access, Carmel said.
"DevOps and security teams can empower developers with the necessary access at the speed and scale they need to be effective, all the while minimizing their active workload."
—Rom Carmel
Analyst firms such as Gartner have long advocated that organizations implement such just-in-time access controls, especially for privileged access management (PAM) purposes. "Privileged access carries significant risk," Gartner noted in one report. "Even with PAM tools in place, the residual risk of users with standing privileges remains high," the analyst firm said in urging identity and access management leaders to implement just-in-time strategies.
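The following is a minimal sketch of the just-in-time pattern in a Kubernetes context, assuming the official kubernetes Python client. The annotation key, the sweeper expected to honor it, and the function names are illustrative, not any vendor's actual product.

```python
from datetime import datetime, timedelta, timezone
from kubernetes import client, config  # pip install kubernetes

def grant_temporary_access(user: str, namespace: str, role: str,
                           minutes: int = 30) -> str:
    """Create a RoleBinding annotated with an expiry for later revocation.

    Assumes `user` is already a valid, DNS-safe Kubernetes subject name.
    """
    config.load_kube_config()  # use load_incluster_config() when run in-cluster
    expiry = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    name = f"jit-{user}-{int(expiry.timestamp())}"
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {
            "name": name,
            "namespace": namespace,
            # Record the expiry so a sweeper job can find and revoke stale grants.
            "annotations": {"example.com/expires-at": expiry.isoformat()},
        },
        "subjects": [{"kind": "User", "name": user,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "Role", "name": role},
    }
    client.RbacAuthorizationV1Api().create_namespaced_role_binding(namespace, binding)
    return name

def revoke_access(name: str, namespace: str) -> None:
    """Delete the RoleBinding once the task window has closed."""
    client.RbacAuthorizationV1Api().delete_namespaced_role_binding(name, namespace)
```

The design choice that matters here is that no grant is created without a recorded expiry, so revocation is a scheduled certainty rather than a manual afterthought.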
3. Use SBOMs to track and manage risky OSS components
Many of the issues that NetRise uncovered in its report stemmed from container complexity. NetRise randomly chose 70 of the most commonly downloaded container images from Docker Hub and found that they contained 389 software components on average. Over 12% of these components lacked metadata related to version numbers, dependencies, and package source. "These 'manifestless' components hinder traditional scanning tools, leaving organizations with visibility gaps that could be exploited by threat actors," NetRise said.
The report's findings validate what analysts have long said about organizations using software bills of materials (SBOMs) to gain better visibility into all the open-source software and third-party components in their code. Many analysts say that up-to-date SBOMs can help enterprise security and development teams track dependencies, assess risk, and ensure compliance with security and regulatory standards. Growing awareness of SBOMs' usefulness, along with government-mandated SBOM requirements, has driven adoption. Gartner has predicted that 60% of organizations will require SBOMs from their vendors when procuring mission-critical software. The other 40% should do so as well.
SBOMs are increasingly effective for open-source software thanks to native support for dependency graphing and SBOM export within platforms such as GitHub, said Michael Skelton, vice president of operations and hacker success at Bugcrowd. "These tools streamline SBOM management, making it easier for teams to track dependencies and vulnerabilities," Skelton said. As more organizations integrate SBOMs into their workflows, we can expect them to become even more effective, especially if efforts around real-time updating and standardization of SBOMs continue to improve, Skelton said.
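As a simple illustration of how SBOM data can be put to work against exactly the "manifestless" gap NetRise describes, the sketch below parses a CycloneDX-format SBOM and flags components that lack version metadata. The filename is a placeholder; in practice the file might come from GitHub's SBOM export or a generator such as Syft.

```python
import json

def find_unversioned_components(sbom_path: str) -> list[str]:
    """Flag components in a CycloneDX JSON SBOM that lack version metadata."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    flagged = []
    # CycloneDX JSON stores the inventory in a top-level "components" array.
    for component in sbom.get("components", []):
        if not component.get("version"):
            flagged.append(component.get("name", "<unnamed>"))
    return flagged

if __name__ == "__main__":
    # "container-image.cdx.json" is a placeholder for an exported SBOM.
    for name in find_unversioned_components("container-image.cdx.json"):
        print("missing version metadata:", name)
```

A component with no version can't be matched against vulnerability databases, which is why these entries deserve investigation rather than silent omission from scan results.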
4. Strengthen your runtime security capabilities
The dynamic and ephemeral nature of container workloads has made it significantly harder for many organizations to track, monitor, and secure them in real time. As a result, runtime security controls have become essential to detect and mitigate threats that manifest during container execution, such as abnormal or malicious activity, privilege escalations, and unauthorized processes.
Organizations can implement several measures to protect container workloads at runtime, said KC Berg, chief architect at StackHawk. To ensure isolation, prevent lateral movement, and detect anomalous behavior, administrators should run containers with the fewest privileges, the least file access, and the least network exposure possible. Containers should also run only one primary process where possible, and access should be limited to the network resources a particular service actually requires. Organizations should also consider monitoring their container environments using host-level services that collect and expose performance and security data, so that tools such as Prometheus and Grafana can provide real-time visibility and insights, Berg said.
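For illustration, here is a minimal sketch of what that least-privilege posture can look like in code, using the Docker SDK for Python. The specific user ID, resource limits, and network name are assumptions; real values would depend on the workload.

```python
import docker  # pip install docker

def run_least_privilege(image: str, internal_network: str):
    """Start a container with a minimal privilege and exposure profile."""
    client = docker.from_env()
    return client.containers.run(
        image,
        detach=True,
        user="1000:1000",                    # run as a non-root user
        cap_drop=["ALL"],                    # drop all Linux capabilities
        security_opt=["no-new-privileges"],  # block privilege escalation
        read_only=True,                      # immutable root filesystem
        tmpfs={"/tmp": "rw,noexec,nosuid,size=64m"},  # scratch space only
        network=internal_network,            # attach only to the one network it needs
        mem_limit="256m",                    # cap memory consumption
        pids_limit=100,                      # cap process count
    )
```

Equivalent settings exist in Kubernetes pod security contexts; the point is that every privilege, mount, and network path is granted explicitly rather than inherited by default.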
"Due to factors such as limited resources, scarcity of specialized skills, rapid development pace, or the sheer complexity of the environment, runtime security may not receive the attention it deserves," Aqua Security said in a recent report. But the reality is that "the interconnectedness and complexity of containers are unique, making the attack surface more dynamic than that of traditional, monolithic applications," Aqua said. "To protect running workloads properly, a solution built for cloud native is required."
Securing your entire SDLC is key
A secure software development process is key to preventing — or at least minimizing — the sorts of issues that could trigger legal exposure under new rules such as the EU's Product Liability Directive, said ReversingLabs chief trust officer Saša Zdjelar. He said organizations should make sure the process includes threat modeling for identifying threats during the design phase, industry-standard coding practices, code reviews, and testing using static and dynamic scans.
With the rapid adoption of containers and the rise of AI-generated code, organizations need to go beyond traditional application security best practices — such as static application security testing (SAST), dynamic application security testing (DAST), and software composition analysis (SCA) — and bolster their capabilities with advanced technologies such as complex binary analysis, which can spot and mitigate software defects, tampering, and more before and after a software product's release, Zdjelar said.
Josh Knox, a senior technical product marketing manager for ReversingLabs, said that no matter how the software is developed or released, complex binary analysis gives organizations a final backstop before pushing code live.
"Organizations need visibility into what a piece of software looks like at the end of the road."
—Josh Knox
Keep learning
- Get up to speed on securing AI/ML systems and software with our Special Report. Plus: See the Webinar: The MLephant in the Room.
- Learn how you can go beyond the SBOM with deep visibility and new controls for the software you build or buy. Learn more in our Special Report — and take a deep dive with our white paper.
- Upgrade your software security posture with RL's new guide, Software Supply Chain Security for Dummies.
- Commercial software risk is under-addressed. Get key insights with our Special Report, download the related white paper — and see our related Webinar for more insights.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.