In many ways, age brings refinement. Wine, cheese, and, in some cases, people all improve as they grow older. But in the world of enterprise IT, age has a different connotation. Aged systems and software can bring irrelevance and technical debt and, at worst, increased security risks. With the rise of Linux containers as a functional underpinning of the digitally transforming enterprise, the ill effects of technological age are front and center.

To put it more simply: Containers age like milk, not like wine. Milk is a key ingredient in cooking, from baking to sauces, and if the milk sours or goes bad, so too does the recipe. The same thing happens with containers, especially as they are looked to as key components of production systems. A stale or “soured” container could ruin an otherwise promising deployment.

“Old age” in container terms can mean a few weeks, certainly months - enough time for security vulnerabilities, software patches, and other critical updates to pile up, making an older container-based application potentially unstable and unfit for production. In complex enterprise IT environments, a system is only as secure as its weakest link, meaning that an outdated container can become a springboard for malicious actors to take down critical workloads, steal data, or worse.

For example, in March 2018, cybersecurity company Tenable analyzed thousands of images in a popular community container image repository. On average, community-curated images came back with nearly 40 vulnerabilities each. Even worse, 34 percent of those vulnerabilities were critical, a category that includes the likes of Heartbleed and Shellshock. These images act, in a sense, as vulnerability time capsules, pushing issues long thought fixed back into systems that may well be running in production.

The question then becomes: How do I rejuvenate my old containers? The answer is distressingly simple. You don’t.

Container obsolescence is okay

Operating containerized applications means accepting that container images aren’t meant to be nurtured and cared for long-term; they serve a singular purpose and, when that purpose is achieved or the container no longer meets it, they are retired. The very nature of containers means that patching and updating as we traditionally think of them is not possible. Instead, new containers are rolled out to replace the old ones, built with updated components to better address the changing software ecosystem.
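As a minimal sketch of what that replacement cycle can look like in practice - assuming a Kubernetes deployment managed with the official Python client, with the namespace, deployment, and image names all hypothetical - rolling out a rebuilt image is a matter of updating the pod template and letting the orchestrator retire the old containers:

```python
# Minimal sketch: replace aging containers by rolling out a rebuilt image
# rather than patching in place. Assumes the official `kubernetes` Python
# client; the "production" namespace and "web" deployment are hypothetical.
from kubernetes import client, config

def roll_out_rebuilt_image(namespace: str, deployment: str, new_image: str) -> None:
    config.load_kube_config()  # or load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    # Patching the pod template's image triggers a rolling update: Kubernetes
    # starts pods from the rebuilt image and retires the old ones.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": "web", "image": new_image}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(deployment, namespace, patch)

# After rebuilding and pushing an image with updated packages:
roll_out_rebuilt_image("production", "web", "registry.example.com/web:2.4.1")
```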

It’s important to look at containerized workloads as an assembly line - you toss out the ones that don’t look good or are “off” in some way, because you can easily make new ones to roll out. But you can’t replace containers that have outdated components or otherwise don’t meet your needs if you don’t know that they have these characteristics. While Linux containers are built largely from open source components, they aren’t readily transparent; the amount of metadata and supporting technology surrounding containerized applications can easily obscure their inner workings and, most importantly, the age of all the components inside.

Container security: It’s an ecosystem, not a job

Security, especially container security, takes a village - or at least an ecosystem - to properly address. First, the container must be built with the proper security precautions in mind, and that’s only one step: the developer or ISV creating the containerized application needs to deliver an application free of known vulnerabilities at creation, just as they would with a traditional software package.

When the application is passed to the operations team for deployment, they need to enable the proper security controls around not only the container itself but also the container host. As we’ve said previously, a container stack is only as secure as its weakest element. Because containers share the host’s kernel, flaws in the host operating system can cascade into security issues across deployed workloads if not properly closed.

But deployment does not end a container’s security needs. Operations and security teams must then watch for new errata and vulnerability announcements and assess which, if any, of their containerized applications are at risk. If they find vulnerable containers, the cycle repeats, with rebuilt - not patched - containerized applications mitigating the issue. It’s a cycle, and one that must be maintained for containers to remain free of known vulnerabilities.
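As a toy sketch of that assessment step - the errata feed and image inventory below are hypothetical stand-ins for a real vendor security feed and a registry or cluster inventory - the decision reduces to comparing what an image was built with against what the errata say is fixed:

```python
# Toy sketch of the assess-and-rebuild cycle. The errata and inventory data
# are hypothetical placeholders; a real pipeline would pull them from a
# vendor security feed and a container registry or cluster inventory.

# Package name -> first fixed version, as announced in a (hypothetical) erratum.
errata = {"openssl": "1.1.1k", "bash": "5.1.8"}

# Image -> packages it was built with (hypothetical inventory).
images = {
    "registry.example.com/web:2024-01-15": {"openssl": "1.1.1g", "bash": "5.1.8"},
    "registry.example.com/api:2024-05-30": {"openssl": "1.1.1k", "bash": "5.1.8"},
}

def stale_packages(packages: dict[str, str]) -> list[str]:
    """Return packages still older than the errata's fixed versions."""
    # Naive string comparison stands in for real RPM/dpkg version ordering.
    return [name for name, fixed in errata.items()
            if name in packages and packages[name] < fixed]

for image, packages in images.items():
    stale = stale_packages(packages)
    if stale:
        # The remedy is a rebuild from updated sources, not an in-place patch.
        print(f"rebuild {image}: outdated {', '.join(stale)}")
```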

Simply put, in the container world it’s the job of the container vendor (whether an external ISV or an internal development shop), not the end user, to incorporate security updates - continuously and, typically, fairly frequently.

The foundation matters

Along with age, end users tend to take the underpinnings of containers - the operating system - for granted. While containers “contain” only the operating system components an application needs to work, they all share the host’s kernel. Every Linux container starts with a Linux base layer, which means that every ISV building container images is distributing Linux content. For these containers to be used in production environments, this content needs to be free of known vulnerabilities. A base layer with known vulnerabilities could compromise an entire container production line, regardless of the health of the containers themselves.

A container platform, built from tried-and-true enterprise technologies, can address these concerns through a single technology layer. Beyond providing a more secure and stable foundation, this platform should also provide Kubernetes for container orchestration, automated deployment and rollback of images, and a method of managing multi-tenancy. Additional capabilities should encompass container lifecycle management, storage, networking, registry, logging, and more; a container platform should not be a one-trick pony. Altogether, this provides a single trusted platform for container efforts and a mechanism for helping to keep production deployments running as they should.

Chaos into certainty

At face value, containers are simple - an application or process packaged with all of its necessary dependencies. But it’s never that simple with enterprise IT. As noted earlier, surrounding each container is a host of metadata, from the time of creation and who created it to what registry it came from and when it was deployed. It’s not unlike the manifest for a physical shipping container, but far more complex.
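To give a sense of how that manifest can be surfaced rather than left buried, here is a minimal sketch that reads an image’s creation time and labels through a Docker-compatible CLI (the image name is hypothetical):

```python
# Minimal sketch: surface an image's "manifest" metadata - creation time and
# labels - so its age is visible. Assumes a local Docker-compatible CLI and
# a hypothetical image name.
import json
import subprocess
from datetime import datetime, timezone

def inspect_image(image: str) -> dict:
    """Return the image's metadata as reported by the container engine."""
    raw = subprocess.run(
        ["docker", "image", "inspect", image],
        capture_output=True, check=True, text=True,
    ).stdout
    return json.loads(raw)[0]

meta = inspect_image("registry.example.com/web:latest")
# "Created" looks like "2024-01-15T10:23:45.123456789Z"; keep whole seconds.
created = datetime.strptime(meta["Created"][:19], "%Y-%m-%dT%H:%M:%S")
age = datetime.now(timezone.utc) - created.replace(tzinfo=timezone.utc)
print(f"built {age.days} days ago; labels: {meta['Config']['Labels']}")
```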

This metadata is so complex and so hard to collate, especially at scale, that some organizations simply ignore it. Its importance cannot be overstated: without it, IT teams can miss critical signs that a containerized application is missing a vital update or, worse, holds outdated and potentially vulnerable components. But in organizations that develop, deploy, and redeploy thousands of containers on a regular cadence, collating this metadata too often becomes an ad hoc risk assessment rather than standard practice.

But it doesn’t have to be.

Red Hat’s answer: Clearer data for fewer “black boxes”

At Red Hat Summit 2017, we announced the industry’s first Container Health Index, which consolidates the disparate metadata around container images and provides a single, easy-to-understand “freshness grade” for each image. It takes into account the container’s age, unapplied security errata, and more, all driven by Red Hat’s extensive expertise in delivering more secure and stable open source technologies to the enterprise. Since its introduction, the Container Health Index has not been static; we’ve been evolving the program to reflect a world where containers are not “one-hit wonders” and where Kubernetes-scale deployments oversee tens, if not hundreds, of thousands of production images.
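To make the idea concrete - and to be clear, what follows is an illustrative toy, not Red Hat’s actual grading algorithm - a “freshness grade” might weigh an image’s age against its unapplied errata along these lines:

```python
# Illustrative toy only: NOT Red Hat's Container Health Index algorithm,
# just a sketch of how a "freshness grade" might combine an image's age
# with the severity of its unapplied security errata.
def freshness_grade(age_days: int, critical_errata: int, important_errata: int) -> str:
    penalty = age_days / 30 + 10 * critical_errata + 3 * important_errata
    for grade, threshold in (("A", 1), ("B", 5), ("C", 10), ("D", 20)):
        if penalty < threshold:
            return grade
    return "F"

print(freshness_grade(age_days=14, critical_errata=0, important_errata=0))   # A
print(freshness_grade(age_days=180, critical_errata=2, important_errata=4))  # F
```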

When paired with Red Hat Enterprise Linux, the world’s leading enterprise Linux platform, and Red Hat OpenShift, Red Hat’s enterprise-grade Kubernetes platform, Red Hat provides not only a cleaner view into the health of containerized applications but also the technologies to keep your cloud-native stack more secure. Linux containers represent a powerful path toward digital transformation, but no matter how evolved enterprise IT becomes, security remains front and center. Through Linux container innovations and a commitment to providing more secure technologies at every layer of the enterprise technology footprint, Red Hat stands ready to be your trusted software partner, whether you’re dipping your toes in the water or deploying Linux containers into production.

Lars Herrmann is general manager, Workload Strategy for Cloud Platforms at Red Hat.


About the author

Lars Herrmann is always found at the forefront of technology. From the early days of Linux to today’s digital transformation built on hybrid cloud, containers and microservices, Lars has consistently helped enterprises leverage open source technologies to drive business results. At Red Hat, Lars leads Red Hat Partner Connect, Red Hat's technology partner program. His team is responsible for Red Hat technology certification offerings, technical partner engagement and early adopter programs to drive technology and business initiatives spanning Red Hat's product portfolio. In prior roles, Lars led Red Hat’s business strategy, technology roadmap and go-to-market for Linux, virtualization, containers, and portfolio integration initiatives. A German native, Lars now lives in Boston with his wife and children.
