The Evolution Of Kubernetes Workload Patterns
Kubernetes workloads differ, and that basic truth should come as no surprise to anybody. K8s supports a variety of workload patterns, each with its own characteristics and driven by different use cases, spanning everything from comparatively simple stateless web applications to complex, stateful distributed systems.
We need to remember that Kubernetes workloads differ because they represent the applications and services deployed within a cluster, each tailored to specific needs and encompassing one or more components. Workloads themselves are driven through Kubernetes objects: declarative records of intent that define the desired state of the application and govern how it should be deployed and scaled.
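As a quick illustration of that declarative model, here is a minimal sketch of a Deployment object, with hypothetical names and values: the object records the desired state, in this case three replicas of a stateless web tier, and the Kubernetes control plane continuously reconciles reality toward it.

```yaml
# A minimal, hypothetical Deployment: the object declares the desired
# state (three replicas) and Kubernetes reconciles toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend        # hypothetical name
spec:
  replicas: 3               # desired state: three identical pods
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.27  # stateless web tier
          ports:
            - containerPort: 80
```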
Gabriele Bartolini, VP of cloud native at EDB, says that as Kubernetes takes on increasingly complex data workloads, the focus is shifting from just proving viability to optimizing performance and efficiency at scale.
This means that enterprises are moving beyond basic stateful services to AI and machine learning operations, challenging us to rethink orchestration, resource management and data durability. He asserts that it’s no longer just about running databases; it’s about pushing the boundaries of what Kubernetes and Postgres can do together with data.
EDB, a Postgres data and AI company, recently announced that its open source Kubernetes operator for PostgreSQL, CloudNativePG (CNPG), has been accepted into the CNCF Sandbox.
“Databases are the number one workload on Kubernetes for a reason: The platform has evolved to make running mission-critical, data-intensive applications not only possible but practical, thanks to self-healing and high availability,” explained Bartolini. “The key is leveraging Kubernetes-native features (like VolumeSnapshots, for instance) to maintain consistency while minimizing disruption. Today’s best practices are built around balancing automation with human insight, ensuring resilience while keeping operational complexity in check. Other critical aspects for databases are storage, with local storage being a more and more common solution in on-premise deployments, especially on bare metal Kubernetes.”
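To make the database case concrete, here is a minimal sketch of what a CloudNativePG cluster declaration can look like; the resource kind comes from the CNPG project, while the name and sizes are hypothetical. A single declarative object describes a highly available Postgres cluster, and the operator does the self-healing work of keeping it that way.

```yaml
# A minimal, hypothetical CloudNativePG cluster: one primary and two
# standbys declared as desired state; the operator handles replication,
# self-healing and failover.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db              # hypothetical name
spec:
  instances: 3              # 1 primary + 2 standby replicas
  storage:
    size: 10Gi              # choose a storage class that supports
                            # VolumeSnapshots for snapshot-based backups
```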
Emerging Patterns in AI/ML Ops
This discussion leads to a wider point: AI and machine learning workloads on Kubernetes are pushing the limits of resource management and orchestration. New patterns are emerging, such as gang scheduling and batch processing, that make it possible to optimize complex workflows without bottlenecking core operations.
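As a sketch of the batch-processing half of this picture, the snippet below uses a core Kubernetes Indexed Job, with hypothetical names and image, to fan a workload out across parallel pods. True gang scheduling, where a group of pods is placed all-or-nothing, typically layers an add-on scheduler such as Volcano or Kueue on top of this primitive.

```yaml
# A hypothetical batch workload as a Kubernetes Indexed Job: each pod
# receives a completion index, so a dataset can be partitioned across pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: training-shards          # hypothetical name
spec:
  completionMode: Indexed        # pods get JOB_COMPLETION_INDEX 0..3
  completions: 4
  parallelism: 4                 # run all four shards at once
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: example.com/trainer:latest   # hypothetical image
          # JOB_COMPLETION_INDEX is injected by the Job controller for
          # Indexed jobs; expand it in a shell to pick this pod's shard.
          command: ["sh", "-c", "python train.py --shard=$JOB_COMPLETION_INDEX"]
```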
Bartolini again highlights the critical factors here, saying the challenge is not just technical but strategic: how to design an architecture that balances performance with reliability while minimizing operational overhead. This is where native integrations and automated workflows become essential, allowing teams to focus on innovation rather than firefighting.
“Running databases on Kubernetes has always been a challenge, not because it’s impossible, but because of the complexity of operators for stateful workloads (like a Postgres database with a primary/standby architecture),” clarified Bartolini. “We’re tackling this with declarative operations and day 2 operations such as automated failover, minimizing manual intervention by extending the Kubernetes controller to understand how a PostgreSQL in high availability works.”
For the EDB team, the power comes from using Kubernetes-native features to enhance security, provide seamless observability integration and enable capabilities such as consistent, automated backups without compromising performance and reliability. It seems clear then that this is about more than just running Postgres in Kubernetes – it’s about doing it in a way that meets enterprise standards for uptime and resilience.
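A sketch of what "consistent, automated backups" can look like in CNPG terms, assuming the hypothetical cluster above has been configured with a snapshot-capable storage class; recent CloudNativePG releases support VolumeSnapshot-based backup methods of this general shape.

```yaml
# Hypothetical scheduled backup for the cluster above, using Kubernetes
# VolumeSnapshots for storage-level consistency. Assumes the Cluster's
# backup section declares a VolumeSnapshot class and the CSI driver
# supports snapshots.
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: app-db-nightly      # hypothetical name
spec:
  schedule: "0 0 2 * * *"   # CNPG cron format includes seconds: 02:00 daily
  cluster:
    name: app-db            # the hypothetical cluster defined earlier
  method: volumeSnapshot    # snapshot-based rather than object-store backup
```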
Oh No, Not Monoliths Again
Echoing and building on Bartolini's comments, software engineering advocates and evangelists have plenty to share on this subject. Bright among the luminaries in this space is Dr Holly Cummins, senior principal software engineer at Red Hat. Cummins says that, increasingly, Kubernetes workloads need to accommodate AI, but what are the responsibilities, considerations and ramifications for developers working in this arena?
“State has weight,” said Cummins, putting it in concrete terms. “Our workloads are becoming even more data-centric, so we need to manage them differently. Just like it wasn’t a great idea to have huge monolithic applications, it’s not scalable and it’s not sustainable to have one huge central monolith of data. In other words, we’re breaking up the monolith, again. AI models cannot scale indefinitely. For economic, sustainability and quality reasons, we want smaller models. But maybe not just a single smaller model; instead, we want orchestrated networks of smaller models. With agents interacting with various parts of the system, this is classic distributed computing. Well, sort-of-classic… and sort of completely new.”
She says these realities mean IT organizations have a key responsibility to provide developers with a platform, and to treat that platform as a product in its own right: it needs to be genuinely good and well-suited to the needs of the organization's developers. A good developer experience on the platform translates directly into happier employees and higher productivity; a bad platform can even cause retention problems for an organization.
Backstage Portal Pass
“Backstage [an open source framework used for building developer portals] is a nice way of enhancing developer experience and removing friction here,” suggested Cummins. “It’s a pluggable portal that allows the platform team to provide developers with a curated set of templates, views and tools. Backstage is a CNCF project, and a supported version, Red Hat Developer Hub, is available from Red Hat. Red Hat Developer Hub extends Backstage with Role-Based Access Control capabilities and various other extra plugins. Backstage plugins can have all sorts of benefits. For example, the Cost Insights plugin allows developers to monitor cloud costs, which lets them see the cost implications of their architectural decisions.”
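One small, concrete piece of that curation: services typically announce themselves to Backstage's software catalog via a catalog-info.yaml descriptor, which the portal and its plugins then build their views on. The entity names below are hypothetical.

```yaml
# A hypothetical catalog-info.yaml: the descriptor Backstage reads to add
# a service to its software catalog, powering the portal's views and plugins.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service           # hypothetical service
  description: Handles payment processing
  annotations:
    backstage.io/techdocs-ref: dir:.   # serve this repo's docs in the portal
spec:
  type: service
  lifecycle: production
  owner: team-payments             # hypothetical owning team
```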
Dr Cummins rounds out the discussion by reminding us that developers naturally optimize things, so simply putting a FinOps analysis in front of them for a given project is a great way of bringing cloud bills down. There's also a Cloud Carbon Footprint plugin, which has a sustainability focus. Fundamentally, these plugins make it easy for developers to get the information they need to optimize away waste.
While we may still be a considerable way from standardized K8s workload management techniques and the widespread productization of tools at this level, we are at least fully cognizant of the complexity and diversity of the technology streams now coalescing in this arena. So we're talking about it, which is half the battle, right?