Enterprises using container-based applications require a scalable, battle-tested, and robust services fabric to deploy business-critical workloads in production environments. Services such as traffic management (load balancing within a cluster and across clusters/regions), service discovery, monitoring/analytics, and security are critical components of an application deployment framework. This blog post provides an overview of the challenges and requirements for such application services.

Challenges

Common application services such as load balancing, network performance monitoring, and security that are readily available to conventional applications often need to be implemented or approached differently for container-based microservices applications. Here are some of the challenges in providing these services:

Granularity - In container-based microservices applications, a single service is often represented at the infrastructure level by multiple containers residing on multiple hosts (servers). From a networking standpoint, each of these containers is a unique application endpoint that requires the same load balancing and traffic management services as conventional applications.

Automation - Application-centric enterprises are looking to automate repetitive networking operations and remove error-prone manual steps. They want to achieve agility through continuous integration and continuous delivery (CI/CD) practices for rapid application deployments. Application networking services need to be API-driven and programmable.
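
For example, a CI/CD pipeline stage might configure a load-balancing virtual service through a controller's REST API instead of a manual change ticket. Here is a minimal sketch in Python; the controller URL, endpoint path, payload fields, and token are illustrative placeholders, not any specific product's API:

```python
import requests

CONTROLLER = "https://controller.example.com"  # hypothetical controller address
TOKEN = "REDACTED"                             # credential injected by the pipeline

# Declare the desired virtual service as data; the pipeline applies it on every run.
virtual_service = {
    "name": "orders-vs",
    "port": 443,
    "pool": ["10.0.1.11:8080", "10.0.1.12:8080"],
}

resp = requests.post(
    f"{CONTROLLER}/api/virtual-services",      # illustrative endpoint
    json=virtual_service,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("virtual service configured:", resp.json().get("name"))
```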

Visibility and Security - Application visibility is especially important in the context of container-based applications. Application developers and operations teams alike need to be able to view the interactions between services to identify erroneous interactions, security violations, and potential latencies.

Elasticity - Application service elasticity is even more important in the context of container-based applications, given that containers can be spun up or taken down at a much faster pace than traditional infrastructure. Applications running on containers in data centers or clouds can take advantage of this agility in the compute layer, but network services often remain a bottleneck.

Requirements

Here are the key requirements for application services in container-based environments:

Traffic Management and Local Load Balancing - Local load balancers or ADCs (Application Delivery Controllers) need to provide application networking services such as load balancing, health monitoring, TLS/SSL offload, session persistence, content/URL switching, and content modification.
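
To make two of these features concrete, the sketch below shows in plain Python the per-request decision logic an ADC applies for content/URL switching and cookie-based session persistence. The pool names, paths, and hashing scheme are illustrative, not a particular product's implementation:

```python
import hashlib

# Pools keyed by URL path prefix: the basis for content/URL switching.
POOLS = {
    "/api":    ["10.0.1.11:8080", "10.0.1.12:8080"],
    "/static": ["10.0.2.21:8080"],
}

def pick_backend(path: str, session_cookie: str | None) -> str:
    # Content switching: choose a pool by the longest matching path prefix.
    matches = [p for p in POOLS if path.startswith(p)]
    prefix = max(matches, key=len) if matches else "/api"  # fall back to a default pool
    pool = POOLS[prefix]
    # Session persistence: hash the cookie so a returning client lands on the same member.
    if session_cookie:
        idx = int(hashlib.sha256(session_cookie.encode()).hexdigest(), 16) % len(pool)
        return pool[idx]
    return pool[0]  # no cookie yet; a real ADC would apply round robin, least connections, etc.

print(pick_backend("/api/orders", "sess-42"))  # always maps sess-42 to the same /api member
```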

Traffic Management and Global Load Balancing - Global load balancing directs clients to the appropriate site/region based on several criteria including availability, locality of the user to the site, site persistence, site load, etc.
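
A toy illustration of that site-selection decision, assuming hypothetical site records with health, load, and region fields (real GSLB implementations weigh more signals than this):

```python
# Candidate sites with availability/load/locality data (values are illustrative).
SITES = [
    {"name": "us-east", "healthy": True,  "load": 0.72, "region": "us"},
    {"name": "us-west", "healthy": True,  "load": 0.35, "region": "us"},
    {"name": "eu-west", "healthy": False, "load": 0.10, "region": "eu"},
]

def choose_site(client_region: str) -> str:
    candidates = [s for s in SITES if s["healthy"]]                  # availability first
    local = [s for s in candidates if s["region"] == client_region]  # then locality
    pool = local or candidates
    return min(pool, key=lambda s: s["load"])["name"]                # then least loaded

print(choose_site("us"))  # -> "us-west"
```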

Service Discovery - Maps service host/domain names to the virtual IP addresses (VIPs) at which they can be accessed.
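
In practice this means resolving a service's DNS name returns the VIP fronting its containers, never an individual container address. A tiny check in Python (the hostname is hypothetical):

```python
import socket

# Resolve a service's published name; the answer is the VIP, not a pod IP.
addrs = {info[4][0] for info in socket.getaddrinfo("orders.apps.example.com", 443)}
print(addrs)  # e.g. {"203.0.113.10"} - the virtual IP fronting the service
```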

Monitoring/Analytics - Enterprise applications deployed in production require constant monitoring and alerting based on application performance, health, brownouts for a small fraction of users, etc.

Security - Enterprise-class secure applications require TLS/SSL cert management, microservice-based network security policies that control application access, DDoS protection/mitigation, and Web Application Firewalling (WAF).
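
As one concrete piece of that, microservice-level access control can be expressed as a Kubernetes NetworkPolicy. A minimal sketch using the official kubernetes Python client (namespace, names, labels, and port are illustrative) that only lets frontend pods reach the orders service:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

# Allow only pods labeled app=frontend to reach pods labeled app=orders on port 8080.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="orders-allow-frontend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(port=8080)],
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("shop", policy)
```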

Distributed Services Fabric for OpenShift

OpenShift offers an excellent automated application deployment framework for container-based workloads, and Avi Networks provides a proven services fabric for deploying those workloads in production environments on OpenShift clusters.

Avi Networks has created a distributed architecture based on software-defined principles to address the traffic management, service discovery, security, and analytics needs for container-based applications running in OpenShift environments. The Avi Vantage Platform provides a container services fabric with two major components:

Avi Controller - A cluster of up to three nodes that provides the control, management, and analytics plane for the services fabric. The Avi Controller communicates with the OpenShift master, deploys and manages Avi Service Engines, configures services on all of them, and aggregates the telemetry data they report.

Avi Service Engines - Service proxies deployed on every OpenShift node that provide the application services in the data plane and report real-time telemetry data to the Avi Controller.

Application Deployment Workflow

Avi Vantage provides a drop-in local traffic management solution with a rich ADC feature set for the OpenShift cluster:

Deploy Application/Service - The application owner creates a deployment, a service, and, if necessary, a route/ingress object in OpenShift, specifying any extra policies and service configuration as annotations on the service and route/ingress objects.
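
A minimal sketch of that step using the official kubernetes Python client; the annotation key and value are hypothetical placeholders for whatever policy annotations the platform consumes:

```python
from kubernetes import client, config

config.load_kube_config()

# Service object carrying extra load-balancing policy as annotations.
service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="orders",
        annotations={"example.com/lb-policy": '{"persistence": "cookie"}'},  # hypothetical key
    ),
    spec=client.V1ServiceSpec(
        selector={"app": "orders"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="shop", body=service)
```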

Pod Creation - OpenShift masters create the appropriate number of pods/replicas in the cluster.

Avi Virtual Service Creation - The Avi Controller automatically creates a virtual service or a GSLB (global server load balancing) service with the pods as pool members, applying any extra policies from the annotations. Every OpenShift project namespace corresponds to a tenant in the Avi Vantage Platform.
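
The auto-created objects can then be inspected through the Avi Controller's REST API. A sketch assuming Avi's REST conventions; the endpoint path, tenant/version headers, and credentials are approximations and may differ in your deployment:

```python
import requests

CONTROLLER = "https://avi-controller.example.com"  # your controller address

# List virtual services in the tenant corresponding to an OpenShift project.
resp = requests.get(
    f"{CONTROLLER}/api/virtualservice",            # assumed endpoint; check your API version
    auth=("admin", "PASSWORD"),
    headers={"X-Avi-Tenant": "shop", "X-Avi-Version": "17.2.1"},  # tenant == project namespace
    verify=False,  # demo only; use proper CA verification in production
    timeout=10,
)
for vs in resp.json().get("results", []):
    print(vs["name"])
```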

Application Scale/Upgrades - The application owner scales the deployment up or down (Avi also performs analytics-driven autoscaling) or deploys another version of the application in a Blue-Green pattern. The Avi Controller automatically keeps pools up to date with the new membership and performs the traffic management functions the Blue-Green deployment requires.
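
For instance, a scale-up that the load balancer then tracks automatically is just a change to the deployment's replica count. A sketch with the official kubernetes Python client (deployment name and namespace are illustrative):

```python
from kubernetes import client, config

config.load_kube_config()

# Scale the deployment; the load balancer's pool membership should follow automatically.
client.AppsV1Api().patch_namespaced_deployment_scale(
    name="orders",
    namespace="shop",
    body={"spec": {"replicas": 5}},
)
```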

In our next blog post we will explore several of the unique feature sets and use cases that Avi Networks and OpenShift deliver together for container-based applications.

For more information, please download the white paper Application Networking Services for OpenShift-Kubernetes Clusters.
