Continuous Delivery in Hybrid Kubernetes Environments

Shipping software with security and performance in a multi-cluster architecture

Christian Melendez

What does it take to create continuous delivery (CD) pipelines when you have multiple Kubernetes clusters in different locations? Aspects like connectivity, security and networking are vital to succeeding with CD, especially when your architecture requires multiple clusters. You may encounter challenges when implementing CD in hybrid environments, but there are recommended practices you can follow when you want to ship software continuously.

Why you would need a multi-cluster Kubernetes architecture

Reasons vary as to why an enterprise would need to have a multi-cluster architecture. For instance, an enterprise might need to have one cluster in Europe and another one in the U.S. for data sovereignty. Another company might need support for hard multi-tenancy because it provides a service offering that runs on top of Kubernetes clusters. And others might need an active-active architecture across two regions to offer a highly available system or to minimize the blast radius.

However, when you have to work with multiple Kubernetes clusters, doing CD presents some challenges around operations and management, which include the following:


Having multiple clusters in different locations means you need to consider things like identity and access management (IAM), secrets management, networking topology and firewalls. Even though Kubernetes provides role-based access control (RBAC), each cloud provider also has its own IAM service for giving pods permission to interact with other cloud services. In AWS, you'd use IAM to grant a pod access to S3. In Azure, you can give a pod permission to use other Azure services by integrating with Azure Active Directory. Therefore, you have to plan how to apply the least-privilege principle in each Kubernetes cluster so that users and workloads get access only to what they really need.
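As a minimal sketch of least privilege on the RBAC side, you can scope a CI service account to read-only access on pods in a single namespace; the names here (namespace, account, role) are illustrative, not from any specific setup:

```yaml
# Role: read-only access to pods in one namespace -- nothing cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # illustrative role name
  namespace: team-a       # assumed namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: attach the role to a hypothetical CI service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: deploy-bot        # hypothetical service account used by the pipeline
  namespace: team-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applying the same narrowly scoped Role in every cluster keeps RBAC consistent even when each cloud's IAM layer differs.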

Moreover, each cloud provider has its own key management service (KMS), which means you have to think about how to secure your master key. To solve this type of problem, you can leverage a solution such as Equinix SmartKey™, a cloud-agnostic key management and cryptographic service. For instance, you can integrate SmartKey with Kubernetes so that all the data in etcd (the distributed key-value store Kubernetes uses) is stored encrypted by default. Or you can export your SmartKey keys to a cloud provider to continue using its services while centralizing the master key.
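Kubernetes wires an external KMS into etcd encryption through an EncryptionConfiguration on the API server. The sketch below assumes a KMS plugin exposing a Unix socket (the plugin name and socket path are hypothetical, not SmartKey's actual integration details):

```yaml
# API server encryption config: Secrets are encrypted via an external
# KMS plugin before they are written to etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      name: smartkey-kms                         # hypothetical plugin name
      endpoint: unix:///var/run/kms/socket.sock  # assumed plugin socket path
      cachesize: 1000
      timeout: 3s
  - identity: {}    # fallback so existing unencrypted data stays readable
```

The master key itself never touches the cluster; the API server only talks to the plugin, which delegates to the external key manager.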


Another challenge lies in networking and firewalls: how do you interconnect all the clusters and apply firewall rules? In Kubernetes, you have network policies to limit access to pods, and with tools like Istio, you can restrict access at the service object level. But what about connectivity?
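Before turning to connectivity, here's a quick sketch of the network-policy side mentioned above; the namespace and labels are illustrative:

```yaml
# Allow only pods labeled app=frontend to reach the payments pods
# on TCP 8080; all other ingress to those pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-frontend
  namespace: shop              # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Policies like this govern pod-to-pod traffic inside a cluster, but they say nothing about how the clusters reach each other.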

First, you need to define a networking topology for your Kubernetes clusters. Do you need a mesh network where all clusters will communicate with each other? Perhaps you only need to have a bastion host that will orchestrate the deployments to all clusters. What’s certain is that you need to have secure and private connectivity between your clusters.

On Platform Equinix®, we have Equinix Cloud Exchange Fabric™ (ECX Fabric™), which you can use to connect your on-premises workloads with Kubernetes clusters in different cloud providers. For instance, think about having a bastion host with Jenkins running in an Equinix location. Then, using Jenkins, orchestrate the deployments to the different Kubernetes clusters. Another scenario is where you decide to orchestrate deployments with Microsoft Azure Pipelines. You could use Network Edge, which provides fast and easy access to leading vendors’ virtual network services on Platform Equinix, to interconnect with all of your cloud vendors to maintain a direct, private and secure line of communication between the Kubernetes clusters.

Another challenge comes with managing the kubeconfig file, which is where you define how to connect to all of the different clusters. There's a tool called kubectx[i] that helps you switch contexts easily, and its author, Ahmet Alp Balkan, has written a great article[ii] on the topic. There's also the Kubefed[iii] project, which lets you work with multiple Kubernetes clusters from a single API.
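As a sketch, a single kubeconfig can hold one context per cluster so the same pipeline host targets any of them; cluster names, endpoints and the token are illustrative placeholders:

```yaml
# ~/.kube/config holding two clusters behind two named contexts.
apiVersion: v1
kind: Config
clusters:
- name: eu-cluster                               # illustrative names
  cluster: {server: https://eu.example.com:6443} # assumed API endpoints
- name: us-cluster
  cluster: {server: https://us.example.com:6443}
users:
- name: deployer
  user: {token: REDACTED}     # placeholder credential
contexts:
- name: eu
  context: {cluster: eu-cluster, user: deployer}
- name: us
  context: {cluster: us-cluster, user: deployer}
current-context: eu
```

With this in place, `kubectl config use-context us` (or simply `kubectx us`) switches the target cluster.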

Networking Efficiency

Networking within a Kubernetes cluster is simple: it's a flat network that avoids having to use network address translation (NAT). However, when you work with hybrid environments, your applications might need to communicate with applications outside the cluster. In that case, besides having a direct connection to reduce network hops and improve latency, you also need a service discovery mechanism. For instance, you could use a service mesh like Consul[iv] to register Kubernetes services, or even external dependencies like a database hosted in a virtual machine (VM) on-premises.
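Even without a full mesh, plain Kubernetes objects can give an external dependency an in-cluster DNS name. This sketch maps an assumed on-premises database address to a Service, so pods reach it as if it lived in the cluster (the name and IP are illustrative):

```yaml
# Service without a selector: pods can resolve
# legacy-db.default.svc.cluster.local on port 5432.
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
  - port: 5432
---
# Manually managed Endpoints pointing at the external VM.
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db           # must match the Service name
subsets:
- addresses:
  - ip: 10.0.12.34          # assumed on-premises VM address
  ports:
  - port: 5432
```

A mesh like Consul automates this registration and adds health checking, but the underlying idea is the same: one stable name for a dependency that lives outside the cluster.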

Deployment Management

Last but not least, there's the challenge of deployment management. When doing CD in Kubernetes, you have several tool options, such as Jenkins, Spinnaker, Microsoft Azure Pipelines or GitLab. But as long as you treat your CD pipeline as if it were your application code, you're more likely to deliver frequently and in small batches. Treating your CD pipeline as code also means you have to manage several YAML manifests. Therefore, you'll need help from tools like Helm to package all your deployment manifests, or emerging tools like Pulumi[v], which let you define a delivery pipeline using existing programming languages like TypeScript or Go.
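As a sketch of the Helm approach, a template can parameterize whatever differs per cluster or environment (replicas, image tag) while the manifest structure stays identical; the chart layout and value names here are hypothetical:

```yaml
# templates/deployment.yaml (excerpt from a hypothetical chart)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}    # varies per environment
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
      - name: web
        # Same repository everywhere; only the tag is pinned per release.
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

The pipeline then deploys the same chart to each cluster with a different values file and kubeconfig context, e.g. `helm upgrade --install web ./chart -f values-prod.yaml --kube-context us`.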

A good pattern is to have a single delivery pipeline with different environments that uses the same container images and Kubernetes manifests. You can use ConfigMaps or Secret objects in Kubernetes to define the resources an application uses. Additionally, implementing feature flags will give you more flexibility to deploy features, and strategies like canary deployments will help you release software more carefully. The anatomy of the pipeline should be consistent: rather than one pipeline per environment, treat all environments as if they were production, with the same rules, just at a different scale.
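As a minimal sketch of keeping the image identical across environments, per-environment settings and feature flags can live in a ConfigMap (the keys and values are illustrative):

```yaml
# One ConfigMap per environment; the container image never changes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
  namespace: staging
data:
  DATABASE_HOST: staging-db.internal   # assumed hostname
  FEATURE_NEW_CHECKOUT: "false"        # feature flag toggled per environment
```

A pod picks these up via `envFrom` with a `configMapRef` to `web-config`, so promoting a build from staging to production changes only configuration, never the artifact.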


Working with multiple Kubernetes clusters brings other challenges, like monitoring, that aren't discussed here. However, you have to start with the foundations, and networking and security play a critical role in succeeding with hybrid environments, especially if you want a good base for doing CD and shipping software fast without compromising quality, security and performance. And you now have options like ECX Fabric, Network Edge and SmartKey to face the challenges around networking and security.

Stay tuned; we'll dig deeper into working with Kubernetes in future articles.

Read more about directly and securely interconnecting to multiple cloud platforms with Equinix Cloud Exchange Fabric.








Christian Melendez, Global Solutions Architect