Is Just One Kubernetes Cluster Enough?

How to accelerate multicloud communications when deploying multiple applications

Víctor Martínez Abizanda

Applications consumed globally can run into challenges that a standalone Kubernetes cluster may not be able to solve, such as latency, fault tolerance, isolation or changing legislation. In this article, we will explore scenarios where connecting several clusters together can help improve application performance while reducing cost, and how Platform Equinix® enables fast, secure, low-latency communications for these deployments.

Adjusting Kubernetes resources

One of the benefits of containers and Kubernetes is the flexibility to allocate only the resources needed, which helps optimize compute density. For example, we can define the number of containers running in a pod, the number of pods running on each node and the number of nodes needed to build a cluster. But what about the number of clusters running in multiple clouds? This is another variable that can be tuned in a global solution.
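
As a minimal sketch of tuning those knobs inside a single cluster, the hypothetical Deployment below fixes the pod count and the resources each container may request and consume. The names and image are illustrative only:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-portal                        # illustrative name
    spec:
      replicas: 3                             # how many pods run this workload
      selector:
        matchLabels:
          app: web-portal
      template:
        metadata:
          labels:
            app: web-portal
        spec:
          containers:
          - name: web
            image: example.org/web-portal:1.0 # placeholder image
            resources:
              requests:                       # what the scheduler reserves per container
                cpu: 250m
                memory: 256Mi
              limits:                         # hard ceiling per container
                cpu: 500m
                memory: 512Mi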

The benefits of Kubernetes multiclusters in multiple clouds

Some of the pros of deploying a Kubernetes multicluster solution in different clouds include:

  • Performance: Run compute resources close to your data for lower latency and faster response times.
  • Redundancy: Protect your availability in the event of a cluster failure.
  • Agnosticism: Avoid vendor lock-in by deploying your workloads in multiple cloud destinations.
  • Compliance: Deploy your workloads and data in specific regions to comply with company and government data protection and privacy regulations.

But of course, it’s not all a bed of roses. Managing more than one cluster across multiple clouds can add complexity to some areas, such as monitoring, deployments or traffic management.

Network Edge and ECX Fabric enable fast and secure connections

Communication between Kubernetes clusters plays an important role in this type of architecture. Equinix simplifies it with Equinix Cloud Exchange Fabric™ (ECX Fabric™) and Network Edge.

ECX Fabric enables private, software-defined interconnections between different clouds and/or on-premises clusters, if a hybrid solution is required. With Network Edge, you can deploy virtual network devices from market-leading vendors to connect securely and privately to multiple clouds across distributed locations.

Figure 1. From regular private access to Kubernetes multicluster solution powered by Equinix.

A simple and intuitive portal allows you to choose between different virtual routers and firewalls, so you can quickly deploy networking and security resources where they are most needed. The virtual device acts as the hub for connecting to the clouds selected for your Kubernetes multicluster solution. From the device’s menu, you can create direct connections through ECX Fabric.

Easing the connection in the era of microservices

Microservices offer invaluable benefits in the development and execution of new applications, and they provide a level of control that monolithic or SOA architectures do not.

Service mesh networks

Service mesh networks provide a level of control over microservices when building a Kubernetes multicluster solution. For those who are not yet familiar with the concept, a service mesh is a dedicated, low-latency software infrastructure layer that handles communication between the various microservices.

A service mesh is usually implemented using the sidecar pattern, which means that in each pod one container runs the microservice while a second container runs a proxy that controls incoming and outgoing requests. This way, there is no direct communication between microservices, only between proxies.
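
The hypothetical pod below sketches that pattern: an application container paired with an Envoy-based proxy container. In practice, a mesh such as Istio injects its sidecar automatically and configures it for you, so you would rarely write this by hand; all names here are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: orders                      # illustrative microservice
    spec:
      containers:
      - name: orders                    # container running the microservice itself
        image: example.org/orders:1.0
        ports:
        - containerPort: 8080
      - name: proxy                     # sidecar: inbound and outbound requests flow through it
        image: envoyproxy/envoy:v1.28.0 # proxy configuration omitted for brevity
        ports:
        - containerPort: 15001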

The need for a service mesh tool

When it comes to managing traffic between the clusters of a global microservices-based application, Istio can be a good choice. It is an open source framework for connecting, securing and managing microservices. It is easy to install, and its behavior is configured through YAML templates (see the sketch after this list). Istio addresses the challenges commonly associated with microservices, such as:

  • Service discovery
  • Traffic management
  • Reliability
  • Security
  • Access control
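
As a hedged example of that YAML-driven configuration, the hypothetical VirtualService below puts a timeout and automatic retries in front of a service, touching the traffic-management and reliability items above. The service name is an assumption for illustration:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: orders
    spec:
      hosts:
      - orders.default.svc.cluster.local      # illustrative service name
      http:
      - route:
        - destination:
            host: orders.default.svc.cluster.local
        timeout: 2s                           # fail fast instead of leaving callers hanging
        retries:
          attempts: 3                         # retry transient failures automatically
          perTryTimeout: 500ms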

Kubernetes multicluster use cases

Now that we have covered the basics of deploying a Kubernetes multicluster solution, here are some examples where this approach can be useful:

Improve performance: Imagine an on-premises Kubernetes cluster running web portal microservices, where some queries rely on a cloud service to manage vast amounts of data. In this case, you can deploy a second cluster using the cloud provider's managed Kubernetes service and run only those microservices that continuously interact with the data warehouse service there.

Figure 2. Run your compute workloads close to cloud services.

High availability: You deploy the same critical microservices on two different cloud providers, and they are called from a central on-premises Kubernetes cluster. Istio manages the endpoints of the microservices, balances requests between them and shifts traffic to the cluster that is still operating in the event of a failure, as sketched below.
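
A minimal, hedged sketch of that failover behavior: a ServiceEntry makes the remote copies of the service visible to the central mesh, and a DestinationRule with outlier detection ejects endpoints that keep failing. Real multicluster meshes typically add east-west gateways as well; all names and addresses here are assumptions for illustration:

    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: payments-remote
    spec:
      hosts:
      - payments.global                 # illustrative mesh-wide host name
      location: MESH_INTERNAL
      ports:
      - number: 8080
        name: http
        protocol: HTTP
      resolution: STATIC
      endpoints:
      - address: 10.0.1.10              # assumed address of the copy in cloud A
      - address: 10.1.1.10              # assumed address of the copy in cloud B
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: payments-remote
    spec:
      host: payments.global
      trafficPolicy:
        outlierDetection:
          consecutive5xxErrors: 5       # eject an endpoint after repeated errors
          interval: 10s
          baseEjectionTime: 2m          # keep it out of rotation before retrying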

Canary deployments: As in the previous case, you can run different versions of the same microservice, each one executed in a different cloud. You can assign each version a weight and adjust it as the rollout progresses, as sketched below.
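
A hedged sketch of that weighted routing between two versions; the host, subset names and percentages are illustrative only:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: checkout
    spec:
      hosts:
      - checkout.global                 # illustrative host
      http:
      - route:
        - destination:
            host: checkout.global
            subset: v1                  # current version, running in one cloud
          weight: 90
        - destination:
            host: checkout.global
            subset: v2                  # canary version, running in another cloud
          weight: 10
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: checkout
    spec:
      host: checkout.global
      subsets:                          # subsets map the weights above to pod labels
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2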

Compliance: Your data must remain in a particular region, and the backend services that operate on it must run in that same Equinix International Business Exchange™ (IBX®) data center using an on-premises cluster. You can deploy the frontend services using a cloud provider's managed Kubernetes solution, taking advantage of the cloud's flexibility, and call the on-premises backend services only when the two need to interact.

Figure 3. Keep your container workloads near your data.

Conclusions

We’re in a multicloud era, where applications are designed to run in and across multiple clouds. Kubernetes offers an unparalleled framework to standardize their management and execution. Multicluster architectures are an extra color on the palette to consider when developing applications today and into the future.

Equinix provides the services necessary to give you all the flexibility and power of communications between your Kubernetes clusters. This means you can think big without limiting your designs.

Learn how to speed multicloud communications with Network Edge services.
