Migrating to Kubernetes? Start Your Journey With Hybrid Workloads

Equinix and Microsoft Azure ExpressRoute accelerate access to Kubernetes services

Christian Melendez

Kubernetes is on everyone’s lips these days. As the leading open-source orchestration software for deploying, managing and scaling containers in the cloud, it is where many companies plan to migrate their workloads. However, not all companies are ready to start this journey, and it is often better to begin the migration with a hybrid approach, where you first learn how to run systems with Kubernetes. For instance, a company can configure a deployment strategy that runs Kubernetes (on-premises or in the public cloud) side by side with an existing on-premises, VM-based production system by mirroring traffic to it.

Going hybrid with Kubernetes is not a problem, as almost all cloud providers have a managed Kubernetes offering. And thanks to open-source projects like kubeadm, spinning up a Kubernetes cluster on-premises is possible as well. Moreover, if you want to coordinate multiple clusters from a single API, the kubefed project is in the works. At the recent O’Reilly Velocity conference in Berlin, Bastian Hofmann from SysEleven demonstrated how to work with cross-region deployments in Kubernetes using kubefed. But even though the Kubernetes ecosystem is large and there are a lot of options, many of them are for what I call “Kubernetes Day 2,” which I will cover further in a future blog.
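If you do want to try an on-premises cluster with kubeadm, a minimal cluster configuration can be expressed in YAML and passed to kubeadm init. The sketch below is illustrative only; the endpoint, version label and pod CIDR are placeholders you would replace with your own values:

```yaml
# cluster.yaml - a minimal kubeadm configuration sketch (values are placeholders)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "stable"                          # or pin an explicit version
controlPlaneEndpoint: "k8s-onprem.example.com:6443"  # hypothetical on-premises endpoint
networking:
  podSubnet: "10.244.0.0/16"                         # must match your CNI plugin's expected CIDR
```

Bootstrapping the first control-plane node is then a single command: kubeadm init --config cluster.yaml.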

Options for hybrid workloads in Kubernetes

You don’t have to start with a sophisticated approach; there are enough challenges already in modernizing systems to use containers. Even a solution as simple as having a load balancer like NGINX in front of your system will work if you’re running on-premises. You can also use an API gateway that runs on Kubernetes, like the Ambassador project, which works as an edge proxy between services and can split traffic between destinations, as sketched below. Additionally, public cloud providers have services like AWS Route 53 or Microsoft Azure Traffic Manager, where you can configure the percentage of traffic that is sent to a Kubernetes cluster. You do have to be careful with DNS caching, though, as that’s something you can’t control: cached records may point users to an older version of the system, where they receive errors and end up with a bad experience.
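As an illustration of the API gateway route, here is a rough sketch of how Ambassador’s Mapping resource can send a small share of requests to a new in-cluster service while the rest continues to the existing backend. The service names and prefix are hypothetical, and the exact fields may vary by Ambassador version, so treat this as a starting point rather than a definitive configuration:

```yaml
# Route most /app/ traffic to the existing (VM-backed) service...
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: app-vms
spec:
  prefix: /app/
  service: legacy-vm-backend.example.com   # hypothetical on-premises backend
---
# ...and shift roughly 10% to the new service running in Kubernetes.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: app-kubernetes
spec:
  prefix: /app/
  service: app-service.default:8080        # hypothetical in-cluster service
  weight: 10
```

Raising the weight over time is a simple way to increase the share of traffic the Kubernetes deployment handles as your confidence grows.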

Not all companies are ready to start the journey toward Kubernetes. It is recommended to begin the migration with a hybrid approach.

Another approach is to use a service mesh: a networking layer that controls how services communicate with each other. A service mesh also removes the dependency on DNS caching mentioned previously. For instance, you can enforce networking policies such as allowing communication only over secure protocols like HTTPS. Because the service mesh intercepts the traffic, it can emit telemetry to help you understand complex networking topologies. It can also control how traffic is distributed, as if it were a load balancer, which fits perfectly with our mission of running hybrid workloads with Kubernetes. Istio is a popular service mesh that you can deploy in Kubernetes. Instead of having a load balancer that distributes the traffic, you can use Istio as the front-facing part of the system. An additional benefit is that all of these policies are defined through declarative YAML manifests, which helps you automate any future change. To illustrate, the architecture diagram below represents how to configure hybrid workloads between public cloud Kubernetes and an on-premises infrastructure on Platform Equinix®.
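As a taste of those declarative manifests, the sketch below shows an Istio VirtualService that weights traffic between the existing VM fleet and the new in-cluster deployment. The hostnames, gateway name and weights are hypothetical, and the VM destination assumes something like a ServiceEntry (or mesh expansion) that makes the on-premises endpoint known to Istio:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - frontend.example.com                            # hypothetical public hostname
  gateways:
  - frontend-gateway                                # an Istio ingress gateway defined elsewhere
  http:
  - route:
    - destination:
        host: frontend.default.svc.cluster.local    # the service running in Kubernetes
      weight: 20
    - destination:
        host: vms.onprem.example.com                # the existing VM-based system
      weight: 80
```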

This use case has a Kubernetes cluster running in Microsoft Azure that connects to the on-premises infrastructure via a direct and secure Microsoft Azure ExpressRoute circuit and Equinix Cloud Exchange Fabric™ (ECX Fabric™), providing lower latency and greater bandwidth than traditional network infrastructures for faster performance and direct, secure connectivity. This private interconnection between the on-premises and Microsoft Azure environments bypasses the public internet, which poses increased congestion and security risks.

Kubernetes is becoming a commodity, as it is a platform for building platforms.

Users access the production system on-premises via Istio running on Kubernetes, which distributes the traffic between the on-premises VMs and the Kubernetes nodes. Istio not only splits the traffic, it can also mirror it, in case you want to gain experience running Kubernetes in production before it serves live traffic. At some point, you might decide to keep specific workloads on-premises, like databases. Istio lets you configure those networking policies without your users or your application noticing that you’re using it. Additionally, you can leverage service mesh benefits from day zero, like telemetry and secure transport protocols.
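Continuing the hypothetical manifests from above, traffic mirroring and an on-premises database can be declared in the same style. The hostnames and port are illustrative, and field details may differ between Istio versions:

```yaml
# Keep serving production from the VMs, but mirror a copy of each request
# to the Kubernetes deployment so you can observe it under real traffic.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-mirror
spec:
  hosts:
  - frontend.example.com
  http:
  - route:
    - destination:
        host: vms.onprem.example.com               # live responses still come from the VMs
      weight: 100
    mirror:
      host: frontend.default.svc.cluster.local     # shadow copy sent to Kubernetes
---
# Make an on-premises database reachable from the mesh without moving it.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: onprem-database
spec:
  hosts:
  - db.internal.example.com                        # hypothetical on-premises database host
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 5432
    name: tcp-postgres
    protocol: TCP
```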

Removing barriers to accessing Kubernetes services in Microsoft Azure

Traditional networks cannot provide the level of performance, security or governance required to support dynamic hybrid clouds, such as those enabled by Microsoft Azure. Network architecture complications, including increased congestion, high latency and lack of visibility, can keep IT organizations from efficiently moving workloads between on-premises infrastructures and Microsoft Azure. By leveraging Microsoft Azure ExpressRoute and ECX Fabric, you can accelerate cross-premises connections among enterprises’ on-premises data centers, distributed colocation data centers and Microsoft Azure for hybrid cloud workloads. Equinix supports Microsoft Azure ExpressRoute in 23 global markets on Platform Equinix, where Microsoft customers can choose the optimal geos for migrating their applications and services to Kubernetes in Azure with Equinix.

What lies ahead

Kubernetes is becoming a commodity; as Kelsey Hightower puts it, it is a platform for building platforms.[i] And many companies are doing exactly that already. For example, VMware recently launched Tanzu, a way to operate multiple Kubernetes clusters in one place. Google also made Anthos Migrate, its project for migrating virtual machines to containers, generally available, which will help you focus on configuring hybrid workloads. And the open-source community is active in this space as well, with projects like KubeVirt, Kata Containers and Weave Ignite that bring virtual machines into the Kubernetes and container world.

The Kubernetes ecosystem will continue evolving, and there are already a lot of options and strategies available to start your migration journey. Which one is the best for you? It depends on your budget, skills, timeline and experience. A conservative approach would be to use a service mesh like Istio or an API gateway like Ambassador. Whichever method you choose, make sure to use direct, secure and private connections, such as those enabled by Microsoft Azure ExpressRoute and ECX Fabric, to provide a smoother and higher-quality experience for your users.

Learn more about Microsoft Azure ExpressRoute and ECX Fabric.

[i] https://twitter.com/kelseyhightower/status/935252923721793536?lang=en
