Migrating virtual machines (VMs) to containers isn’t always simple. It’s a journey, just as migrating workloads to the cloud is a journey. Containers are ephemeral, which makes them a great fit for stateless workloads. Many applications, however, were not built to live in an environment that changes and evolves constantly. VMs, by contrast, change far less often, which makes them a better fit for stateful workloads.
In today’s post, I’ll shed some light on how to successfully migrate VMs to containers and describe what the ecosystem of tools and products looks like. So, let’s get into it and start with how to migrate legacy workloads to containers.
Using containers for legacy workloads
One of the most common concerns with migrating to containers is how to manage legacy workloads — those developed before the introduction of containers in 2013. A typical problem with these workloads is that they are not designed to adapt to the volatile nature of a container. Moreover, figuring out all of a workload’s dependencies is challenging, and some of those dependencies might not be supported in containers.
There are a few tools and products that are designed to help customers move legacy workloads to containers. For instance, Google Anthos Migrate[i] helps generate a container from a VM when you are using Anthos. We’ll explore the ecosystem of tools and services that are available later in this post.
Regarding Windows workloads, the ecosystem is evolving. Windows has offered native container support since Windows 10 on the desktop and Windows Server 2016 on servers, making it possible to build hybrid Kubernetes clusters. And it is already possible to integrate authentication with Active Directory within a container[ii]. Microsoft has published a set of example Dockerfiles[iii] that customers can use as a starting point to migrate their workloads manually. If you want to take a deep dive into Windows containers, the Kubernetes Podcast from Google has an excellent episode on this topic[iv].
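To give a flavor of what those starting points look like, here’s a minimal sketch of a Dockerfile for a legacy ASP.NET application on a Windows base image. The base image tag is one Microsoft publishes; the application name and publish path are hypothetical:

```dockerfile
# escape=`
# Minimal sketch: containerize a legacy ASP.NET Framework app on Windows.
# The publish path below is illustrative, not prescriptive.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019

# Copy the pre-built application into IIS's default site folder
WORKDIR /inetpub/wwwroot
COPY ./publish/ .
```

Note that Windows container images must match the Windows version of the host they run on, which is one reason hybrid clusters keep dedicated Windows node pools.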
So, once you have the container image, it’s time to publish. But how? Let’s see.
Modernize in small batches
A common pattern for pushing new changes into a system is the strangler pattern[v], where the idea is to gradually replace the old system by routing requests either to it or to new services at its edges. In other words, you don’t need to modernize the whole system at once; you can do it in small batches. For instance, you can decouple a small part of the system and build an equivalent microservice to containerize it. Then, at the networking level, you create routes inside the system that point to the microservice’s location.
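As a sketch of what strangler-style routing can look like in Kubernetes, the Ingress below sends one path to a new containerized microservice while everything else still reaches the legacy system. All service names here are hypothetical:

```yaml
# Strangler-style routing sketch: /orders goes to a new microservice,
# all other paths still hit the legacy application (or a proxy to it).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strangler-routes
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders-svc        # new containerized microservice
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: legacy-app        # existing monolith behind a Service
            port:
              number: 80
```

As more pieces are carved out, you add routes one at a time until the legacy backend handles no traffic at all.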
You can run experiments to see how your application behaves when using containers, and then learn how to repeat the same process for the rest of the system. Not all parts of the system may have the same dependencies or be hosted at the same location. You can take advantage of cloud services for containers while at the same time continuing to use your on-premises infrastructure — whether that infrastructure is running in containers or not.
The following diagram illustrates how to implement a hybrid architecture. Start by pointing your users to the cloud, and from there split the traffic as needed. For instance, 95% of users could still go to the on-premises infrastructure while 5% use containers in Azure. If all the containers are running on the Kubernetes service from Azure, you can use ExpressRoute and an Equinix Fabric™ connection to send all user requests through a private, direct, and secure connection, without using the public Internet. Little by little, you can move all your services to containers, both in the cloud and on premises, and leave all the data on premises.
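If you happen to run a service mesh such as Istio at the entry point, the 95/5 split described above could be expressed roughly like this (hostnames and service names are hypothetical):

```yaml
# Weighted traffic-split sketch: 95% of requests go to the on-premises
# backend, 5% to the containerized service in Azure.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: hybrid-traffic-split
spec:
  hosts:
  - app.example.com
  http:
  - route:
    - destination:
        host: onprem-gateway      # proxies to on-premises infrastructure
      weight: 95
    - destination:
        host: containers-aks      # containerized service in Azure
      weight: 5
```

Shifting the migration forward is then just a matter of adjusting the weights and redeploying.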
Now that you’ve identified a strategy to make a smooth transition to containers, let’s talk about the tools, services and products that exist to help you migrate your workloads to containers.
Tap into an extensive ecosystem of tools and products
There are multiple tools available to migrate your VM workloads to containers. One group of solutions helps you manage VMs in a container platform like Kubernetes. Projects like KubeVirt[vi], Virtlet[vii] and RancherVM[viii] all allow you to run VMs in Kubernetes as containers. Rancher also recently launched an open-source project called Harvester[ix] that uses KubeVirt to provide a VM experience using containers. A guide[x] is available that explains how to use Harvester in Equinix Metal™.
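To make this concrete, here’s a minimal sketch of how KubeVirt lets you declare a VM through the Kubernetes API like any other workload. The VM name and sizing are illustrative; the container disk image is one of the publicly published KubeVirt demo disks:

```yaml
# Sketch of a KubeVirt VirtualMachine: the VM is declared, scheduled
# and managed through the Kubernetes API alongside containers.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest
```

Once applied with `kubectl`, the VM appears as a regular Kubernetes resource, so the same tooling you use for containers (labels, namespaces, RBAC) applies to it.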
As you can see, the ecosystem has been evolving to make it easier to run VMs in container platforms while continuing to run VMs alongside containers. However, if you would rather not worry about converting VMs, you can use Google Anthos Migrate[xi] to help you convert VMs to containers automatically.
Maintain a continuous improvement mindset for migrating VMs to containers
As you can see, there’s no simple, one-size-fits-all answer for migrating VMs to containers right now. But the direction the industry is taking is to help you run legacy VM workloads in container platforms. Maintain a continuous improvement mindset when you start experimenting, because the journey to containers will look like a hybrid model for a while. Work in small batches and pay attention to how networking can help make your modernization journey smoother. Projects like Harvester allow you to extend your workloads at the edge. Platform Equinix™, with services like Equinix Fabric, Network Edge, and Equinix Metal, helps you solve challenges with migrating VMs to containers. For more information, watch our webinar on executing an interconnected cloud strategy.