Architecture Patterns for Kubernetes at the Edge

Exploring the requirements for Kubernetes architectures at the edge

Christian Melendez

Edge computing continues to grow, and enterprise application developers and hyperscale cloud providers (Google Cloud, Microsoft Azure, etc.) are betting on Kubernetes at the edge with open source projects like k3s Lightweight Kubernetes[i] and Akri[ii], as well as platforms like Google Anthos. However, different challenges arise when companies want a consistent approach for managing workloads at the edge using Kubernetes.

The three major reasons companies opt for edge computing are low latency, data privacy and bandwidth scalability. These are critical factors when architecting edge use cases such as internet of things (IoT) applications like autonomous cars, where decisions need to be made extremely fast or tragic consequences can result. Hence, high-performance, low-latency private networking plays a crucial role in succeeding with edge workloads and interconnecting them to cloud services.


We’ll start by exploring the requirements for Kubernetes architectures at the edge on Platform Equinix®.

Requirements for successfully deploying Kubernetes architectures at the edge

At the edge, you typically work with a smaller footprint of servers or devices that don't have enough capacity to run Kubernetes effectively; the majority of IoT devices, for example, are just sensors. You also have to consider that connectivity may be constrained at times, whether because of latency issues, bandwidth limitations or disconnected devices.

Interconnection between the different architectural components and locations is the first requirement—for instance, the connectivity between your on-premises infrastructure and your cloud and edge resources. You can't depend on the public internet as a network at the edge because of its unpredictable connection routes. Therefore, you need a direct and private line for communicating between resources. You might also need a private connection for security reasons, to reduce the risk of someone else "sniffing" your network traffic.

Moreover, automation is one of the crucial features that makes Kubernetes so attractive, so you also need a way to provision hardware automatically, as you would in a cloud environment, using tools (e.g., Terraform) that provision and manage any cloud, infrastructure or service through software. Finally, you'll need the help of open source projects (e.g., k3s, microk8s, KubeFed or KubeEdge) to optimize edge workloads in Kubernetes. I'll explain where and how these projects fit into the big picture.[iii]
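To make the automation requirement concrete, here's a minimal Python sketch of API-driven provisioning—the same kind of workflow a Terraform provider automates for you. It calls the Equinix Metal REST API to request a small bare metal server that could host a k3s node; the hostname, plan and metro values are illustrative placeholders, and the environment variables are assumed to hold your own credentials.

```python
import os

import requests

API_URL = "https://api.equinix.com/metal/v1"
TOKEN = os.environ["METAL_AUTH_TOKEN"]       # assumed: your Equinix Metal API token
PROJECT_ID = os.environ["METAL_PROJECT_ID"]  # assumed: your project's UUID


def provision_edge_server():
    """Request a small bare metal server that could host a k3s edge node."""
    resp = requests.post(
        f"{API_URL}/projects/{PROJECT_ID}/devices",
        headers={"X-Auth-Token": TOKEN},
        json={
            "hostname": "edge-k3s-node-01",    # hypothetical name
            "plan": "c3.small.x86",            # example server plan
            "metro": "sv",                     # example metro code
            "operating_system": "ubuntu_20_04",
        },
        timeout=30,
    )
    resp.raise_for_status()
    device = resp.json()
    print(f"Provisioning device {device['id']} (state: {device['state']})")
    return device


if __name__ == "__main__":
    provision_edge_server()
```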

Let’s now explore three architectural patterns that cover these requirements.

Kubernetes edge architecture patterns

The following three architecture patterns best showcase how Kubernetes can be used for edge workloads, along with all the different elements you'll need to build an architecture that matches each application requirement: low latency, data privacy and bandwidth scalability.

Kubernetes Clusters at the Edge

The simplest way to get started is by deploying a whole Kubernetes cluster at the edge. However, instead of deploying a high availability cluster, you can use projects like k3s or microk8s to implement a minimal version of Kubernetes on a single-server machine. Then, you can use platforms like Google Anthos to manage and orchestrate container workloads on multiple clusters. At Equinix, we've been working together with the Google Anthos team to help deploy Kubernetes using Equinix Fabric™ software-defined interconnection, Network Edge virtual network services and Equinix Metal™ automated bare metal-as-a-service. For instance, you can find a basic Terraform template on GitHub to deploy Anthos on Equinix Metal automatically.

The following diagram shows what this pattern looks like when you have a Kubernetes cluster running in a cloud provider, a minimal Kubernetes cluster running on Equinix Metal using k3s and interconnection using Equinix Fabric.

You can find more information on our documentation page about how to set up k3s on Equinix Metal. Additionally, there's another tutorial that guides you through spinning up a Kubernetes cluster on Equinix Metal in just ten minutes.
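As a quick sanity check after following those guides, here's a minimal sketch using the official Kubernetes Python client to confirm the edge node reports Ready. It assumes you've copied the kubeconfig that k3s generates at /etc/rancher/k3s/k3s.yaml to a local file (the path below is a placeholder):

```python
from kubernetes import client, config

# Load the kubeconfig copied from the k3s server; the file name is a
# placeholder for wherever you saved /etc/rancher/k3s/k3s.yaml locally.
config.load_kube_config(config_file="k3s-edge.yaml")

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # Each node carries a list of conditions; "Ready" is the one we care about.
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```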

Kubernetes Nodes at the Edge

For cases where infrastructure at the edge is too limited to host a full cluster, you can run a Kubernetes node at the edge and keep your main Kubernetes cluster at a cloud provider or in a colocation data center. Then, you can provision the edge machines themselves using Equinix Metal.

Networking becomes even more important in this pattern. To run Kubernetes nodes at the edge, you can use an incubating project from the Cloud Native Computing Foundation (CNCF) called KubeEdge.[iv] With KubeEdge, the Kubernetes control plane resides in the cloud, while Kubernetes nodes, or even devices at the edge, run an agent that interacts with the Kubernetes API. Additionally, other KubeEdge components can help with things like communicating with IoT devices over the MQTT lightweight messaging protocol, designed for small sensors and mobile devices, or syncing device state to the cloud.
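To illustrate the device side of that MQTT flow, here's a small hedged sketch using the paho-mqtt library to publish sensor readings. The broker address and topic are placeholders; a real KubeEdge deployment defines its own broker endpoint and topic conventions.

```python
import json
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x API

BROKER_HOST = "edge-mqtt.local"  # hypothetical broker running at the edge
TOPIC = "sensors/temperature"    # hypothetical topic; KubeEdge defines its own scheme

client = mqtt.Client()
client.connect(BROKER_HOST, port=1883, keepalive=60)
client.loop_start()  # handle network traffic on a background thread

# Publish a few readings, the way a small sensor gateway might.
for _ in range(3):
    payload = json.dumps({"device": "sensor-01", "temp_c": 21.5, "ts": time.time()})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```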

Besides KubeEdge, there's a paper on FLEDGE[v], a Kubernetes-compatible edge container orchestrator, in which the authors show how the right networking implementation is vital at the edge.

The following diagram represents this pattern:

Kubernetes Devices at the Edge

Lastly, the third pattern has devices at the edge. KubeEdge fits into this pattern as well, but Microsoft recently released Akri, an open source project for those small devices on which you can't install k3s. Akri registers leaf devices at the edge, such as IP cameras and USB devices, as native Kubernetes resources. You'd still need Kubernetes nodes at the edge (as in the diagram from the previous pattern), but you don't need to install Kubernetes on each device: Akri registers the devices connected to the same network.
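As a rough sketch of what that registration looks like from the cluster's point of view, the snippet below lists the device resources Akri creates, using the Kubernetes Python client. The group and version (akri.sh/v0) reflect Akri's custom resource definitions at the time of writing and may change as the project evolves.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes access to the cluster running Akri
api = client.CustomObjectsApi()

# Akri exposes each discovered leaf device as an "Instance" custom resource.
instances = api.list_cluster_custom_object(
    group="akri.sh", version="v0", plural="instances"
)
for item in instances.get("items", []):
    name = item["metadata"]["name"]
    shared = item.get("spec", {}).get("shared", False)
    print(f"Discovered device instance: {name} (shared={shared})")
```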

Kubernetes at the edge continues to evolve

Kubernetes is great at offering a common layer of abstraction across different environments. Many companies are looking at Kubernetes for its extensibility, portability and scalability. However, Kubernetes at the edge is just beginning to gain traction, and it has been evolving in recent years with projects like k3s, microk8s, KubeEdge and Akri. Still, the big picture has missing pieces such as device discovery, governance and data management.

At Equinix, our contribution is offering a software-defined interconnection solution (Equinix Fabric), virtual network services (Network Edge) that can be deployed in minutes, and physical infrastructure (Equinix Metal) delivered at software speed. Additionally, we understand that automation is key, which is why we've invested in the Terraform community. We've also contributed to the Kubernetes community in different ways and tripled our investment in the CNCF.

Want to learn more? Check out our Equinix Fabric data sheet.

You may also want to read:

Interconnection Amplifies the Value of Bare Metal Deployments

Revolutionize the Way You Build and Manage Your Network with Network Edge

 

[i] k3s: Lightweight Kubernetes

[ii] Announcing Akri, an open-source project for building a connected edge with Kubernetes

[iii] MicroK8s – Zero-ops Kubernetes for developers, edge, and IoT

[iv] KubeEdge

[v] FLEDGE: Kubernetes Compatible Container Orchestration on Low-resource Edge Devices
