Running Machine Learning Applications via Kubeflow

How putting ML closer to users accelerates insights from distributed data at the edge

Víctor Martínez Abizanda

In the midst of expanding digital transformation, data is taking center stage.

A vast amount of data is continuously being generated at the edge. Thanks to new technologies such as machine learning (ML), this data is actively being analyzed to personalize our daily activities. ML and related analytics fields, which uncover patterns through artificial intelligence (AI) algorithms, provide insights that will change our perception of how we use data. For example, these new insights generate greater value for companies by improving employee productivity, creating new business opportunities and enhancing customer experience.

Interconnection, the necessary alliance

The interaction between users, data and ML processes is active and global. The goal is to respond to or act on a user’s action using a trained model as quickly and efficiently as possible. Interconnecting data with the public clouds where ML processes reside means placing everything as close as possible to users to reduce latency, improve the efficiency of data and ML processing and, ultimately, enhance the user experience. Latency-sensitive ML actions, such as real-time decisions or user-customized services, require distributing the mechanisms that capture large amounts of data much more widely, so that data is captured as early as possible and responses come faster.

Equinix Fabric™

Equinix Fabric directly, securely and dynamically connects distributed infrastructure and digital ecosystems on Platform Equinix®. Establish data center-to-data center network connections on demand between any two ECX Fabric locations within a metro or globally via software-defined interconnection.


Connected cars are a clear example: they generate a huge amount of data per hour (an estimated average of 3 terabytes), and data processing, analytics and AI/ML processing in the cloud are much more advantageous when located at the edge. The resulting performance benefits and cost savings of not backhauling data to a centralized data center over a long-haul network are proof that this local “data adjacency” to the cloud model is extremely beneficial to most companies.

Kubeflow as a platform for ML apps

In the connected car example, delivering your ML applications at the edge, rather than via a centralized cloud platform in your main data center, is critical, since the recognition patterns integrated into the vehicles’ (endpoints’) driving algorithms are continuously updated. To balance the increased complexity of a distributed architecture, projects such as Kubeflow[i] provide the tools needed to move your ML workloads to production and are ready to run on Kubernetes clusters.

Deploying and managing all of these components directly as Kubernetes resources can be quite complex, since Kubernetes comes with a learning curve; without a solid background in the underlying infrastructure, it is not an easy task. The Kubeflow project, which shipped its first production release in February 2020, provides much-needed simplicity. There is a clear intention to make Kubeflow a standard for ML, and its community keeps growing.[ii]

Kubeflow is not an application itself, but rather a set of compatible ML frameworks and tools that provide the ecosystem necessary to easily develop, deploy and manage your entire ML application lifecycle within a DevOps framework.
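
To make this concrete, here is a minimal sketch of how an ML workflow might be expressed with the Kubeflow Pipelines SDK. It assumes the v1 kfp package, and the step names, container images and paths are illustrative placeholders rather than anything prescribed by Kubeflow.

    # Minimal sketch using the Kubeflow Pipelines SDK (kfp v1.x assumed).
    # Step names, container images and paths are illustrative placeholders.
    import kfp
    from kfp import dsl

    def preprocess_op():
        # Each lifecycle stage runs as its own container on the Kubernetes cluster.
        return dsl.ContainerOp(
            name='preprocess',
            image='example.registry/ml/preprocess:latest',  # placeholder image
            arguments=['--input', '/data/raw', '--output', '/data/clean'],
        )

    def train_op():
        return dsl.ContainerOp(
            name='train',
            image='example.registry/ml/train:latest',  # placeholder image
            arguments=['--data', '/data/clean', '--model-dir', '/models'],
        )

    @dsl.pipeline(
        name='ml-lifecycle',
        description='Preprocess data, then train a model, as independent steps.'
    )
    def ml_pipeline():
        preprocess = preprocess_op()
        train = train_op()
        train.after(preprocess)  # training waits for preprocessing to finish

    if __name__ == '__main__':
        # Compile to a workflow file that a Kubeflow Pipelines installation can run.
        kfp.compiler.Compiler().compile(ml_pipeline, 'ml_pipeline.yaml')

The compiled workflow can then be uploaded to any Kubeflow Pipelines instance, whether it sits in a central cloud region or on a Kubernetes cluster at the edge.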

The advantages of Kubeflow being built on Kubernetes

Thanks to Kubernetes, Kubeflow has certain features that make it one of the best options for running ML apps globally on distributed architectures.

The main advantages that the development of Kubeflow offers include:

  • Portability: Thanks to containerization, you can deploy the modules that make up your ML applications on any infrastructure where Kubeflow runs. From a local computer to an on-premises or cloud server, you get the same version of the application across the different Kubernetes clusters running at global scale, with Kubeflow handling the abstraction.
  • Scalability: Not all phases of an ML workflow require the same computing power, and the demands of a specific experiment may also vary based on the environment where it is deployed. Because Kubeflow is built on Kubernetes, it can maximize resource utilization and scale to match defined needs: CPUs, GPUs and TPUs are used according to the cluster’s needs and scaling policies, regardless of location, since the foundation is Kubernetes (see the sketch after this list).
  • Composability: Kubeflow allows each stage of the process to run as an independent system. In this way, it offers the possibility of loading a specific framework and library for each of the building blocks in your ML workflow. This facilitates integrating Kubeflow’s different tools and their numerous versions, providing harmony within your operation.
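
As a rough illustration of the scalability and composability points above, the sketch below (again assuming the v1 kfp SDK, with placeholder images and resource sizes) gives each step its own container image and lets the training step request a GPU, so the Kubernetes scheduler of whichever cluster runs the pipeline places each step on an appropriate node.

    # Sketch only: kfp v1.x SDK assumed; images and resource sizes are placeholders.
    from kfp import dsl

    @dsl.pipeline(name='resource-aware-steps')
    def resource_pipeline():
        # Lightweight feature-engineering step: its own image, modest CPU and memory.
        features = dsl.ContainerOp(
            name='feature-engineering',
            image='example.registry/ml/features:latest',  # placeholder image
        )
        features.container.set_cpu_request('1')
        features.container.set_memory_request('2G')

        # Training step: a separate framework image plus a GPU limit, so the
        # scheduler places it on a GPU node wherever the cluster happens to run.
        train = dsl.ContainerOp(
            name='train',
            image='example.registry/ml/train-gpu:latest',  # placeholder image
        )
        train.container.set_gpu_limit(1)
        train.container.set_cpu_request('4')
        train.container.set_memory_request('16G')
        train.after(features)

The same pipeline definition can be submitted unchanged to clusters with different hardware profiles; only where the scheduler places each step changes.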

For these reasons, Kubeflow makes it simple to build environments for running ML experiments, enabling data professionals to dedicate more time to what really matters most: creating ML/AI models.

Kubeflow + Kubernetes + Equinix = The Best Formula

Equinix provides the optimal platform to run your ML applications using Kubeflow and Kubernetes. On Platform Equinix®, you can globally bring together and interconnect all the pieces of the data/cloud-based ML/Kubeflow puzzle so you can get the most information from your data.

Our global network of more than 210 International Business Exchange™ (IBX®) data centers enables you to place data, processing resources and access to cloud ecosystems that provide ML/AI applications in close proximity for the best possible user experience. And Equinix Fabric™ provides the direct, dynamic and secure software-defined interconnection that connects this distributed infrastructure to global digital ecosystems and helps you ensure data protection and privacy compliance.

Learn more about Equinix Fabric or contact us to find out more about our interconnection solutions when deploying Kubeflow-based ML applications.

 

[i] https://www.kubeflow.org/

[ii] https://www.kubeflow.org/docs/about/community/
