From Multiple Clouds to Multicloud: The Next Evolution in AI Networking

Today’s dynamic AI workloads require equally dynamic connectivity—made possible only through an integrated multicloud environment

Roger Duclos
Igor Tarasenko

TL;DR

  • Many enterprises run multiple isolated cloud environments instead of true integrated multicloud, limiting their ability to optimize workloads dynamically for AI applications.
  • Private interconnection solutions like Equinix Fabric enable on-demand cross-cloud connectivity without internet limitations, supporting distributed AI workload requirements.
  • Dynamic multicloud networking allows businesses to adapt infrastructure as AI needs evolve, accessing the right models and compute resources at the right time.

The concept of multicloud is nothing new. For more than a decade now, IT leaders have recognized the advantages of acquiring infrastructure and services from more than one cloud provider. These advantages include everything from flexibility to cost-efficiency to risk mitigation.

Today, it’s clear that the early promise of multicloud hasn’t fully paid off yet, and that’s largely due to networking challenges. Many enterprises are still struggling with the complexity of multicloud networking, and this often leads to limited mobility between clouds. In the Flexera 2025 State of the Cloud Report, 86% of organizations said they are using multicloud. However, when asked how they’re using multicloud, 57% of organizations said they keep their apps siloed on different clouds.[1]

A siloed approach makes it difficult to take advantage of the core value of multicloud: using the right cloud for the right workloads at the right time. In fact, one could argue that such an environment isn’t truly multicloud at all: It’s merely multiple clouds.

What makes true multicloud different?

In an integrated multicloud environment, data and workloads can flow between different clouds whenever the need arises—for example, shifting inference workloads closer to clients and data sources, bursting training workloads to available compute hardware, or moving data to where models perform best. When conditions change, the cloud environment can change too. In contrast, many IT teams are essentially running several mostly isolated and relatively static cloud environments that all have to be managed separately, using different tools and processes from different cloud providers.

Due to networking challenges, these different cloud environments have clearly defined boundaries between them. If an application starts out running in Cloud A, it will likely stay there. Even if the situation changes and Cloud B becomes the better option, migrating the application workloads—fully or partially—would likely prove too difficult and time-consuming. Thus, businesses will continue to run applications on cloud infrastructure that isn’t the best fit for their needs—overpaying for compute, tolerating avoidable latency, and increasing exposure to security and resiliency risks as workloads become more dynamic.

The main goal of a multicloud environment should be to ensure flexibility with on-demand access to services from multiple cloud providers. Without the right network infrastructure, this kind of flexibility isn’t possible.

How can businesses optimize multicloud networking for the AI era?

AI turns a nuisance into a breaking point. The advent of generative AI and agentic AI is increasing east-west traffic, intensifying data gravity, and placing stricter demands on latency and reliability for real-time inference. These changes are exacerbating many of the problems already found in poorly integrated multiple-cloud environments. Static cloud infrastructure can’t support dynamic data flows, and AI data flows are about as dynamic as it gets.

Emerging AI workloads are inherently distributed. Enterprises likely have AI training or fine-tuning workloads that need very high compute capacity, as well as AI inference workloads that must be deployed at the edge to ensure low-latency proximity to business services (e.g., AI agents) and data sources. And since these services and data sources are found in many different places throughout the world, access to distributed AI infrastructure is essential.

In the AI era, the network has become the primary constraint. Models, data and compute are increasingly distributed, and without a network designed for dynamic, cross-cloud movement, even the best AI infrastructure will remain underutilized. Once businesses have the right multicloud network in place, they’ll be able to connect quickly with an ecosystem of different partners, making it much easier to source the models, tools and data they need to roll out their AI-powered applications.

Also, multicloud flexibility is essential because there are still so many unknowns with AI:

  • A certain LLM running in a certain cloud may be the right choice today, but how can you be sure it will still be right a year from now?
  • An enterprise might use a particular tool for LLM guardrails now, but new offerings are emerging all the time.
  • An AI observability system from a startup may be the best option available today, but it may be better to switch once incumbent players catch up.
  • As time goes on, businesses may need to change vector databases to further optimize their RAG data pipelines.

With dynamic multicloud networking capabilities, none of these issues are insurmountable. It doesn’t matter if your AI needs change, because you’ll have a multicloud environment that can change too.

What does dynamic, AI-ready multicloud networking look like?

The public internet is often the default choice for cloud connectivity because it’s familiar and easy to access. But it was never designed to support the requirements of distributed AI workloads.

AI workloads depend on predictable latency, consistent throughput, and reliable east-west traffic across clouds and regions. The internet, by design, cannot guarantee any of these. While these limitations may be tolerable for traditional, static applications, they become a hard constraint for distributed training, real-time inference and agentic AI workflows.

In addition, the internet can’t meet the data sovereignty and security requirements of AI workloads. As models and data move across clouds and regions, organizations must enforce strict, verifiable controls over where data is stored, processed and transmitted. Demonstrating compliance with jurisdiction-specific data boundaries is now mandatory. AI also broadens the security threat surface, enabling automated attacks and new risks like data poisoning and inference attacks. This demands stronger safeguards and continuous verification, which the internet simply can’t provide.

The challenge is clear: Distributed AI requires predictable, secure, high-performance connectivity across clouds and regions, but most multicloud environments still rely on best-effort networking with siloed policy controls. As long as enterprises depend on the public internet to move AI data and workloads between clouds, true integrated multicloud will remain out of reach.

Equinix is building for the future of multicloud networking

At Equinix, we’re working to make private connectivity solutions just as easy to use as the internet is today. This will help ensure that businesses don’t have to sacrifice performance, security and reliability for the sake of convenience.

With Equinix Fabric®, our virtual interconnection solution, enterprises get the foundation they need to establish on-demand private connections with partners and service providers, including many of the leading names in AI today. Equinix Fabric is fully programmable, with everything configurable via the console, APIs and SDKs, or IaC tools such as Terraform and Pulumi. It’s also available in 60+ global metros across six continents, so customers can ensure low-latency connectivity to their chosen partners, wherever those partners may be.
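To illustrate what “fully programmable” interconnection means in practice, here is a minimal Python sketch that assembles a request payload for an on-demand virtual connection. The field names, port identifier and service-profile value below are illustrative assumptions for this sketch, not the exact Equinix Fabric API schema; consult the official Fabric API documentation for the real endpoints and payload structure.

```python
# Sketch: building a request for an on-demand cross-cloud connection.
# All field names and identifiers here are hypothetical placeholders.

def build_connection_request(name, bandwidth_mbps, a_side_port, z_side_profile):
    """Assemble a JSON-ready payload for a hypothetical virtual connection:
    your colocation port on one side, a cloud provider's service profile
    on the other."""
    return {
        "name": name,
        "bandwidth": bandwidth_mbps,  # Mbps; resizable on demand as needs change
        "aSide": {"port": a_side_port},            # hypothetical port identifier
        "zSide": {"serviceProfile": z_side_profile},  # e.g., a cloud on-ramp
    }

payload = build_connection_request(
    name="ai-inference-uswest",
    bandwidth_mbps=1000,
    a_side_port="port-1234",
    z_side_profile="cloud-provider-profile",
)
# In a real workflow, this payload would be POSTed to the provider's
# connections endpoint (or expressed declaratively in Terraform/Pulumi).
```

Because the connection is just an API object, tearing it down or resizing its bandwidth is another API call, which is what makes shifting workloads between clouds an operational routine rather than a project.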

The Equinix networking portfolio also includes Equinix Fabric Cloud Router, a virtual routing solution that’s available as a built-in component of Equinix Fabric. It enables secure, efficient cloud-to-cloud connectivity, without the need to deploy physical routers. When traffic can move directly from one cloud to another without being routed through an on-premises data center, true multicloud applications become just as quick and easy to deploy as any other application, even for highly dynamic use cases like AI.
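The benefit of direct cloud-to-cloud routing can be sketched with a simple path-latency comparison: traffic that hairpins through an on-premises hub pays the latency of two legs, while a direct private interconnect pays only one. The per-hop latency figures below are made-up round-trip estimates for illustration, not measured values.

```python
# Illustrative comparison: cloud-to-cloud traffic hairpinned through an
# on-prem data center vs. routed directly via a virtual cloud router.
# Latency values (ms) are hypothetical, for illustration only.

HOP_LATENCY_MS = {
    ("cloud_a", "onprem"): 18.0,
    ("onprem", "cloud_b"): 22.0,
    ("cloud_a", "cloud_b_direct"): 9.0,  # private cross-cloud interconnect
}

def path_latency(hops):
    """Sum per-hop latencies along a network path."""
    return sum(HOP_LATENCY_MS[hop] for hop in hops)

hairpinned = path_latency([("cloud_a", "onprem"), ("onprem", "cloud_b")])
direct = path_latency([("cloud_a", "cloud_b_direct")])
```

Under these illustrative numbers, the hairpinned path costs 40 ms against 9 ms for the direct path; for chatty east-west AI traffic, that difference compounds on every round trip.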

Finally, Fabric Intelligence is set to become the newest addition to our networking portfolio in 2026. In addition to building the right networks for AI, Fabric Intelligence will help customers build their networks with AI. They’ll be able to automate their network orchestration by applying the power of agentic AI and MCP servers. As a result, they’ll get a network that’s optimized for dynamic, multicloud AI workloads, without the need for network engineers to manually implement changes.

With the right networking solutions in the right places, enterprise IT leaders are learning that multicloud networking isn’t just a box they have to check; it can become a true source of strategic advantage. To learn more about how businesses around the world are capturing this opportunity, read our research report on the global state of hybrid multicloud networking.


[1] 2025 State of the Cloud Report, Flexera, March 2025.

Roger Duclos Senior Director, Product
Igor Tarasenko Senior Director, Product Software Arch and Eng