Looking Back on a Year of AI Acceleration

Enterprise AI has evolved significantly in the past 12 months, and Equinix is prepared to meet our customers’ changing priorities

Jon Lin

TL;DR

  • Enterprise AI is shifting from model training to real-time inference, increasing demand for distributed infrastructure that delivers low latency and measurable impact.
  • Equinix and NVIDIA enable this shift through AI Factory solutions and the Distributed AI Hub, which connect data, models and partners across locations.
  • Customers are accelerating AI results, closing AI-driven deals and scaling experiments faster by using global interconnection to move AI from promise to production.

It’s hard to believe it’s already been more than a year since we announced Equinix AI Factory accelerated by NVIDIA at last year’s GTC. This fully managed, enterprise-grade AI infrastructure solution represented our bold vision for the future of AI, and the role that Equinix and NVIDIA would play in enabling that future together.

In the months since, our vision for the future of AI started to become reality. We saw growth in the AI market by every conceivable metric: more data, more models, more applications and more interconnected partners. But the AI market didn’t just grow: It also matured. Business leaders are no longer just exploring AI. They want to achieve measurable, real-world impact. We’ve seen first-hand the growing demand for AI infrastructure: In our Q4 2025 earnings report, we announced that we had closed more than 4,500 deals during the quarter, with approximately 60% of the largest deals driven by AI workloads.

By now, most businesses have had enough time to train the models they need or acquire them from partners. But the models alone aren’t enough. Businesses need low-latency, high-throughput inference to help them put those models to good use. And low latency starts with infrastructure at the edge to ensure proximity between distributed data sources and processing locations.

Equinix and NVIDIA help shift AI from promise to production

Today, enterprises are facing pressure to get AI right. Shareholders and boards are demanding ROI on a compressed timeline, and the decisions that leaders make today will determine whether they can meet that demand. As enterprises shift their focus from training models to performing inference in near-real time, their infrastructure requirements will shift accordingly.

The biggest challenge that enterprises face as they attempt to implement future-ready AI infrastructure is the fact that AI doesn’t live in one place. Models, data and workloads are distributed across different locations and environments, which in turn means that AI infrastructure must also be distributed.

Our joint customers are able to dream big with their AI strategies and leave the hard work of managing distributed AI infrastructure to us. Our AI Factory solution gives them everything they need to enable advanced AI use cases, including powerful processing hardware from NVIDIA and networking capabilities, edge presence and ecosystem access from Equinix.

Alembic Technologies is one customer that’s using NVIDIA hardware inside Equinix data centers to unlock the future of AI. The company, a marketing intelligence SaaS provider, built multiple AI Factories, including a liquid-cooled NVIDIA DGX GB200 NVL72 SuperPOD supported by scalable, reliable Equinix infrastructure. This allowed the company to pursue its AI goals with cloud-like speed and better control, predictability and governance. Watch the video below to learn more about the Alembic story.

With our combination of global reach, scalable networks and ecosystem density, Equinix is uniquely positioned to help our customers take AI from promise to production.

Global reach

With 280 colocation data centers spread across strategic metros worldwide, Equinix is well suited to support our customers as they deploy distributed AI infrastructure. We can also give customers the control to deploy infrastructure in specific locations to meet their data sovereignty requirements.

Scalable networks

AI workloads are placing increased demand on enterprise networks, and we’re working to address this issue for our customers. Starting in 2026, we’ll offer physical ports with up to 400 Gbps bandwidth and Equinix Fabric® virtual connections with up to 100 Gbps bandwidth. Equinix customers can use these high-bandwidth connections to move AI data traffic across their own distributed infrastructure or to their AI ecosystem partners.

Ecosystem density

Since the Equinix ecosystem includes thousands of enterprises and service providers, there’s a very good chance that our customers will be able to find their chosen AI partners already at Equinix. Of course, this includes leading names in AI hardware like NVIDIA, but also model providers, neoclouds and SaaS specialists.

The global auto parts manufacturer Continental is one customer that has benefited greatly from access to the Equinix ecosystem. The company deployed hardware from NVIDIA and IBM inside an Equinix data center to support the Advanced Driver Assistance Systems (ADAS) team in its goal of making vehicles safer.

"By placing our IBM storage and NVIDIA GPU cluster in an Equinix 'AI-ready' data center in just two weeks, we had the infrastructure and interconnection we needed to increase the number of AI experiments by 14x, speeding our time to market." - Robert Thiel, Principal Architect and Guild Master Computer Vision & Artificial Intelligence, Continental AG, Business Area Autonomous Mobility

Introducing the next step in AI infrastructure: Equinix Distributed AI Hub

At this year's GTC, we announced the next step in the evolution of AI infrastructure. The new Equinix Distributed AI™ Hub framework is about enabling AI infrastructure on a global scale, across many different locations and environments. It builds upon the foundation we created last year when we announced our AI Factory solution with NVIDIA.

The Equinix Distributed AI Hub functions as the connective tissue that holds your distributed AI infrastructure together. It provides a single convergence point for AI datasets, models and partners across multiple cloud environments and locations. This means that both public and private datasets and open and closed models can all come together in the same framework. In short, it represents a simpler, smarter and more connected way to run and scale AI workloads.

It’s supported by Equinix Fabric Intelligence, a suite of network automation solutions that help enterprises reduce manual effort and enable proactive network optimization. Fabric Intelligence orchestrates, automates, learns and enforces policies across all distributed data sources and endpoints.

Equinix and Palo Alto Networks address the unique security requirements of AI

In the AI era, traditional perimeter-based security models are no longer good enough. Security needs to be built into distributed AI infrastructure from the ground up, with the understanding that threats could arise anywhere, at any time.

This is one reason that the Equinix Distributed AI Hub is launching in partnership with Palo Alto Networks. The Prisma AIRS solution from Palo Alto Networks embeds AI-powered threat detection directly into our joint customers’ infrastructure deployments. It provides real-time threat detection, centralized policy enforcement and unified governance across hybrid, multicloud and edge environments.

Prisma AIRS is available as a local instance. It provides integrated security capabilities at the edge from the moment of deployment, instead of being bolted on after the fact.

Learn more about the solution and our partnership with Palo Alto Networks.

Solution Validation Centers help take the guesswork out of scaling AI infrastructure

By visiting an Equinix Solution Validation Center (SVC), our customers can test a full menu of our distributed AI infrastructure solutions. This gives them the confidence that comes from seeing their chosen architecture in action before they dedicate the resources needed to deploy and scale it.

Our SVCs are vendor-neutral by design, so customers can incorporate services from our full partner ecosystem in their AI proofs of concept. This includes NVIDIA as a potential hardware provider.

Learn more about Equinix Solution Validation Centers.

Equinix and partners are building the future of AI

One year ago, Equinix and NVIDIA announced an AI Factory solution intended to help our enterprise customers get faster, smarter, more distributed AI infrastructure. This solution helped ensure that our customers were prepared for all the growth and changes we’ve seen in the AI landscape since then.

Now, with the Equinix Distributed AI Hub, they’re ready for what’s next. With help from partners like Palo Alto Networks, our customers are ready to start performing inference at scale. They’re ready to handle massive amounts of data in a secure and compliant manner. They’re ready to connect with the right partners in the right places. And most of all, they’re ready to start building AI that drives real business results.

Read our solution brief to learn more about Equinix Distributed AI.

Jon Lin, Chief Business Officer