The Infrastructure Behind AI

Your IT Stack Wasn’t Built for Agentic AI: The Fix? An AI Agent Hub

Autonomous AI workflows are reshaping compute, connectivity, storage and development pipelines

Kaladhar Voruganti
Kevin Egan
TL;DR

  • Agentic AI software requires evolved IT stacks beyond traditional LAMP architecture, demanding specialized compute, orchestration & multi-agent workflows.
  • AI agent hubs at interconnection points enable secure, cost-effective management of distributed AI models, data sources & multi-provider environments.
  • Leveraging edge computing ensures real-time performance & minimizes latency for AI-driven agentic systems, even in globally distributed environments.

AI is rapidly transforming software development. Not long ago, humans wrote all code manually, but now, AI generates an increasing share—from basic coding assistants to agentic systems that can plan, execute, evaluate and iterate toward a goal. Traditional software is static and deterministic; it can only be updated via software release cycles. In contrast, agentic AI-powered software is autonomous and adaptive, responding to context, making probabilistic inferences, dynamically solving problems, preventing failures and optimizing infrastructure to reduce costs.

This evolution toward AI-powered software makes user interfaces (UIs) more flexible and user-friendly since they can accept natural input (speech, images, mixed media) and adapt to users. Software providers can give users more personalized experiences. And they no longer have to constantly redesign their UI and roll out updates. They can develop applications faster, reduce repetitive work and provide continuous data-driven updates.

But the shift toward AI-powered software is disrupting the traditional IT software stack. To meet the needs of agentic AI, the software stack and the underlying hardware supporting it must also evolve—and an AI agent hub can help. An AI agent hub is an environment where organizations can create, manage and monitor multiple AI agents, leveraging a multi-agent control plane and secure, low-latency connections for traffic flows across the AI stack.

How the IT software and hardware stack is changing

Software developed exclusively by humans typically uses the traditional LAMP stack (Linux, Apache, MySQL and PHP). Today’s agentic AI software relies on AI models that learn and evolve, which, in turn, creates new technology demands. Software development with AI often involves multiple agents, Model Context Protocol (MCP) servers and AI models from different providers, all working together in a distributed manner. The graphical UI, business logic, API layer and data storage layer of the stack are changing to support these requirements.

The way SaaS vendors deploy and manage AI hardware is also evolving: Workloads are more compute-intensive, requiring special compute accelerators like GPUs, LPUs and TPUs in clusters as well as infrastructure with special power and cooling requirements. Memory, storage and networking all need to be more powerful, and specialized orchestration layers need to manage behavior, reasoning and context.

Figure 1: Software stack evolution from human-written static code to data-driven agentic AI

What does the evolution of software development mean for businesses?

Supporting AI software stacks requires organizations to think carefully about IT architectures, governance and processes, and even company culture. It also has implications for business risk, costs, application performance and more.

Here are four of the most common concerns we hear from enterprise leaders about this new world of agentic AI software:

  • Security, privacy and model lineage: Can AI agents be trusted? How do we know they’re really doing what they’re supposed to do? How do we orchestrate multiple agents working together? What’s the lineage of our AI models and agents, including where they originated, what data was used for training them, what versions exist and who made changes to them?
  • Cost management: How can we address infrastructure costs for AI software development? The traditional stack setup had a fixed cost that scaled minimally as we added more users, but now that cost model is being upended. How can we afford to use inference models when the cost per token in the cloud increases with every added user?
  • Flexibility: The leading technologies and providers in the AI space can change quickly. How do we ensure we can easily switch between different AI models, AI providers, clouds, data providers and other service providers as our business needs evolve, without having to rearchitect our agentic AI designs? And how can we address costs, compliance, performance, and model and agent capabilities in the process?
  • Software performance: Increasingly, many machine-to-machine use cases require very fast performance across distributed environments. How can we make sure our network latency is minimal for the most latency-sensitive use cases?

These are all valid concerns for enterprises venturing into software powered by agentic AI. Fortunately, all four areas can be addressed with strategic architectural planning and infrastructure placement.

Designing the right infrastructure for the world of agentic AI

Let’s look at some of the ways organizations can plan ahead to support software development with agentic AI.

Addressing security and privacy with secure interconnection

The new AI application workflows are typically multi-provider and multi-agent in nature. Most organizations use more than one external data source, and large enterprises may use hundreds of internal and external data sources to improve the accuracy of their decision-making. External data sources may include:

  • Market and economic data (e.g., industry benchmarks, interest rates)
  • Customer behavior data (e.g., social media insights, online reviews, web traffic)
  • Geospatial and location data
  • Third-party risk and compliance data
  • Environmental, social and governance (ESG) data
  • Web data (e.g., competitor websites, news feeds, forums)

In the AI world, enterprise applications are consuming this external data in a more intelligent manner via MCP servers, agents and models. Organizations want to increase the variety of external data sources and maintain the flexibility to pivot between different providers based on cost, quality of insights and compliance. To provide secure, cost-effective, low-latency interactions between multiple distributed providers, you need a set of services (what we’re calling an AI agent hub, as shown in figure 2) that can manage the traffic between the agents, MCP servers and models. Clouds, AI PaaS vendors, security providers and global systems integrators (GSIs) offer services such as guardrails, agent registration, lineage analysis, policy management, agent monitoring and agent orchestration, which together form a secure control plane for agents to interact with each other. Many enterprises choose to deploy their AI agent hub at places like Equinix because of the rich AI ecosystems and dense interconnection to distributed SaaS and cloud providers.
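To make the control plane idea concrete, here is a minimal Python sketch of agent registration, policy enforcement and audit logging at a hub. All class names, tool names and behaviors are illustrative assumptions for this article, not an Equinix product or a real MCP API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    provider: str                            # which cloud or SaaS vendor supplied it
    allowed_tools: set = field(default_factory=set)

class AgentHub:
    """Toy control plane: registration, guardrail checks and call logging."""
    def __init__(self):
        self.registry = {}
        self.audit_log = []                  # lineage/monitoring: who called what

    def register(self, agent: Agent):
        self.registry[agent.name] = agent

    def call_tool(self, agent_name: str, tool: str, payload: dict):
        agent = self.registry.get(agent_name)
        if agent is None:
            raise PermissionError(f"unregistered agent: {agent_name}")
        if tool not in agent.allowed_tools:  # policy/guardrail check
            raise PermissionError(f"{agent_name} may not call {tool}")
        self.audit_log.append((agent_name, tool))
        # In a real hub, the request would now be routed to an MCP server.
        return {"tool": tool, "status": "routed"}

hub = AgentHub()
hub.register(Agent("support-bot", "provider-a", {"crm.lookup"}))
result = hub.call_tool("support-bot", "crm.lookup", {"customer": 42})
```

The point is not the code itself but the shape: every agent interaction passes through one place where registration, policy and lineage can be enforced.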

Figure 2: Multi-agent AI hub at an interconnection data center

Managing IT infrastructure costs with model and MCP gateways at an AI agent hub

It’s well known that running AI exclusively in the cloud can lead to skyrocketing costs once inference reaches millions of tokens. A hybrid IT infrastructure helps companies tackle the cost challenges of this new AI-powered software development model: to keep costs more predictable, run the base workload on private infrastructure and reserve the public cloud for bursting and for access to large, superintelligent models.
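A back-of-the-envelope comparison shows why the hybrid model keeps costs predictable. The prices below are purely illustrative placeholders; real per-token rates and cluster costs vary widely by provider and hardware:

```python
# Illustrative numbers only: actual per-token prices and cluster costs differ.
CLOUD_COST_PER_M_TOKENS = 10.00      # $ per million inference tokens in a public cloud
PRIVATE_CLUSTER_MONTHLY = 40_000.00  # $ amortized monthly cost of a private AI cluster

def hybrid_monthly_cost(total_m_tokens: float, burst_fraction: float) -> float:
    """Fixed private cluster serves the base load; only the burst
    fraction of tokens overflows to pay-per-token public cloud."""
    burst_tokens = total_m_tokens * burst_fraction
    return PRIVATE_CLUSTER_MONTHLY + burst_tokens * CLOUD_COST_PER_M_TOKENS

# 10,000 million (10B) tokens per month:
cloud_only = 10_000 * CLOUD_COST_PER_M_TOKENS            # every token billed in cloud
hybrid = hybrid_monthly_cost(10_000, burst_fraction=0.1)  # 10% bursts to cloud
```

Under these assumed numbers, the cloud-only bill scales linearly with tokens, while the hybrid bill is mostly a fixed cost plus a small burst charge, which is exactly the predictability argument above.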

At Equinix, businesses can design a distributed hybrid multicloud architecture for AI that offers them the flexibility to leverage public and private infrastructure effectively. For instance, you can create an AI hub at Equinix with a multi-agent control plane, AI model gateway and MCP server gateway to get more flexibility to dynamically choose between public and private models, and also between different public service providers.
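A model gateway of the kind described above can be reduced to a simple routing decision. The sketch below is a hypothetical illustration (the endpoints are placeholder URLs, and the capacity threshold is invented), not an actual gateway implementation:

```python
# Hypothetical model gateway: prefer the private cluster, keep sensitive
# data private, and burst to a public cloud only when saturated.
PRIVATE_CAPACITY = 100                   # concurrent requests the cluster handles

class ModelGateway:
    def __init__(self):
        self.in_flight = 0               # current load on the private cluster
        self.endpoints = {
            "private": "https://ai.example.internal/v1/chat",  # placeholder
            "public":  "https://cloud.example.com/v1/chat",    # placeholder
        }

    def route(self, sensitive: bool = False) -> str:
        """Sensitive requests always stay private; others burst when full."""
        if sensitive or self.in_flight < PRIVATE_CAPACITY:
            return self.endpoints["private"]
        return self.endpoints["public"]

gw = ModelGateway()
gw.in_flight = 150                       # simulate a saturated private cluster
private_url = gw.route(sensitive=True)   # compliance keeps this private
burst_url = gw.route(sensitive=False)    # overflow bursts to public cloud
```

Because callers only see the gateway, swapping the public endpoint for a different AI provider is a one-line configuration change rather than a rearchitecture, which is the flexibility argument made above.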

Staying flexible by storing data at an interconnection hub

A large percentage of AI projects currently fail to transition from proof of concept (PoC) to production due to data management challenges. AI is forcing enterprises to revisit their data platform strategy. Increasingly, enterprises are realizing that since their data flows from the edge to the clouds via an interconnection hub such as Equinix, it makes sense for them to store a copy of their data there, both for disaster recovery purposes and for increased AI model and services flexibility. In fact, roughly 95% of internet traffic (including data generated at the edge) flows through Equinix to its destination.[1] With this approach, they can use the copy of their data at a cloud-neutral, cloud-adjacent location to get more flexibility in accessing AI models and compute resources from multiple providers, without incurring cloud egress costs.

Ensuring optimal software performance

Multi-agent interaction latency can span from multiple milliseconds (machine-to-machine latency) to minutes (when interacting with agents that are doing reasoning). A lot of AI-generated software is for latency-sensitive edge use cases like fraud detection, health diagnostics, autonomous vehicles and retail customer experiences. For many such examples, a network that provides low latency and predictable performance is critical for application performance. You can build a secure, resilient, high-performance network for hybrid multicloud software development environments with Equinix Fabric® and other private connectivity solutions.

How to accelerate agentic AI-enabled software development

A large enterprise was implementing a customer support application. They created a proof of concept in a cloud. However, when they were ready to scale the deployment, the cost per token for leveraging an AI model in the cloud started getting too high. They quickly realized that they wanted two things: (a) flexibility with respect to choosing AI models from multiple providers and (b) to provision for the base workload on a private AI cluster for lower cost and privacy and then burst into the cloud during periods of high loads.

Thus, as shown in figure 3, the company deployed a hybrid multicloud solution including a private AI cluster that leveraged open AI models at Equinix. Since most of their data was being generated at edge locations, they stored it in a data hub at Equinix. This gave them flexibility to switch between private and public infrastructure and between different public clouds. This enterprise used open small language models on their private AI infrastructure as well as superintelligent models in the public clouds. They also deployed AI agent hub services at Equinix to get this multicloud flexibility.

This hybrid multicloud approach helped the company reduce costs and maintain flexibility to work with various cloud, SaaS and neocloud partners. Equinix Fabric enabled private, low-latency connectivity to a rich AI ecosystem of providers. As a result of the new AI architecture leveraging multiple AI agents, the enterprise got more flexibility in deploying its AI application with respect to cost, compliance, scale, functionality and privacy.

Figure 3: Equinix infrastructure for an enterprise customer engagement AI application

Prepare for the multi-agent, multi-provider future

With the rise of agentic AI, software is shifting from a monolithic model to a multi-agent model, and organizations need the right infrastructure to support it now and in the future. To be prepared, companies must design an infrastructure environment that offers the security, privacy, cost effectiveness, flexibility and performance that AI-powered software development requires. More than 10,000 enterprises, clouds, SaaS providers and NSPs interconnect at Equinix today, which makes Equinix the perfect place for enterprises to deploy a multi-agent AI hub.

Learn how you can connect with AI ecosystems and accelerate AI innovation with Equinix Distributed AI™.

 

[1] Diane Brady and Sharon Goldman, “What the CEO of the world’s largest data center company—with 273 locations in 36 countries—predicts will drive the business forward,” Fortune, November 29, 2025.

Kaladhar Voruganti VP and Senior Technologist
Kevin Egan Senior Director, Technical Solutions