As the age of AI progresses, agentic AI is maturing quickly. According to Gartner®, agentic AI sits at the top of the list of the Top Technology Trends for 2025.[1] AI agents are rapidly enhancing the capabilities and efficiency of AI inference, and we’re seeing more use cases for them across business domains, from HR to marketing to finance to IT. In IT, for instance, agentic AI makes the enterprise application software stack more autonomous and flexible, as multiple agents from different providers collaborate and share information.
There are three major categories of AI agents, according to Accenture[2]:
- Utility agents: Automate basic tasks, gather and sort unstructured enterprise data. These agents are usually provided by SaaS vendors.
- Super agents: Understand user intention and goals, string together utility agents to accomplish a task. These agents can be both industry agnostic and industry vertical specific (they’re either open source or provided by solution providers like global systems integrators, or GSIs).
- Orchestrator agents: Assign tasks to super agents (or can be called by them) based on business criteria like performance, availability, quality of response and cost. Numerous agentic AI frameworks available in the industry can be used to implement orchestrator agents.
Deploying the right type of agent at the right time helps maintain harmony and efficiency across complex agentic AI workflows.
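The division of labor among the three agent types can be sketched in a few lines of Python. This is a minimal illustration only; all class and function names here are hypothetical, and real agentic AI frameworks provide far richer abstractions:

```python
# Minimal sketch of the three agent tiers described above.
# All names are hypothetical, for illustration only.

class UtilityAgent:
    """Automates one basic task, e.g., pulling data from one system."""
    def __init__(self, name, fetch_fn):
        self.name = name
        self.fetch_fn = fetch_fn

    def run(self, query):
        return self.fetch_fn(query)

class SuperAgent:
    """Interprets user intent and strings utility agents together."""
    def __init__(self, utility_agents):
        self.utility_agents = utility_agents

    def handle(self, query):
        # Gather partial results from every relevant utility agent.
        return {a.name: a.run(query) for a in self.utility_agents}

class OrchestratorAgent:
    """Assigns each task to a super agent based on business criteria."""
    def __init__(self, super_agents):
        self.super_agents = super_agents  # e.g., {"netops": SuperAgent(...)}

    def dispatch(self, domain, query):
        return self.super_agents[domain].handle(query)

# Usage: a NetOps super agent built from two utility agents
inventory = UtilityAgent("inventory", lambda q: f"devices matching '{q}'")
tickets = UtilityAgent("tickets", lambda q: f"open tickets for '{q}'")
netops = SuperAgent([inventory, tickets])
orchestrator = OrchestratorAgent({"netops": netops})
answer = orchestrator.dispatch("netops", "core-router-7")
```

The key design point is the layering: utility agents know one system each, the super agent knows how to combine them, and the orchestrator knows which super agent owns which domain.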
Increasingly, companies are using multiple large language models (LLMs) from multiple providers because of costs, performance, accuracy, flexibility, availability and privacy. Deploying an agentic AI solution requires access to all these AI models and enterprise data sources, which could be in the cloud(s) or on private infrastructure. Finding a way to address these requirements is a must if enterprises are to meaningfully use agentic AI to solve business problems.
Using agentic AI to improve network operations
Let’s look at an example from IT, in which a network operations (NetOps) team wants to use agentic AI to streamline network management. Keeping enterprise networks operating smoothly is crucial to business operations. NetOps teams are responsible for overseeing network performance, managing potential disruptions, updating network resources, and identifying and mitigating issues. This requires them to access data from numerous sources. Collecting and organizing all that information can take a significant amount of time.
With agentic AI, the NetOps team can find answers and mitigate problems much more quickly. When there’s a problem in the network, a network engineer can submit a query using natural language, and the AI solution can quickly aggregate the data from multiple systems of record and engagement to come up with a potential solution. Similar AI solutions could be used in other business domains where support engineers need to access data spread across multiple systems to address customer challenges.
First, the team must think through the requirements of their AI initiative. Then, they need to design the right architecture to support it. To take the project into production, it’s important that they demonstrate its ROI, showing exactly how it will improve network operations and benefit the business.
Key requirements for an agentic AI inference solution
Before deploying an agentic AI solution, you need to consider your business and technical requirements. Functional requirements are domain specific and define what the AI solution should do, while non-functional requirements define how it should do it. The following table captures some of the fundamental non-functional requirements that need to be evaluated. The third column shows the specific requirements for our NetOps example.
Figure 1: Considerations for an agentic AI solution
Once you’ve asked the right questions and understand the requirements for your AI project, then you can think about the architecture that will support it. The NetOps example focuses on a multicloud architecture. In addition, the team isn’t creating new AI models from scratch. Instead, they’re leveraging existing models and then using retrieval-augmented generation (RAG) inference to optimize the model and enhance its accuracy for their business needs.
The best architecture model to support agentic AI
To address the requirements of an agentic AI solution in a multicloud, multi-model AI scenario, organizations need a hybrid multicloud AI architecture. You can’t do AI exclusively in the public cloud or exclusively on-premises because the required data and AI models aren’t available from a single provider.
Hybrid multicloud AI enables you to access data from numerous sources, in numerous locations, and employ multiple AI models. This is the future of enterprise AI, where organizations take advantage of public and private data, public and private AI models, and public and private infrastructure.
In the NetOps example, engineers need to access numerous data sources to address a network problem. They have utility agents associated with each of their systems of record and engagement. A super agent strings these utility agents together to address network support requests. Finally, an intelligent router (an orchestrator agent) routes each query to the appropriate AI model based on data privacy, the size of the query context, the availability of a given cloud, the accuracy of results and cost. The intelligent router’s logic is based on a trained classifier that weighs the criteria in figure 1 to select the best endpoint: sending the query to an AI model, to a SaaS utility agent or directly to the end network device. In our example, the orchestrator agent accesses the SaaS utility agents and the AI models in the cloud using Equinix Fabric®, a private high-speed network with predictable performance.
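To make the router’s decision inputs concrete, here is a hedged sketch of endpoint selection. The article describes a trained classifier; this transparent rule-based stand-in uses the same criteria (privacy, context size, availability, accuracy, cost), with all names and figures hypothetical:

```python
# Rule-based stand-in for the intelligent router's trained classifier.
# Endpoint names, scores and costs are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    private: bool            # runs on private infrastructure?
    max_context_tokens: int  # largest query context it can accept
    available: bool          # current health/availability
    cost_per_1k_tokens: float
    accuracy_score: float    # 0..1, from offline evaluation

def route(query_tokens, data_is_confidential, endpoints):
    # Hard constraints: availability, context size and data privacy.
    candidates = [
        e for e in endpoints
        if e.available
        and e.max_context_tokens >= query_tokens
        and (e.private or not data_is_confidential)
    ]
    if not candidates:
        raise RuntimeError("no endpoint satisfies the constraints")
    # Soft preference: highest accuracy, breaking ties on lower cost.
    return max(candidates,
               key=lambda e: (e.accuracy_score, -e.cost_per_1k_tokens))

endpoints = [
    Endpoint("private-llm", True, 8_000, True, 0.0, 0.82),
    Endpoint("cloud-llm", False, 128_000, True, 0.6, 0.91),
]

# A confidential query must stay on private infrastructure...
confidential_choice = route(4_000, True, endpoints)
# ...while a large, nonconfidential context can go to the cloud.
large_choice = route(50_000, False, endpoints)
```

A production router would replace the hand-written rules with the trained classifier, but the inputs and the notion of hard constraints versus soft preferences carry over.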
Figure 2 shows the NetOps team’s agentic AI architecture:
Figure 2: Agentic AI architecture with an orchestrator agent routing queries
A hybrid architecture like this is essential for AI for several reasons:
- Flexibility: Enterprises can keep their data assets or vector database in a cloud-neutral colocation data center, which gives them the flexibility to leverage compute and AI models from multiple cloud providers. In the NetOps example, there’s also an AI model running in private infrastructure.
- Data privacy: For the datasets an organization wants to keep private for intellectual property protection reasons, using private AI infrastructure supports data confidentiality.
- Compute power: The size of a company’s private AI infrastructure might be limited, so for nonconfidential queries with large context that require a lot of GPU compute resources, they can use LLMs in the cloud.
- Cost: Backhauling queries and associated data to a central cloud can be expensive, so it’s beneficial to process queries at the edge, where the data and queries originate.
- Availability: If an organization’s primary AI inference deployment is on private infrastructure, they can have a backup in a public cloud that only gets activated during failures.
- Resource bursting: A company might want to provision base infrastructure in a private location but burst for extra capacity (e.g., for peak periods for a couple of months) into the clouds.
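The availability pattern above can be sketched simply: primary inference runs on private infrastructure, and a public-cloud backup is invoked only when the primary fails. The function names and call signature below are hypothetical:

```python
# Sketch of private-primary / cloud-backup failover for AI inference.
# Both endpoints are hypothetical stand-ins for real model APIs.

def infer_with_failover(query, primary, backup):
    try:
        return primary(query)
    except Exception:
        # Primary infrastructure is unreachable; activate the backup.
        return backup(query)

def private_llm(query):
    # Simulate an outage on the private cluster.
    raise ConnectionError("private cluster unreachable")

def cloud_llm(query):
    return f"cloud answer for: {query}"

result = infer_with_failover("why is core-router-7 flapping?",
                             private_llm, cloud_llm)
```

The same wrapper shape also covers resource bursting: instead of triggering on an exception, the router would send overflow traffic to the cloud endpoint whenever the private cluster is at capacity.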
Showing the ROI of your AI initiatives
Nearly every enterprise wants to use AI to solve business challenges. But the truth is that many AI solutions get stuck at the proof-of-concept (POC) level and never make it to production. Often, this is because solution builders can’t articulate the ROI of their AI application in a meaningful way. You need to be able to demonstrate how an agentic AI solution will benefit your company: Is it saving employees time? Is it reducing costs?
In the NetOps example, we can easily demonstrate how much time various tasks would take network engineers using manual processes versus the AI search:
Figure 3: Time spent on tasks by network engineers with and without agentic AI
The amount of time the networking team can save by using an AI agent and an intelligent router is significant. With this agentic AI solution, a network engineer can address network issues much more quickly to ensure network availability.
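The ROI arithmetic itself is straightforward. The numbers below are hypothetical placeholders; in practice you would substitute measurements like those captured in figure 3:

```python
# Illustrative ROI arithmetic with hypothetical inputs.
# Replace these with measured values for your own environment.

manual_minutes_per_incident = 90    # hypothetical: manual data gathering
agentic_minutes_per_incident = 15   # hypothetical: with agentic AI search
incidents_per_month = 40            # hypothetical incident volume
hourly_cost = 85.0                  # hypothetical loaded engineer rate, USD

minutes_saved = ((manual_minutes_per_incident
                  - agentic_minutes_per_incident)
                 * incidents_per_month)
hours_saved = minutes_saved / 60          # 3,000 min -> 50 h per month
monthly_savings = hours_saved * hourly_cost
```

Beyond the labor savings, faster resolution also shortens network downtime, which is often the larger (if harder to quantify) business benefit.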
Doing multi-model agentic AI on a vendor-neutral platform
Agentic AI is a promising technology for addressing wide-ranging business problems. To build the hybrid multicloud AI architecture to support it, you need to be on a vendor-neutral platform where you can securely connect with SaaS vendors, clouds, network providers and industry ecosystems at high speeds. To run AI inference at the edge, you need to be in data centers close to where your users are and where data is generated to reduce latency. Being in the right locations also facilitates regulatory compliance for AI solutions.
Equinix offers AI-ready data centers built to provide the power and cooling for large AI inference solutions. Our data centers are in 30+ countries around the world, with 10,000+ enterprises and service providers hosted in them. Equinix also has the highest number of cloud on-ramps in the industry, delivering low-latency, high-speed connectivity to all major clouds.
To learn more about the advantages of deploying your AI solution at Equinix, read about our high performance data center solution.
[1] Gartner, Gartner Top 10 Strategic Technology Trends for 2025, by Gene Alvarez, October 21, 2024.
Disclaimer:
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
[2] Accenture, Leveraging the hive mind: Harnessing the power of AI Agents, November 2, 2024.


