TL;DR
- Agentic AI requires reliable, real-time data access across distributed enterprise environments to operate autonomously and avoid becoming a business risk.
- Enterprises need interconnected data foundations with standardized APIs, data catalogs and private connectivity to address data sprawl and fragmentation.
- Dynamic governance with least-privilege access, audit trails and human oversight ensures agents act safely within enterprise security protocols.
While agentic AI is still an emerging technology, it’s gaining traction quickly. We’re already seeing a wide variety of use cases, including:
- Software engineering tools like GitHub Copilot Coding Agent, which automates planning, coding and validation to accelerate development pipelines
- Business process automation solutions like Salesforce Agentforce to automate tasks and handle complex scenarios across sales, marketing and operations
- Customer support services like Klarna’s AI customer service assistant, resolving issues like refunds in under two minutes[1]
In fact, customer service agents are familiar to many consumers already. “By 2029, agentic AI will autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs, according to Gartner®.”[2] And this is only the beginning. There are many more agentic AI initiatives on the horizon, and enterprises need to prepare their data if they want to employ agents safely and effectively.
It’s well known that AI requires rich data, but agentic AI amplifies this need: data must also be contextualized. While data-related challenges are nothing new for enterprises, strong data management is more important than ever in the age of autonomous agents. Agentic AI will only be as effective, trustworthy and safe as the enterprise data foundation it operates on. Organizations urgently need to modernize how they manage, govern and interconnect data if they want agents to act in the business’s best interest.
Let’s explore three of the main challenges and what you can do about them.
1. Addressing data availability and continuity for autonomous operations
The challenge
AI agents need reliable, real-time access to data in order to operate autonomously. An agent that can’t reach the right data, at the right time, can become a business risk rather than an asset. But most organizations today face significant data sprawl, latency and sovereignty constraints that make it difficult to meet this need.
“Data sprawl” refers to the reality that enterprise data exists in many places, such as:
- On-premises environments
- Multiple public clouds and SaaS applications
- Core sites and edge locations
- Backup systems
As data is transferred between systems and locations for AI, network latency can become a significant problem. Agentic AI workflows often involve complex decision loops and require dynamic data access. Delays, outages and other network performance problems can interrupt AI workflows and hinder agent accuracy and responsiveness.
Further complicating matters, regulatory compliance requires some data to remain in specific geographic locations—what’s known as data sovereignty. However, that data still needs to connect to remote enterprise workloads to support agentic AI.
The solution
To address these data availability challenges, enterprises must build an interconnected data foundation and establish sources of truth near compute infrastructure. Data sprawl might be unavoidable for a modern enterprise, but you can mitigate some of the challenges of distributed data with robust network infrastructure.
Agentic AI requires highly interconnected IT infrastructure with low latency to effectively move data across complex environments including multiple clouds and regulated workloads. Deterministic performance is important here: The internet simply isn’t reliable enough for mission-critical agent tasks. And because at least some of the enterprise data used for agentic AI is also sensitive and proprietary information, it’s a good idea to leverage private connectivity solutions that offer better security and control.
2. Tackling data coherence and context for actionable intelligence
The challenge
For agentic AI, data location and availability are only part of the problem. Enterprises are also contending with the rapid growth of structured and unstructured data, which is accelerating data sprawl and fragmentation. As data proliferates across systems, clouds and formats, it’s increasingly difficult to connect the right information and present it with the context agents need.
Data fragmentation occurs when data is distributed across systems without coordination or consistency. When data is fragmented, even humans struggle to extract value from it, and unifying it for AI agents is harder still. Without context, an agent might misinterpret data, rely on stale information or take inappropriate actions. Contextualizing data means presenting it with all the information agents need to interpret it correctly and act on it safely.
The solution
To address these challenges, enterprises need to standardize how agents access data by creating a coherent access layer. With this approach, they can reduce bespoke interfaces and expose data through standardized APIs, services and query mechanisms, allowing all agents used by the organization to interact with distributed data in a predictable way.
Data catalogs support this model by adding essential context, such as data ownership, freshness, quality and source. Standardized access, in combination with contextualization, helps agents find the right data and act on it reliably and safely.
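To make this concrete, here is a minimal sketch of a catalog entry carrying the context described above—ownership, source, quality and freshness—plus a lookup that refuses stale data. All names, fields and thresholds are illustrative assumptions, not the schema of any specific catalog product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class CatalogEntry:
    """Hypothetical catalog record carrying the context agents need."""
    dataset: str
    owner: str            # accountable team or steward
    source: str           # system of record the data came from
    quality_score: float  # 0.0-1.0, as assessed by data stewards
    last_updated: datetime

    def is_fresh(self, max_age: timedelta) -> bool:
        return datetime.now(timezone.utc) - self.last_updated <= max_age

# Illustrative in-memory catalog; a real deployment would query a catalog service.
catalog = {
    "sales.orders": CatalogEntry(
        dataset="sales.orders",
        owner="revenue-ops",
        source="erp-primary",
        quality_score=0.97,
        last_updated=datetime.now(timezone.utc) - timedelta(minutes=5),
    ),
}

def resolve(dataset: str, max_age: timedelta) -> CatalogEntry:
    """Return a dataset's context, refusing data older than the caller tolerates."""
    entry = catalog[dataset]
    if not entry.is_fresh(max_age):
        raise LookupError(f"{dataset} is stale; refresh before agent use")
    return entry
```

The key design point is that the agent never touches raw data without first resolving its context; the freshness gate is one example of the predictable, standardized checks a coherent access layer can enforce.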
3. Mastering dynamic governance and control for safe autonomy
The challenge
Organizations also face a host of risks if agent actions aren’t fully governed, monitored and auditable. These include regulatory, security and operational risks, along with the broader exposure that uncontrolled agents create.
These challenges include:
- Data access controls designed around human users rather than autonomous systems
- Missing data lineage and traceability, making it difficult to track what informed an agent’s decisions
- Stale, outdated data, which can lead to inefficiencies and incorrect actions by agents
The solution
To govern data access for agents, enterprises need to employ dynamic access policies built on “least privilege” and “just in time” principles. Least-privilege access ensures agents can reach only the specific data needed to complete a task, and nothing more. “Just in time” access typically means granting permission dynamically only when an agent needs it and revoking it immediately afterward; with the evolution of AI tooling, it can also apply to connectivity paths. Specific enterprise workloads have long benefited from air-gapping as a security mechanism. With tooling like the Model Context Protocol (MCP) built into networking constructs, that same isolation can be applied to agent communication, strengthening security postures by permitting network connectivity (along with permission access) only for the lifespan of the agent itself. These policies minimize risk by enforcing boundaries around what data agents can use, and when.
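The least-privilege, just-in-time pattern can be sketched as a scoped grant that exists only for the lifespan of one task. This is a simplified in-memory illustration—a real system would delegate grants and revocation to the enterprise IAM platform—and all identifiers here are hypothetical:

```python
import time
from contextlib import contextmanager

# Hypothetical in-memory grant store: (agent_id, resource) -> expiry time.
# A production system would back this with the enterprise IAM platform.
_active_grants: dict[tuple[str, str], float] = {}

@contextmanager
def just_in_time_access(agent_id: str, resource: str, ttl_seconds: float):
    """Grant least-privilege access for the duration of one task, then revoke."""
    key = (agent_id, resource)
    _active_grants[key] = time.monotonic() + ttl_seconds
    try:
        yield
    finally:
        # Revoke immediately when the task ends, even on error.
        _active_grants.pop(key, None)

def has_access(agent_id: str, resource: str) -> bool:
    """Check whether an agent currently holds an unexpired grant."""
    expiry = _active_grants.get((agent_id, resource))
    return expiry is not None and time.monotonic() < expiry
```

Using a context manager mirrors the policy itself: the grant cannot outlive the task, because revocation happens automatically when the block exits, whether the agent succeeds or fails.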
Tracking data and agent lineage, including who created the data, who owns it, when it originated and how, is crucial to ensure the integrity of data across the data pipeline. As enterprises rely more on agentic AI, agents will create their own datasets that need to be checked against historical data. Mandatory audit trails are essential to ensure traceability and understand how agents are acting.
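One common way to make such an audit trail tamper-evident is hash chaining, where each entry includes the hash of the one before it. The sketch below illustrates that technique; it is an assumption-laden toy, not a description of any specific audit product:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log of agent data actions (illustrative only)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, agent_id: str, action: str, dataset: str) -> None:
        """Append one entry, chaining it to the previous entry's hash."""
        entry = {
            "agent": agent_id,
            "action": action,
            "dataset": dataset,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; any edited or reordered entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each entry commits to its predecessor, altering any record after the fact invalidates every later hash, which is what makes the trail useful for tracing what informed an agent’s decisions.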
Businesses should also maintain human oversight of agents; some actions shouldn’t be taken without a human in the loop. Red-team exercises can probe agent behavior and confirm that agents operate in accordance with enterprise security protocols.
To ensure agents are working with the freshest data, companies need disciplined governance and data lifecycle management. Reducing latency between data creation and consumption is important, as is tracking freshness explicitly.
The infrastructure agentic AI needs
Even if you’re not exploring agentic AI in your organization yet, good data management is an important foundation that paves the way for AI success in the future. In the end, it’s data readiness, not AI model readiness, that’s the real bottleneck for most businesses.
As agentic AI takes off, reliable, secure, low-latency private connectivity is becoming more critical than ever, and it can help companies mitigate risk. Agents need to integrate with data all across an enterprise’s architecture, in many locations and from many sources. Private connectivity solutions like Equinix Fabric® can provide critical paths for agent communication and data availability. The Equinix Fabric MCP server provides a standard interface for agents and models to interact with the Equinix Fabric API to enable multiagent communication in real time.
Zetaris wanted to build a modern data lakehouse for AI, enabling the implementation of agentic AI and other AI applications. They needed high-performance infrastructure to support it. Zetaris deployed infrastructure in Equinix, where they could easily scale, connect to rich AI ecosystems and access secure, private connectivity solutions. With the help of Equinix high-performance data centers, Zetaris is enabling enterprises to realize their agentic AI visions.
Read the case study. To learn more about distributed AI solutions at Equinix, download the distributed AI solution brief.
[2] Gartner Press Release, Gartner Predicts Agentic AI Will Autonomously Resolve 80% of Common Customer Service Issues Without Human Intervention by 2029, March 5, 2025.
Disclaimer:
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.