How To Solve For…

To Get AI Right Tomorrow, Get Your Data Architecture Right Today

To use AI to its full potential, you must maintain custody over your data so that you can move it where you need it most

Glenn Dekhayser

Large language models (LLMs) have been available to support mainstream AI use cases for less than a year now, and we’ve already seen how much they’ve impacted our lives. In the first two months after its launch, more than 100 million users had tried ChatGPT, making it the fastest-growing consumer app ever at the time.[1] From the enterprise side, businesses have been scrambling to be the first to capture the competitive advantages that LLMs offer. Things seem likely to ramp up even faster now that OpenAI has announced an enterprise version of ChatGPT.[2]

LLMs and other AI applications may seem magical to the uninitiated, but in reality, the results you get depend on the quality and availability of the data you feed into those models. This means that how you build your data architecture is the single most important factor in determining the success or failure of your AI initiatives. In this way, LLMs are not so different from many other enterprise IT services that came before them.

Thus far, it’s been the enterprises with the most flexible, distributed data architectures that have been able to make the most of generative AI services, while those that depend on traditional, static data architectures have started to fall behind. Looking toward the future, we have every reason to believe this trend will continue. As LLMs go from a novelty to an ingrained part of day-to-day enterprise operations, the drawbacks of having the wrong data architecture will only continue to grow.

In this blog, I’ll describe what it means to get your data architecture right: building an authoritative data core that allows you to move data from the edge to the cloud and back, without ever having to give up control over that data.

Don’t lose custody of your data core

The most important concept for enterprises to understand when it comes to building their data architecture for AI is that of data custody. This means that you need a place within your data architecture where you can store your data while maintaining complete control and ownership over it. When you’re the custodian of your data, you’re free to:

  • Audit your data and the hardware on which it resides
  • Secure your data against a variety of threats
  • Recover your data in the aftermath of an outage or disaster
  • Analyze your data by feeding it into the appropriate tools
  • Utilize the financial model that best meets your needs—OPEX or CAPEX

There are many valid reasons you might want to use cloud services as part of your data architecture, but the risk is that you may lose custody of your data if you aren’t careful about how you access those services. If you build your data architecture directly in the cloud, you’ll face onerous data egress fees that place artificial constraints on your business. These fees turn what should be a quick, simple business decision (such as leaving one cloud provider for another that offers better services) into a complex, time-consuming and expensive endeavor.
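To see how quickly egress fees add up, consider a rough back-of-the-envelope calculation. The $0.09/GB rate below is illustrative only; actual pricing varies by provider, volume tier and region:

```python
def egress_cost_usd(data_tb: float, rate_per_gb: float = 0.09) -> float:
    """Estimate the one-time cost of moving a data set out of a cloud.

    rate_per_gb is an illustrative list price; real rates vary by
    provider, tier and region.
    """
    return data_tb * 1024 * rate_per_gb

# Pulling a 500 TB data set out at an illustrative $0.09/GB
# costs on the order of $46,000, before any transfer-time or
# re-engineering costs are counted.
print(f"${egress_cost_usd(500):,.0f}")
```

A fee of that size, paid every time data leaves the provider, is exactly the kind of artificial constraint that makes switching providers a budget decision rather than a technical one.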

In a previous blog post, I described what an authoritative data core is, and why it’s so important for enterprises to maintain their data cores outside the cloud. An authoritative data core is not a specific location, but rather a logical layer at the center of your data architecture. You should be able to aggregate data into your core from a variety of sources at the digital edge. Then, you should be able to move that data wherever it needs to go to support different use cases—either upstream to multiple cloud providers or downstream back to edge locations.

In that previous blog, I also described four data motion patterns that allow enterprises to take advantage of cloud services on demand while also minimizing the impact of data egress fees.

Build a data architecture that accounts for the unique requirements of distributed AI workloads

In the case of AI, different workloads have different requirements, and should therefore be hosted in different locations. This is why you need distributed digital infrastructure that can quickly move your data from the edge to the cloud and back.

For instance, you may choose to do model training in the cloud to take advantage of LLM services from a particular provider. However, your data engineering and tuning workloads may include sensitive data that you don’t want to expose to the cloud. You may choose to leave these workloads in the core and use private compute resources or a Bare Metal as a Service offering to process them. Finally, once models have been trained, you’ll want to move them back to the edge to support latency-sensitive inference workloads.
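The placement logic described above can be sketched as a simple policy function. This is a toy illustration, not a real orchestration API; the workload attributes and location names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive_data: bool    # data we don't want to expose to a public cloud
    latency_critical: bool  # must run close to end users or devices

def place(w: Workload) -> str:
    """Illustrative placement policy for distributed AI workloads."""
    if w.latency_critical:
        return "edge"   # e.g., inference near users and devices
    if w.sensitive_data:
        return "core"   # e.g., data engineering/tuning on private or bare metal compute
    return "cloud"      # e.g., model training using a provider's LLM services

print(place(Workload("model-training", sensitive_data=False, latency_critical=False)))  # cloud
print(place(Workload("data-tuning", sensitive_data=True, latency_critical=False)))      # core
print(place(Workload("inference", sensitive_data=False, latency_critical=True)))        # edge
```

In practice the decision would weigh many more factors (cost, compliance regimes, GPU availability), but the core idea holds: each workload class has a natural home, and the architecture must let data follow the workload.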

The ideal data architecture for AI, or any other advanced data-driven use case, pairs scalable storage solutions with agile, programmatic interconnection capabilities, as shown in the diagram below. This allows you to create virtual connections to other clouds or new edge locations as the need arises, so data can quickly move wherever it needs to go to support the different workloads mentioned above. Since you maintain your core data copies outside the cloud, you don’t have to worry about high egress fees holding your data hostage. When you’re ready to move on from a particular cloud, you can simply delete the data copy from that cloud and start over with a new copy in a different cloud.
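The delete-and-recopy pattern amounts to treating cloud copies as disposable replicas of an authoritative core copy. A minimal sketch of that state model (all class and cloud names are hypothetical, not a real product API):

```python
class DataCore:
    """Toy model of an authoritative data core with disposable cloud replicas."""

    def __init__(self) -> None:
        self.replicas: set[str] = set()  # clouds currently holding a copy

    def replicate_to(self, cloud: str) -> None:
        # Hydrate a fresh copy into a cloud over a virtual interconnection.
        self.replicas.add(cloud)

    def leave(self, cloud: str) -> None:
        # Delete the cloud copy; the authoritative core copy is untouched,
        # so no egress fee is paid to "bring the data back".
        self.replicas.discard(cloud)

core = DataCore()
core.replicate_to("cloud-a")
core.leave("cloud-a")          # switch providers without pulling data out
core.replicate_to("cloud-b")
print(core.replicas)           # {'cloud-b'}
```

The key property is that leaving a cloud is a delete, not a transfer: because the core holds the authoritative copy, the expensive egress path is never exercised.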

It’s important to note that the authoritative data core is not a recommendation or a best practice; it’s simply the direction in which the industry is heading. All enterprises will eventually build this data architecture, either intentionally or unintentionally. The only difference is that the ones that act with intent now will get to enjoy the benefits of optionality and flexibility much sooner. This means they’ll have the advantage that comes with being able to use the best and latest services—such as those that power AI applications—before the competition does.

How can Equinix help?

An authoritative data core for AI is not something you can buy from a vendor or even quickly assemble yourself. It’s a target you set for your organization, so that you can make future decisions to bring yourself incrementally closer to reaching that target.

Choosing the right digital infrastructure partner is a critical step toward achieving this goal. Only Equinix can offer access to all major cloud providers from many different metros around the world, ensuring the low-latency connectivity needed for multicloud-adjacent storage. In addition, Equinix digital services can serve as key components of your data architecture, providing software-defined interconnection capabilities, multicloud networking, and on-demand single-tenant bare metal compute capacity.

In short, Platform Equinix® provides the ideal foundation on which to start building your authoritative data core, thus ensuring custody over your data and putting your organization in a position to use AI technology to its full potential.

To learn more about how leading organizations are building distributed, interconnected digital infrastructure to maximize their competitive advantage today and future-proof their operations, read the Leaders’ Guide to Digital Infrastructure.


[1] Dan Milmo, ChatGPT reaches 100 million users two months after launch, The Guardian, February 2, 2023.

[2] Rachel Metz, OpenAI Unveils ChatGPT for Businesses, Stepping Up Revenue Push, Bloomberg, August 28, 2023.
