Guide to Storage for AI – Part 2: Where to Put Your AI Data

To get the best performance and optimize storage costs, consider the benefits of cloud adjacent storage

Kaladhar Voruganti
Gabriel Chapman
As organizations develop their AI strategies and design new AI solutions, figuring out where and how to store data for AI is crucial. Data storage decisions influence the performance, availability, cost and security of data. And the success of an AI project can depend on having the right data in the right place when you need it.

In part 1 of our guide to storage for AI, we covered the different types of storage needed across an AI workflow. In this blog post, we’ll discuss some best practices for choosing where to store your data for AI.

5 best practices for determining where to store your data for AI

1. Put AI storage in a location with excellent network connectivity.

When companies think about storage for AI, network connectivity is sometimes an afterthought. Instead, they’re focused on finding the lowest-cost hosting option. However, for AI inference, network connectivity is crucial, and the cheapest hosting option may not have the connectivity needed for AI solutions. AI inference requires high-speed, secure access to data wherever it resides—across multiple public clouds, at the edge, with data brokers, at partner locations and so forth.

It’s important to have storage and your AI stack in a location with high network service provider density and proximity to public clouds—or what we refer to as cloud adjacency. That’s because you may need to connect to numerous partners in your business ecosystem. This access also needs to support high-bandwidth, low-latency connectivity to ensure that performance requirements are met.

2. Process data close to where it’s generated.

There are several reasons to process data close to where it’s generated:

  • Reduce costs by minimizing data backhaul
  • Improve performance due to lower latency
  • Ensure data privacy and compliance—meeting country-specific data residency requirements

If data is generated outside the cloud, do data processing and AI model inference outside the cloud. If it’s generated in the cloud, process it in the cloud.
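
To see why this matters, here’s a back-of-the-envelope sketch of monthly transfer costs for a single edge site. The data volume and per-gigabyte rate are illustrative assumptions, not quoted prices:

```python
# Back-of-the-envelope estimate: daily data generated at an edge site, either
# backhauled in full for central processing or processed locally with only
# results shipped onward. All figures are illustrative assumptions.

daily_data_gb = 2_000            # assumed ~2 TB/day of raw edge data
results_gb_per_day = 5           # assumed size of inference results/summaries
transfer_cost_per_gb = 0.02      # assumed WAN/backhaul cost in $/GB

backhaul_everything = daily_data_gb * 30 * transfer_cost_per_gb
process_at_edge = results_gb_per_day * 30 * transfer_cost_per_gb

print(f"Backhaul all raw data:        ${backhaul_everything:,.0f}/month")
print(f"Process at the edge instead:  ${process_at_edge:,.2f}/month")
```

The exact numbers will vary by site, but the pattern holds: moving raw data costs far more than moving results.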

In the past, companies often moved data generated at the edge to the cloud for processing because they valued the simplicity, scale and OPEX financial model. But today, it’s possible to use private infrastructure as a managed service, both in the metro where the data is generated and at a neutral location close to multiple cloud provider data centers. This approach supports a multicloud posture, security and control over your data, and the agility to pivot from one provider to another without needing to move your data.

In particular, we recommend putting AI inference infrastructure near where your data is generated.

3. Align your AI storage strategy to leverage AI models from multiple clouds or providers.

There’s currently an arms race among cloud providers over who has the best AI foundation models. Companies typically need models from multiple providers for different tasks, so they need the flexibility to pivot between providers, and between public AI (where data is uploaded into the cloud hosting the model) and private AI (where the model is brought to where the confidential data resides), without having to move data or incur data egress charges.
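
As a concrete illustration of that flexibility, the sketch below uses an OpenAI-compatible Python client to send the same task either to a publicly hosted model or to a privately hosted model sitting next to the data. The endpoint URLs, model name and helper function are hypothetical; the point is that when the data stays put, switching providers is a configuration change rather than a data migration:

```python
from openai import OpenAI  # assumes an OpenAI-compatible API; endpoints below are hypothetical

# Public AI: the request (and your data) goes to wherever the provider hosts the model.
public_client = OpenAI(api_key="PUBLIC_PROVIDER_KEY",
                       base_url="https://api.public-ai-provider.example/v1")

# Private AI: same client interface, pointed at a model served next to your
# confidential data on infrastructure you control.
private_client = OpenAI(api_key="not-used-locally",
                        base_url="http://inference.internal.example:8000/v1")

def summarize(client: OpenAI, model: str, document: str) -> str:
    """Send the same task to whichever provider fits the job."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Summarize:\n{document}"}],
    )
    return resp.choices[0].message.content

# Pivoting is a one-line change when the data itself never has to move.
summary = summarize(private_client, "example-private-model", "…confidential report…")
```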

No single cloud will ever meet all your AI needs, so every AI strategy should account for the use of multiple clouds. A multicloud posture gives you the agility to leverage emerging AI services without the burden of costly data migration, and storing data in cloud adjacent locations makes this possible.

4. Look for a predictable cost model.

Cloud storage can come with hidden costs that catch companies by surprise later. For example, with cloud object storage, you pay for reads and writes, as well as for data egress if you move data out of the cloud. Data services in public clouds also incur charges when data moves between regions. Storing data at a neutral, cloud adjacent location helps organizations move from a variable storage cost model to a fixed one, because they no longer pay variable egress or per-request fees. Depending on the use case, these variable costs can dominate the total cost of storage ownership.

Storage solutions with more predictable cost models reduce this uncertainty and make planning easier.
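
To make the difference concrete, here’s a rough monthly comparison between variable object-storage pricing and a flat rate for dedicated, cloud adjacent storage. Every price below is an assumption chosen for illustration, not a quote from any provider:

```python
# Rough monthly cost comparison; all prices are illustrative assumptions.

stored_tb = 500                   # capacity under management
get_requests_millions = 200       # read requests per month, in millions
egress_tb = 150                   # data pulled out to GPUs, partners, other clouds

cloud_capacity_per_tb = 21.0      # assumed $/TB-month for standard object storage
cloud_per_million_gets = 0.40     # assumed $ per million read requests
cloud_egress_per_tb = 90.0        # assumed $/TB of egress / inter-region transfer

cloud_variable_bill = (stored_tb * cloud_capacity_per_tb
                       + get_requests_millions * cloud_per_million_gets
                       + egress_tb * cloud_egress_per_tb)

dedicated_flat_bill = 15_000.0    # assumed all-in rate for managed dedicated storage

print(f"Public cloud (variable):           ${cloud_variable_bill:,.0f}/month")
print(f"Cloud adjacent dedicated (fixed):  ${dedicated_flat_bill:,.0f}/month")
# In this scenario, egress alone (~$13,500) exceeds the capacity charge (~$10,500),
# and the bill swings with usage rather than staying flat.
```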

5. Use storage devices that support multiple storage types and communication protocols.

As we established in part 1 of this guide, you need different types of storage for different phases of the AI pipeline, including both capacity-optimized storage and performance-optimized storage. AI clusters access data from file storage systems, whereas raw data, vector databases and older checkpoints are typically stored in object storage systems. Based on performance and scale requirements, you would typically choose between SSD- and HDD-based physical storage media.
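
For example, a training job might stage raw objects from an S3-compatible capacity tier onto a fast shared file system, then archive checkpoints back to object storage. The sketch below assumes boto3 against an S3-compatible endpoint; the endpoint, bucket names and mount paths are hypothetical:

```python
import boto3  # assumes an S3-compatible object store; names and paths below are hypothetical

# Capacity tier: raw data, older checkpoints and vector-DB snapshots live in object storage.
s3 = boto3.client("s3", endpoint_url="https://objects.example.internal")
s3.download_file("raw-datasets", "corpus/shard-0001.parquet",
                 "/mnt/scratch/shard-0001.parquet")

# Performance tier: the training cluster reads curated data from a shared file system
# (e.g., an NFS or parallel file system mount) over the high-speed fabric.
with open("/mnt/training-fs/curated/shard-0001.parquet", "rb") as f:
    batch_bytes = f.read()

# Checkpoints are written to the fast file tier first, then aged out to object storage.
s3.upload_file("/mnt/training-fs/checkpoints/step_010000.pt",
               "checkpoints-archive", "run42/step_010000.pt")
```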

Similarly, for both AI training and inference clusters, you can choose between InfiniBand-connected storage systems, high-speed Ethernet-connected storage systems or a combination of the two. Instead of procuring storage from multiple vendors for these different workloads, organizations increasingly prefer a single storage solution that supports all of the above requirements, reducing operational costs and simplifying the end-to-end storage architecture.

The advantages of an authoritative data core at Equinix

For end-to-end AI solutions, we recommend an approach that keeps an authoritative data core on infrastructure that you control at a network interconnection hub that is also connected via low-latency, high-bandwidth private connectivity to multiple clouds and GPU service providers (cloud adjacent).

[Figure: Authoritative data core at Equinix]

Why should you store data for AI at Equinix?

  • Security and control: By deploying at Equinix, you get dedicated infrastructure that protects your intellectual property, is fully auditable and is fully under your control. No one else can access your private cage.
  • Cloud adjacency and neutrality: You can keep your storage infrastructure at Equinix and still access it from compute services running in the clouds (a hybrid cloud architecture). Equinix data centers are connected via high-speed, low-latency, private, secure networks to all the major cloud service providers, so you can maintain a multicloud AI posture.
  • Predictable costs: Dedicated storage infrastructure at Equinix helps you avoid the variable data request and data egress fees from public clouds that drive up the cost of storage ownership. Furthermore, by processing data at the edge (where it’s generated), you minimize data backhaul costs from your remote locations. Equinix has data centers in 70+ metros, which makes it practical to process data close to where it’s generated.
  • Flexibility and agility: Keeping an authoritative data core at Equinix gives you the flexibility to try new AI services without having to repatriate that data from the cloud or move it between clouds and cloud services.
  • Storage as a managed service: Leading vendors offer storage as a managed service at Equinix. Thus, organizations don’t need to manage their own physical storage infrastructure.
  • Global connectivity: Equinix offers private, high-speed connectivity across globally distributed locations, making it easy to ingress data from multiple locations and move it quickly to your AI infrastructure.
  • Data governance: Regulations around data management and AI are increasing around the world. With private infrastructure, you have greater control of where data is held. Equinix has data centers in 30+ countries, allowing you to adhere to data residency regulations in your region.

A cloud adjacent storage solution helps you future-proof your AI architecture. As organizations figure out their AI strategy (e.g., public or private AI, or which public cloud provider to use), keeping their data at Equinix is a no-regrets move. Learn more about the economic benefits of cloud adjacent storage by downloading the Enterprise Strategy Group report.

Kaladhar Voruganti, Senior Business Technologist
Gabriel Chapman, Former Director, Storage Solutions