3 Use Cases for Deploying Distributed AI Infrastructure and Applications

How Equinix and NVIDIA are helping enterprises take AI to the next level

Doron Hendel

In our last blog on artificial intelligence (AI), titled “Equinix Speeds Distributed AI Infrastructure Applications with NVIDIA,” we highlighted how industry trends are pushing the hosting of AI stacks to the edge, closer to data sources. Compliance regulations that mandate keeping AI data processing and analysis within the country of origin are also providing further justification for placing and interconnecting AI stacks in multiple countries.

In this article, we’ll discuss three use cases that follow this model, where enterprises are leveraging NVIDIA AI on Platform Equinix® to implement AI solutions spanning development through deployment.

Developing and Deploying Distributed AI: Putting All the Puzzle Pieces Together

This white paper from IDC discusses AI trends at the edge and the core, as well as the compute, storage, and AI infrastructure software stack aspects.


Leveraging NVIDIA AI on Platform Equinix

According to a recent IDC white paper, Developing and Deploying Distributed AI: Putting All the Puzzle Pieces Together, “One fast-growing deployment scenario is developing AI at the core (cloud or datacenter) and deploying and refining the AI model at the edge or in a colocation center, then retrain at the core.”[1] IDC also makes the point that many colocation vendors provide a full AI Infrastructure as a Service (AI IaaS) at the edge, where the source data would not have to be moved outside the organization’s security domain into a public cloud. In addition, as enterprise AI applications are working off multiple data sources across multiple clouds, private data centers, data brokers and edge locations, most companies require an AI stack in a colocation data center that can serve as an interconnection hub and provide secure, high-speed, low-latency connectivity to these multiple data sources.

In line with these trends, enterprises on Platform Equinix are processing, analyzing and managing end-to-end AI data access and workflows using NVIDIA technology across core and edge locations. The following three use cases illustrate how enterprises can leverage NVIDIA AI Enterprise, NVIDIA DGX Foundry and NVIDIA Fleet Command solutions, which provide easy, secure management and deployment of AI on Platform Equinix, to advance their AI infrastructure and applications.

Use Case 1 – Retail Store

A large retail chain provider sends its in-store camera feeds and inventory management data to an Equinix location, where it leverages the NVIDIA DGX Foundry AI development infrastructure to build AI models for inventory management, employee shift management, shopper buying trends prediction and ads placement. Subsequently, the retailer moves its AI models to store locations to perform near real-time AI model inferencing using the NVIDIA Fleet Command cloud service (enabled by Equinix Metal™) at Equinix data centers in specific metros.

NVIDIA AI Enterprise software available at Equinix provides the AI tools and frameworks retailers use for image classification, such as TensorFlow. Additionally, the retailer wants to host the NVIDIA Base Command stack on DGX systems at an interconnection hub, such as Equinix, to get high-speed access to external datasets that are located in multiple clouds and data brokers. The retailer located its inference servers in different Equinix metro locations to reduce the amount of data that must be transferred to a central location, and for real-time inferencing at the edge—since Equinix data centers are located within 10 milliseconds (ms) round trip time (RTT) from the end devices at the retail store.
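The 10 ms RTT figure above matters because network round trip time adds directly to the latency a shopper-facing application sees. A rough sketch of that budget, using the article's 10 ms metro-edge RTT plus illustrative (assumed, not measured) inference and central-region numbers:

```python
# Hypothetical latency budget for in-store inference. The 10 ms metro-edge
# RTT comes from the article; the 25 ms model time and 80 ms central-region
# RTT are illustrative assumptions, not Equinix or NVIDIA measurements.
def total_latency_ms(network_rtt_ms: float, model_inference_ms: float) -> float:
    """End-to-end latency: one round trip to the inference server plus model time."""
    return network_rtt_ms + model_inference_ms

edge_total = total_latency_ms(network_rtt_ms=10.0, model_inference_ms=25.0)
central_total = total_latency_ms(network_rtt_ms=80.0, model_inference_ms=25.0)
print(f"edge: {edge_total} ms, central: {central_total} ms")
# -> edge: 35.0 ms, central: 105.0 ms
```

With the same model, placing inference in the same metro keeps the network share of the budget small, which is the rationale for distributing inference servers rather than backhauling camera feeds to one region.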

Use Case 2 – Video Surveillance for Buildings

A large real-estate management company wants to analyze video surveillance footage from its various properties for unauthorized behavior. Currently, many alerts are being generated based on detected motion, but the company would like to prioritize and reduce the number of alerts by having AI process them to identify various types of unauthorized behaviors, such as someone jumping a fence, tailgating, or walking in an unauthorized zone. The firm wants to use the NVIDIA Base Command platform to train AI models to detect anomalous behavior at a central location, and subsequently move the AI models to edge locations for AI model inferencing leveraging Fleet Command (enabled by Equinix Metal), close to where the data is generated.

This company has many sites in multiple metro locations, with hundreds of cameras per site. It wants to perform motion detection processing at each of these sites, but also wants to locate its AI inference stack at a single metro location to reduce the cost of hosting an AI stack at each site. The company wants to complete model inferencing close to the edge, in the same metro, to reduce the amount of data being transferred to a central location. Furthermore, for privacy and compliance reasons, the company wants to process the data in the region where it is generated.
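The triage flow described above—raw motion alerts filtered down to confident, prioritized unauthorized-behavior alerts—can be sketched as follows. The classifier here is a stub returning pre-scored alerts; in practice the scores would come from a video model served on the metro-edge inference stack, and the behavior labels and threshold are illustrative assumptions:

```python
# Illustrative sketch of AI-based alert triage: motion-triggered alerts are
# scored by a behavior classifier, and only confident unauthorized behaviors
# are escalated, highest confidence first. Labels and threshold are assumed.
from dataclasses import dataclass

@dataclass
class Alert:
    camera_id: str
    behavior: str   # e.g., "fence_jump", "tailgating", "benign_motion"
    score: float    # model confidence in [0.0, 1.0]

UNAUTHORIZED = {"fence_jump", "tailgating", "unauthorized_zone"}

def triage(alerts, threshold=0.8):
    """Keep only confident unauthorized-behavior alerts, sorted by score."""
    kept = [a for a in alerts if a.behavior in UNAUTHORIZED and a.score >= threshold]
    return sorted(kept, key=lambda a: a.score, reverse=True)

raw = [
    Alert("cam-01", "benign_motion", 0.99),  # motion, but authorized -> dropped
    Alert("cam-02", "fence_jump", 0.92),     # escalated
    Alert("cam-03", "tailgating", 0.55),     # below threshold -> dropped
]
print([a.camera_id for a in triage(raw)])  # -> ['cam-02']
```

This is the step that turns "many alerts based on detected motion" into a short, prioritized queue for security staff.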

“One fast-growing deployment scenario is developing AI at the core (cloud or datacenter) and deploying and refining the AI model at the edge or in a colocation center, then retrain at the core.” - IDC

Use Case 3 – Automotive Advanced Driver Assistance System (ADAS) Development

An ADAS development team at an autonomous car company requires an AI infrastructure for developing models for its connected vehicles. The amount of data generated by test vehicles is very large—between 20TB and 80TB per car per day (ADAS L2 and ADAS L3). A pool of autonomous vehicles generates these large datasets in a particular metro, and it is costly and time consuming to move these massive datasets to a central location.
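Some back-of-the-envelope arithmetic shows why moving these datasets is costly. Using the 20-80 TB/car/day figure above and an assumed (hypothetical) dedicated 10 Gbps link:

```python
# Transfer-time math for the 20-80 TB per car per day figure in the text.
# The 10 Gbps link speed is an assumption for illustration only.
def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move dataset_tb terabytes (decimal TB) over link_gbps."""
    bits = dataset_tb * 1e12 * 8          # TB -> bits
    seconds = bits / (link_gbps * 1e9)    # Gbps -> bits/second
    return seconds / 3600

hours = transfer_hours(80, 10)
print(f"{hours:.1f} hours")  # -> 17.8 hours for one L3 car's daily output
```

A single test car at the 80 TB/day end would saturate such a link for most of a day, before accounting for a whole fleet—hence the pattern of keeping training and hardware-in-the-loop testing in the metro where the data is generated.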

The company wants to use an NVIDIA DGX-based training cluster via DGX Foundry, in combination with NVIDIA Base Command, hosted in Equinix data centers in specific metros to develop its AI models. Additionally, the team wants to locate its hardware-in-the-loop testing equipment at an Equinix data center in the same metro to test AI models built in DGX Foundry. The team iterates this process to continuously improve the accuracy of its models.

For more information about deploying end-to-end AI infrastructures and applications with Equinix and NVIDIA technologies, read the IDC white paper, “Developing and Deploying Distributed AI: Putting All the Puzzle Pieces Together.”

[1] IDC White Paper, “Developing and Deploying Distributed AI: Putting All the Puzzle Pieces Together,” Doc #US48458321, Sponsored by: Core Scientific, Equinix, NetApp, NVIDIA, December 2021.
