3 Trends Driving Artificial Intelligence Architectures

Deploying AI at scale requires data sharing + hybrid multicloud + compute power at the edge

Kaladhar Voruganti

There’s no doubt that business adoption of artificial intelligence (AI) is accelerating. Challenges remain, however, as companies look to deploy AI prototypes at scale. Whereas a proof of concept in a lab or public cloud may use only a few data sources, a typical AI/analytics application in production will use many external data sources. And with these data sources generating increasingly large datasets at the digital edge, important operational decisions need to be made: where compute power is placed, how data is curated and governed, how AI models are trained and improved, and who to partner with for data sharing.


To address these challenges, it’s helpful to look at three key trends driving AI architectures:

Trend 1: Data sharing is essential for AI model accuracy

AI algorithms are only as good as the data used to build them, and they usually need additional external data sources for more precision and contextual awareness. For example, an AI model built to predict the spread of COVID-19 in a densely populated city like Singapore won’t work well for a large, rural area in the U.S. Additional local data such as climate, demographics, testing status and healthcare system capacity must be applied to the AI model for it to provide more accurate predictions. Data sharing between organizations is essential for this to work well, but it can be challenging due to data governance and privacy concerns. Confidentiality requirements vary depending on what data is shared, and this is leading to different types of data sharing models, such as the three examples described below.

Bring data to compute: This model, currently the most common form of data sharing, is typically used for non-sensitive data. In this model, data providers send their data to a public data marketplace in the cloud for sharing with data consumers. Some enterprises employ a hybrid architecture where they store their data in a cloud-neutral location like Platform Equinix® and move it into the appropriate cloud on demand for relevant AI processing or data sharing with partners.

Bring compute to data: For more sensitive data such as patient or transaction information, enterprises are hesitant to let the raw data ever leave their premises. For example, hospitals want to share information with each other to build more accurate AI models, but, for confidentiality reasons, they do not want to share raw data containing individual patient records. In these cases, AI processing is done where the raw data resides. After the raw data is processed, only the resulting insights, anonymized metadata or AI models are shared.
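As an illustration, here is a minimal Python sketch of this pattern, assuming hypothetical record fields ("age", "readmitted") and a simple k-anonymity-style suppression threshold. The raw rows are processed where they live; only aggregate statistics leave the premises.

```python
# Minimal sketch: process sensitive records where they reside and
# share only anonymized aggregates, never the raw rows.
# The field names ("age", "readmitted") are hypothetical examples.
from statistics import mean

def local_insights(records, min_cohort=10):
    """Aggregate on-premises; suppress cohorts too small to anonymize."""
    if len(records) < min_cohort:
        return None  # too few records to share safely (k-anonymity style)
    return {
        "count": len(records),
        "mean_age": round(mean(r["age"] for r in records), 1),
        "readmission_rate": sum(r["readmitted"] for r in records) / len(records),
    }

# Only the dict returned by local_insights() leaves the hospital;
# the raw `records` list never does.
records = [
    {"age": 71, "readmitted": True},
    {"age": 64, "readmitted": False},
    # ... more patient rows, all of which stay on-premises
]
print(local_insights(records, min_cohort=2))  # tiny demo threshold
```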

The bring-compute-to-data paradigm is also driving new federated AI learning techniques and frameworks. In a federated learning framework, analytics and model inference are moved to the edge, and only the local AI models are moved to the data centers and clouds for global AI model building and training. This means only the models move upstream rather than the raw data, as the diagram below shows.
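A minimal sketch of the idea, using only NumPy and synthetic data (this does not reflect any specific federated learning framework): each site fits a local model on data that never leaves the site, and the core data center averages the resulting weight vectors into a global model.

```python
# Minimal federated-averaging sketch: each site trains locally on its
# own data; only weight vectors (not raw data) travel upstream.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # ground truth for synthetic data

def local_fit(X, y):
    """Least-squares fit performed where the data resides."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w  # only this weight vector leaves the site

# Three edge sites, each with private data that never moves.
site_weights = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    site_weights.append(local_fit(X, y))

# Core data center: aggregate the local models into a global model.
global_w = np.mean(site_weights, axis=0)
print("global weights:", np.round(global_w, 2))  # ~ [2.0, -1.0, 0.5]
```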

Federated learning also helps ensure that raw data is kept in the location where it was generated, for data compliance. Many countries have enacted, or are in the process of enacting, data residency laws that require data to be kept in a particular geographic location. In these cases, enterprises have to do their AI processing within a particular country or geography.

Bring data and compute to a neutral location: In some cases, the data providers do not want to share their raw data and the data consumers do not want to share their AI algorithms. Consortium-based data marketplaces within a vendor-neutral global interconnection platform like Platform Equinix® make it easy for enterprises to buy and sell data and algorithms securely and compliantly, as well as build their AI models. In many instances, data is already being exchanged at the network level between different providers and enterprises at a neutral interconnection hub like Platform Equinix. Thus, it is also the optimal place to perform data exchange at a higher level, for AI model training.

Trend 2: Innovative public cloud AI models and services are driving hybrid multicloud architecture

Today, organizations across many sectors are not in a position to build AI models from scratch. Instead, they want to augment existing AI models with their own contextual data to create new models. Because these pre-built AI models require vast amounts of data and compute to train, they are generally only offered by major cloud service providers (CSPs). Enterprises want to leverage these sophisticated AI algorithms and models in the clouds for tasks such as image/video recognition and natural language translation while maintaining control over their data. Most enterprises will also want to use AI models and services from different clouds for maximum innovation and to avoid vendor lock-in. This is driving a need for distributed, hybrid multicloud infrastructure for AI data processing, as shown below.
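One way to structure such an architecture is a thin, provider-neutral routing layer, so the application can consume AI services from several clouds without hard-wiring any one vendor. The sketch below is hypothetical Python; the provider adapters are stubs standing in for real cloud SDK calls.

```python
# Sketch of a provider-neutral inference layer for hybrid multicloud AI.
# The providers and adapters are hypothetical placeholders; in practice
# each adapter would wrap a real cloud vision/translation SDK.
from typing import Callable, Dict

# task -> preferred provider adapter; swapping providers means editing
# this table, not the application code (helping avoid vendor lock-in).
ROUTES: Dict[str, Callable[[bytes], str]] = {}

def register(task: str):
    def wrap(fn: Callable[[bytes], str]):
        ROUTES[task] = fn
        return fn
    return wrap

@register("vision")
def cloud_a_vision(payload: bytes) -> str:
    return f"cloud-A vision result ({len(payload)} bytes)"  # stub

@register("translation")
def cloud_b_translate(payload: bytes) -> str:
    return f"cloud-B translation ({len(payload)} bytes)"  # stub

def infer(task: str, payload: bytes) -> str:
    """Dispatch to the cloud chosen for this task; data would be
    anonymized before leaving the neutral location (not shown)."""
    return ROUTES[task](payload)

print(infer("vision", b"\x89PNG image bytes"))
```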

Platform Equinix provides connectivity to over 2,900 cloud and IT service providers across 55+ metros on a single interconnection platform, enabling enterprises to easily deploy hybrid multicloud architectures with the provider of their choice. And with Equinix Fabric (formerly ECX Fabric®), businesses can establish secure, high-speed, software-defined connectivity to other locations, partners or businesses within minutes via a self-serve portal. This includes storage as a service (from partners), which enterprises can use to facilitate this hybrid AI model. In addition, with the acquisition of Packet, Equinix now also provides automated bare metal compute as a service, which enables enterprises to anonymize their data before moving it to the public clouds for processing.

Trend 3: Growing data volumes, latency, cost and regulatory considerations are shifting AI data management and processing to a “cloud-out and edge-in” architecture

Data is growing exponentially everywhere, including at the edge. For example, a connected car can generate 3 terabytes or more of data a day, while a smart factory can generate 250 times that much.[i] AI processing is moving to the edge for cost, latency and compliance reasons. As shown in the figure below, there are different types of edges, which affect where AI processing is placed.

Cloud-out and edge-in are two key phenomena describing how AI processing is moving from a centralized model to a distributed model.

Cloud-out means some AI processing is moving out to the edge: Both AI training and inference operations are moving from the centralized cloud to the edge as follows:

AI inferencing is moving to the edge: Many real-time applications, such as video surveillance, augmented/virtual reality (AR/VR) and multiplayer gaming, cannot tolerate the latency of sending requests to an AI model in the core clouds and waiting for a response. For these use cases, AI inference needs to happen at the device edge or the micro edge. Many video surveillance and smart store shopping use cases need round-trip network latency of 15-20 ms, and in many markets existing Equinix data centers can provide round-trip latency of less than 5 ms, making them well suited to host these AI inference use cases.

AI training is moving to the metro edge: As more data is generated at the edge and IoT datasets become larger, companies do not want to backhaul this data over costly, slow, high-latency networks to a core cloud for AI model training. Also, certain types of data must be kept on-premises for data privacy or residency; 132 countries have already enacted, or are in the process of adopting, data privacy/residency laws.[ii] These constraints are ideally suited to federated AI training techniques that train AI models at the edge (bring compute to data) and then aggregate these local (potentially suboptimal) AI models at a core data center to build better global AI models. Moving analytics closer to the edge also improves performance and cost efficiency.

Edge-in means deep learning and model training are moving in from the far edge to the metro edge:

AI inference is moving to the metro edge: Latency-sensitive AI inference cannot take place in a public cloud. In many cases, inference operations can take place at the device, micro or metro edge. However, for cost and data-fusion reasons, it is beneficial to move up the edge hierarchy (edge-in). For example, smart cameras can do AI inference on the device, but these devices can be costly. Alternatively, regular cameras could be used if the AI inference processing is moved higher up the edge hierarchy to a micro or metro edge (depending on the latency requirements). Except for life-critical operations that require less than five milliseconds (5 ms) round-trip latency, this satisfies real-time latency requirements at more optimal cost points, as the placement sketch below illustrates. In many use cases, data from additional external sources and databases also needs to be fused in to improve model accuracy; due to their compute and storage requirements, these additional data sources often cannot be easily hosted at device or micro edges. And with the emergence of 5G networking technology, more processing can move from devices to the micro and metro edges thanks to lower latency and better bandwidth.
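The edge-in tradeoff can be expressed as a simple placement rule: host inference at the cheapest tier whose round-trip latency still meets the application’s budget. Below is a minimal Python sketch; the latency and relative-cost figures are illustrative assumptions, not measurements.

```python
# Sketch of the "edge-in" placement rule: pick the cheapest edge tier
# whose round-trip latency fits the budget. Latency/cost numbers are
# illustrative assumptions, not measurements.
TIERS = [  # ordered from far edge (costly) to core cloud (cheap)
    ("device edge", 1,  10.0),   # (name, rtt_ms, relative cost)
    ("micro edge",  5,   4.0),
    ("metro edge",  15,  1.5),
    ("core cloud",  60,  1.0),
]

def place_inference(latency_budget_ms: float) -> str:
    """Return the cheapest tier that still satisfies the latency budget."""
    feasible = [(cost, name) for name, rtt, cost in TIERS
                if rtt <= latency_budget_ms]
    if not feasible:
        raise ValueError("no tier meets this latency budget")
    return min(feasible)[1]

print(place_inference(20))  # -> 'metro edge' (e.g., video surveillance)
print(place_inference(4))   # -> 'device edge' (life-critical, < 5 ms)
```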

AI training at the metro edge: Hardware that does AI model training has high power requirements (30-40 kW for a fully loaded rack), so it cannot be hosted at the micro edge. Furthermore, most private data centers are not equipped to handle more than 10-15 kW per rack, so AI training hardware typically needs to be hosted at a colocation data center. It is also beneficial to colocate AI training hardware at an interconnection-rich data center like Platform Equinix (as shown in the figure below) due to: 1) high-speed connectivity to multiple clouds and networks, 2) a global footprint that helps you adhere to data residency requirements and 3) a dynamic global ecosystem of nearly 10,000 companies. In many cases, the external company that has the required data already has a footprint at Equinix.

Test drive AI as a service at the metro edge

Equinix, in partnership with NVIDIA, NetApp and Core Scientific, is providing an AI as a service test drive sandbox on Platform Equinix with the following benefits:

  • A cloud-native (container-based) service that makes it easy for data scientists to consume AI services
  • An industry-leading AI technology stack of high-performing compute, network and storage technologies, optimized for running all the major AI software frameworks
  • A global, highly interconnected data center platform that provides high-speed, secure interconnection to IT systems and data sources spread across public clouds, private data centers and edge locations
  • Proximity to the edge in most metros (less than 10 ms from end devices), allowing AI inference and training at the metro edge and helping businesses meet data residency requirements while avoiding costly, slow, high-latency long-haul networks
  • High-speed, low-latency connectivity to the public clouds (1-2 ms to most major clouds in strategic metros) through Equinix Fabric, facilitating hybrid multicloud AI architectures
  • Access to the world’s largest digital and business ecosystems of clouds, network providers, financial companies, media companies and other enterprises on Platform Equinix, enabling businesses to share and exchange the data and AI models they need to accelerate the development of AI infrastructures, products and services

 

Learn more about test driving the Equinix AI as a service, powered by NVIDIA, Core Scientific and NetApp.

You may also be interested in watching the webinar Accelerating Digital Transformation with AI and reading the white paper “Artificial Intelligence: From the Public Cloud to the Device Edge.”

 

[i] Cisco, “Connected Car – The Driven Hour,” Feb 2019; IBM, “Smart Factory”: the average factory generates 1 TB of production data daily.

[ii] Graham Greenleaf, “Global Tables of Data Privacy Laws and Bills (6th Ed, January 2019),” Supplement to 157 Privacy Laws & Business International Report (PLBIR), Feb 2019, SSRN.
