Cloud vs. Edge

When to go cloud-out and edge-in

Kaladhar Voruganti
Jacob Smith

The amount of data generated at the edge by machines, businesses and humans is increasing at an exponential pace. Data has gravity, and for cost, performance and privacy reasons it is increasingly untenable to move massive datasets to a centralized location for processing. We are entering an era of computing in which, instead of moving data to a centralized location where compute is plentiful, it is both desirable and possible to move the compute close to where the data is generated. This shift is accelerating the rise of edge computing.

Edge computing fuels the Fourth Industrial Revolution and massive opportunities

According to the Linux Foundation’s 2021 State of the Edge Report [1], “As a natural extension of cloud computing, the edge cloud construct is increasingly viewed as a key enabler for the ‘Fourth Industrial Revolution’ in which the widespread deployment of the Internet of Things (IoT), the global sharing economy and the increase of zero marginal cost manufacturing deliver unprecedented communication-driven opportunities with massive economies of scale.”

Four factors are driving enterprises to the edge:

  1. Reduced latency
  2. Optimized bandwidth utilization
  3. Requirements for offline or autonomous operation
  4. Adherence to regulatory or security guidelines based on physical location, such as a province or country

Many types of industries — from gaming and digital content delivery to artificial intelligence (AI) and augmented reality/virtual reality (AR/VR) — are impacted by one or more of these factors.

Edge computing basics

The internet is incredibly diverse. As such, there are many “edges” depending on what you’re looking at, so it helps to think of them as a hierarchy of edges. The layers in the hierarchy vary with respect to round-trip latency from the end devices, the amount of compute/storage capacity and the total number of edges at that level. For example, there are billions of device edges, thousands of far and micro edges, hundreds of macro edges and tens of public clouds. It is important to note that there is no single architectural template that satisfies all edge computing use cases. Furthermore, edge computing happens in conjunction with activity in more centralized public clouds, and thus edge and cloud computing should be viewed as complementary.

The following definitions of terms, as illustrated in the diagram below, will help set the stage for digging into the technologies developed to support various use cases for edge-in and cloud-out design patterns.

  • Device Edge: Smartphones, smart cars, surveillance cameras, etc. are examples of device edges. There are billions of device edges, and they vary in their capabilities with respect to compute power, network connectivity and power/battery capacity.
  • Far Edge: These are edges that contain a small amount of compute power (2-5 racks) and exist in stadiums, cell tower stations, store closets, apartment basements, parking lots, etc. These edges are typically within 1-5 ms round trip time (RTT) latency of where the data is generated.
  • Micro Edge: These are cloud edge zones (e.g., AWS Wavelength), telco central offices and traffic aggregation points. These edges are within 5-10 ms RTT latency of where the data is generated.
  • Macro Edge: These are interconnection hubs, multi-tenant metro-level data centers and cloud local zones. These edges are typically within 10-50 ms RTT latency of where the data is generated.
  • Clouds: These are mega IaaS, SaaS and wholesale data centers with massive compute and storage capabilities. These data centers are typically within 50-100 ms RTT latency of where the data is generated.
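
To make the hierarchy concrete, here is a minimal sketch of how an application might shortlist edge tiers against a round-trip latency budget. The tier names and RTT ranges come from the definitions above; the function, capacity notes and example budget are illustrative assumptions, not a prescribed placement algorithm.

```python
# Minimal sketch: shortlist the tiers of the edge hierarchy that satisfy an
# application's round-trip latency budget. RTT upper bounds (in ms) follow
# the definitions above; capacity notes are indicative only.
EDGE_TIERS = [
    ("device edge", 0, "on-device compute, no network hop"),
    ("far edge", 5, "2-5 racks"),
    ("micro edge", 10, "cloud edge zones, telco central offices"),
    ("macro edge", 50, "metro-level multi-tenant data centers"),
    ("cloud", 100, "hyperscale compute and storage"),
]

def candidate_tiers(rtt_budget_ms: float) -> list[str]:
    """Return every tier whose typical RTT fits within the budget,
    ordered from nearest (least capacity) to farthest (most capacity)."""
    return [name for name, max_rtt, _ in EDGE_TIERS if max_rtt <= rtt_budget_ms]

# Example: an AR/VR workload with a 10 ms budget can run at the device,
# far, or micro edge, but not at a macro edge or central cloud.
print(candidate_tiers(10))  # ['device edge', 'far edge', 'micro edge']
```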

Key developments in edge computing technology are powering modern digital infrastructure

The confluence of technological advancements in the following key areas is accelerating the growth of edge computing architectures:

  • Container technology: With the advent of container-level virtualization (e.g., Docker containers) and distributed container orchestration technologies (e.g., Kubernetes), it is now possible to construct distributed control planes that move compute close to where the data is generated. Thus, one does not need to move massive datasets to a centralized location for processing (see the scheduling sketch after this list).
  • Denser processing capability: Annually increasing GPU processing capability (Huang’s law) and storage density (e.g., 1 petabyte (PB) of flash storage in 1 unit of rack space) are making it possible to perform large computations in a relatively small physical footprint at the far and micro edges. For example, it is no longer necessary to move large datasets to a central cloud for AI model training. Instead, training and model inference can be performed in a federated/distributed manner at the edge (see the federated averaging sketch after this list).
  • High-speed 5G networking: With the emergence of high-bandwidth, low-latency 5G wireless networks, computation can now be off-loaded from edge devices onto other types of edges in the edge hierarchy. This saves the battery power of edge devices, since compute-heavy work can be moved to nearby edge locations. It is important for mobile application developers to adopt new software development models in order to truly take advantage of 5G networks.
  • Secure computation technologies: Increasingly, as compute moves to the edge, the providers of algorithms want privacy for their algorithms (their secret sauce). Similarly, data owners at the edge do not want to ship their raw data or local insights via the public internet to a central location for aggregation. With the emergence of federated learning and other secure computation mechanisms (differential privacy, homomorphic encryption, secure multi-party computation), it is now possible to securely process data at the edge while alleviating the privacy concerns of both data and algorithm providers.
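
To make the container item above concrete, here is a minimal sketch, using the official Kubernetes Python client, of how a control plane can pin a workload to nodes at a particular edge location so compute runs where the data is generated. The cluster, zone label value, image and workload names are illustrative assumptions.

```python
# Minimal sketch, assuming a Kubernetes cluster whose edge nodes carry the
# (hypothetical) zone label "metro-edge-1". Deploys a containerized workload
# onto those nodes instead of into a central cloud region.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the target cluster

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="video-analytics"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "video-analytics"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "video-analytics"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="video-analytics",
                    image="example.org/video-analytics:latest",  # illustrative image
                )],
                # Pin pods to nodes in the desired edge location.
                node_selector={"topology.kubernetes.io/zone": "metro-edge-1"},
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```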
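Likewise, for the federated training pattern mentioned above, the following is a small federated-averaging sketch: each edge site fits a tiny linear model on its own data and ships only the resulting weights to a central aggregator. The data, model and number of sites are entirely synthetic.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One site's training: a few gradient steps on a linear model,
    using only that site's local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([3.0, -2.0])
# Three edge sites, each with private local data that never leaves the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each site trains locally; only the resulting weights are shipped.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # The central aggregator averages the weights (FedAvg with equal site sizes).
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches [3.0, -2.0] without centralizing any raw data
```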

What is edge-in/cloud-out architecture?

As our president and CEO Charles Meyers observed during an interview on theCUBE, either the cloud or the edge can serve as an enterprise’s core. That framing is apropos as companies explore their options for moving computing and data processing out from the cloud and into the edge.

It is important to note that there is no “one size fits all” solution for edge architectures. Instead, depending upon the latency, security/privacy, cost and availability requirements, a single application architecture can span across the various layers of the edge hierarchy.

At Equinix, we are seeing “cloud-out” and “edge-in” architectural design patterns in which customers create distributed architectures that span the edge hierarchy, as shown in the diagram below, for cost, performance, privacy and availability reasons.

  • Cloud-out refers to moving application processing from the core/central cloud to the edge for performance (latency), data transfer cost, compliance/privacy and availability (disconnected operation) reasons. It is important to note that usually a part of the application still runs in the central cloud. With the advent of container technology, it is now possible to move compute to where the data is generated, and depending upon the requirements of the use case, processing is done at the appropriate level of the edge hierarchy. Cloud-out use cases include enabling cloud services such as AI/ML, real-time analytics and security at the edge, right next to where the data is created. Data gravity at the edge makes it logical to move the compute to the data rather than the data to the cloud. As cloud hyperscalers recognize that much of their customers’ data must remain at the edge for latency and security reasons, they are developing on-premises solutions, such as AWS Outposts, that run the first copy of a workload where it originated and store a second copy in the cloud. In the context of AI, it is already common to train a model in a cloud and then move it to the edge for inference. Increasingly, however, application architects also want to do model training at the edge, because organizations do not want to ship raw data to a central location. We are now entering the era of federated AI, in which organizations build local AI models on local data, then ship and aggregate the local model weights at a central location to create a global model. Federated AI both reduces the cost of shipping raw data to a central location for training and preserves data privacy, since only model weights, not the raw data, leave the edge.
  • Edge-in refers to an enterprise designing and building an architecture for edge applications, servers and gateways where there is an application footprint at an edge location. Usually, the microservices at the edge run in conjunction with services running in the clouds, coordinated in an integrated manner via a distributed control plane. Applications process data at the edge for performance, cost, security/privacy and availability reasons. However, in the edge-in phenomenon, we are noticing that organizations are not placing their edge infrastructure at every possible far edge or micro edge location, but instead at the metro level for cost, data aggregation and performance (latency) reasons. For example, if a convenience store has twenty branches in a particular metro and wants to process real-time video feeds for surveillance and to improve the customer shopping experience (e.g., presenting digital coupons based on a shopper’s location in the store), it needs real-time responses that cannot be achieved by sending the video feeds to a public cloud. Going back to the AI example, businesses don’t want to install an AI inference stack at each of their locations in a metro for cost reasons; instead, they host an AI stack at a single location in the metro (edge-in), as sketched below. Similarly, with the advent of 5G networks, instead of performing computation on each end device, one can move up the hierarchy (edge-in) and process in a micro data center while still satisfying stringent latency requirements.
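
To ground the convenience-store example, here is a minimal sketch of a branch client sending camera frames to a single shared inference service hosted at a metro-edge site, rather than running a full AI stack per store or shipping feeds to a distant public cloud. The endpoint URL, payload shape and latency budget are hypothetical assumptions.

```python
import requests

# Hypothetical shared inference endpoint at a metro-edge site; every branch
# in the metro sends its frames here instead of hosting its own AI stack.
METRO_INFERENCE_URL = "https://inference.metro-edge.example.com/v1/detect"

def analyze_frame(branch_id: str, jpeg_bytes: bytes) -> dict:
    """Send one camera frame to the shared metro-edge inference service."""
    resp = requests.post(
        METRO_INFERENCE_URL,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        data={"branch_id": branch_id},
        # A tight budget like this is plausible within a metro (10-50 ms RTT),
        # but not for a round trip to a faraway public cloud region.
        timeout=0.05,
    )
    resp.raise_for_status()
    return resp.json()  # e.g., detected shoppers/zones for coupon targeting
```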

Building out your edge on Platform Equinix®

Today it is critical for enterprises to put workloads in the right places, connect to the right partners, and orchestrate for automation, control and security. Equinix provides a global platform for creating and interconnecting digital infrastructure to give businesses a competitive advantage. For example, by placing digital infrastructure at the metro edge on Platform Equinix in global Equinix International Business Exchange™ (IBX®) data centers, enterprises and service providers gain proximity to edge services at scale. This includes access to Equinix Metal™, an automated, interconnected bare-metal-as-a-service (BMaaS) offering that enables born-in-the-cloud businesses to go cloud-out and edge-in, scaling their compute and storage resources on demand as easily as if they were in the public cloud.

Equinix Fabric™ provides seamless software-defined interconnection to more than 10,000 businesses on Platform Equinix, and Network Edge delivers virtual networking and security services that help companies modernize their networks and deploy digital infrastructure at the edge virtually, in minutes. These solutions enable our customers to deploy agile and resilient edge services and access between the metro edge and public clouds without additional CAPEX.

As the world’s digital infrastructure company, Equinix gets businesses into more metro edge locations via its global footprint, enabling digital leaders to create and interconnect foundational digital infrastructure building blocks on demand. Equinix enables digital leaders, in particular cloud-native companies, to architect their edge on a trusted, global platform. Leveraging Equinix Metal to consume compute and storage infrastructure as a service, and Network Edge to deploy virtual networking and security in an OPEX model, saves deployment time and costs. And Equinix Fabric seamlessly interconnects it all across our global IBX portfolio.

By placing digital infrastructure (network, compute and storage) proximate to public clouds at the edge, your company can deploy cloud-out/edge-in architectures that lower application latency, improve performance, reduce costs and deliver greater data privacy/compliance.

To learn more, read the Platform Equinix Vision Paper.

 

[1] 2021 State of the Edge Report, The Linux Foundation, 2021

Kaladhar Voruganti, Senior Business Technologist
Jacob Smith, Former VP, Bare Metal Strategy & Marketing