Where is the Digital Edge?

To balance latency and compute requirements, enterprises must deploy the right infrastructure in the right places

Kaladhar Voruganti
Oleg Berzin

The growing importance of the digital edge has perhaps been the defining story in IT over the last several years. Recent events, like the move to remote work and the buildout of 5G, have accelerated a shift that was already underway. The traditional approach to IT infrastructure—where all traffic flows through a core data center or cloud location—no longer makes sense in a world where enterprises need to limit latency and reach more end users in more places.

The old way of doing things was all about moving the data to the compute. Now, data sets are growing ever larger, and we’ve reached the upper limit of how quickly we can transport them over long distances. To continue supporting data-intensive applications without being overwhelmed by latency and data transfer costs, enterprises have begun moving compute capabilities closer to where data is generated—that is, they’ve begun deploying infrastructure at the digital edge.

The right approach to infrastructure at the digital edge can unlock significant technical and business value. A commissioned report from 451 Research, part of S&P Global Market Intelligence, put it best:

“The entire technology sector, driven by evolving enterprise digital requirements, is working to enable edge as part of a disruptive pendulum swing of digital infrastructure capacity away from the centralized cloud and toward end users, devices and machines. Delivered in a modern, cloud-native format, the edge is one of the most important digital enablers, evolving alongside and directly reinforcing AI/machine learning (ML), IoT, public cloud and 5G.”[1]


Despite the focus enterprises are placing on it, many organizations find it difficult to reach an internal consensus about exactly what the digital edge is, or more importantly, where the digital edge is. In this blog, we will attempt to provide some clarity around this question.

Enterprises deploy at different edges for different purposes

The reason the digital edge can be so confusing for some business leaders is that it doesn’t exist in any one physical location. In reality, there’s a hierarchy of edges where businesses might deploy infrastructure. A single application composed of multiple microservices can even place different microservices at different edge locations in the hierarchy.

Presently, there’s no standardized nomenclature in the industry for the different types of edges in the edge hierarchy. Industry analysts, cloud service providers (CSPs) and network service providers (NSPs) all use different names and definitions for the different types of edges. We at Equinix define the edge hierarchy as shown in Figure 1.

Figure 1: Edge Hierarchy

However, irrespective of what different organizations call these “edge” locations, the following key attributes help define the different types of edges (a brief placement sketch follows the list):

  • Latency or distance to the edge device: Round-trip network latency from the end user device to the particular edge location is the key attribute that helps differentiate between the different types of edges in the edge hierarchy. Depending on the application use case, different edge locations will be more suitable.
  • Infrastructure capabilities: The amount of physical space, compute capability, storage, networking and power/battery capabilities provided also helps differentiate between the different types of edges. For example, AI training workloads require a lot of computation power, and this amount of compute is typically not available at the device, far or micro edges.
  • Number of devices and hierarchy: The number of edge locations usually increases as one traverses down the hierarchy, closer to the end devices. Traffic from edges further down the hierarchy gets aggregated or analyzed at edges higher up in the hierarchy, so the higher-level edges have access to information from multiple lower-level edges.
  • Single-tenant or multitenant: Typically, device edges serve a single user. There can be multiple applications running on a device edge, but they are usually within the security domain or control of a single user. Far edges can be either single-tenant or multitenant, depending on location. For instance, the far edge in the closet of a retail store is usually single-tenant, whereas the far edge in a stadium or apartment building is usually multitenant. Similarly, micro edges can be controlled by a single NSP-CSP combination or be multitenant. Resources at all the edges can be either physical or virtual (VMs, containers), but these days, they are increasingly likely to be virtual.
  • Type of owning organization: Edges can be owned by different entities. For example, device edges can be owned by either individuals or corporations. Far edges are usually owned by NSPs, but increasingly, apartment building, parking lot, mall and stadium operators own far edges and rent out space to NSPs or network equipment manufacturers to host their equipment. Micro edges are usually owned by NSPs in partnership with CSPs, but data traffic aggregation locations are often used as micro edges as well. Macro edges are typically operated by colocation data center, cloud and network service providers.
  • Type of functionality: Historically, far edges and micro edges were used by NSPs to run their mobile or networking services. For the most part, device edges run end-user applications. In the 5G world, application-level services are increasingly hosted in far edges and micro edges. Macro edges host both network interconnection services and application-level compute and storage services. Clouds mostly host application services but are also beginning to host more mobile networking core services. In addition, due to advances in 5G and edge computing, some applications that would have typically run on user devices can now run in micro or even macro edge locations. Examples include virtual desktops and virtual set-top boxes.
  • Security: Typically, the nodes higher up in the edge hierarchy (macro edges or cloud nodes) provide greater levels of physical security for hosting edge infrastructure. Automated video surveillance and software-level encryption and authentication provide some security at the device, far and micro edges, but since these locations are not manned 24×7, they are susceptible to physical threats such as break-ins or vandalism.
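
To make the latency and capability attributes concrete, here’s a minimal sketch in Python of how an application might pick the highest-capacity tier that still fits its latency budget. The latency bands here are illustrative assumptions for the sketch, not measured or standardized values:

```python
# Illustrative round-trip latency bands per edge tier, ordered from lowest to
# highest latency. These numbers are assumptions, not Equinix specifications.
EDGE_TIERS = [
    ("device edge", 1),   # on the device itself
    ("far edge", 5),      # e.g., cell site or building closet
    ("micro edge", 10),   # e.g., traffic aggregation point
    ("macro edge", 30),   # e.g., metro colocation data center
    ("cloud", 80),        # centralized cloud region
]

def lowest_suitable_tier(latency_budget_ms: float) -> str:
    """Pick the tier highest up the hierarchy (most compute and storage)
    whose typical round-trip latency still fits the application's budget."""
    suitable = [name for name, rtt_ms in EDGE_TIERS if rtt_ms <= latency_budget_ms]
    # If nothing fits, only on-device processing can meet the budget.
    return suitable[-1] if suitable else "device edge"

print(lowest_suitable_tier(20))   # -> micro edge
print(lowest_suitable_tier(100))  # -> cloud
```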

As evident from the discussion above, new types of edges are being inserted into the edge hierarchy to address application data transfer costs, network connectivity, and latency and privacy requirements, often by distributing an application’s microservices across tiers. There is no single architecture that’s suitable for all use cases. Different applications will leverage different combinations of edges in the edge hierarchy.

Understanding the digital edge through examples

Here are some concrete examples that illustrate how the various types of edges are utilized in different scenarios.

Distributed AI

The two main workloads required to enable AI are model training (creating models based on sample data sets) and model inferencing (extracting insights by feeding new data into trained models). These capabilities have drastically different requirements when it comes to compute, storage, power and latency.

Model training requires organizations to store and process very large volumes of data. For this reason, larger data centers (like those found at the macro edge) or public clouds are usually best for supporting model training. In contrast, AI model inferencing is increasingly occurring at the far edge, micro edge or macro edge, for cost, privacy and latency reasons. If you’re using AI models to support critical decision-making, you’ll want to ensure you’re feeding real-time data into those models.

Similarly, it’s expensive to move very large data sets to a central location. In these situations, it makes sense to run inferencing over those data sets at far edge, micro edge or macro edge locations within the same metro. In some cases, it also makes sense to do AI model training at the macro edge in a federated manner, to reduce data backhaul networking costs.
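
As a rough illustration of this placement logic, here’s a toy Python heuristic. The thresholds and tier choices are assumptions for the sketch, not product guidance:

```python
def place_ai_workload(workload: str, data_set_gb: float,
                      latency_budget_ms: float) -> str:
    """Toy heuristic for placing AI workloads across the edge hierarchy."""
    if workload == "training":
        # Training needs large compute/storage farms; when data sets are too
        # big to backhaul economically, train federated at metro macro edges.
        return "macro edge (federated)" if data_set_gb > 1_000 else "cloud"
    # Inference is latency- and privacy-sensitive, so push it toward the data.
    if latency_budget_ms < 10:
        return "far edge"
    if latency_budget_ms < 30:
        return "micro edge"
    return "macro edge"

print(place_ai_workload("training", 5_000, 0))  # -> macro edge (federated)
print(place_ai_workload("inference", 1, 8))     # -> far edge
```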

Decentralized network management

Historically, enterprises routed traffic from branch locations to a centralized location to apply network security policies before routing the traffic to the cloud. However, the amount of traffic generated at the edge has increased dramatically (even more so during the COVID-19 pandemic), leading enterprises to apply security policies at a macro edge in each metro before routing that traffic to the cloud.
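
A minimal sketch of the shift, with hypothetical metro and hub names:

```python
# Hypothetical metro-level policy enforcement points (macro edges).
METRO_HUBS = {"dallas": "DA-hub", "frankfurt": "FR-hub", "singapore": "SG-hub"}

def egress_path(branch_metro: str, cloud_region: str) -> list:
    """Send branch traffic through its local metro hub for security policy
    enforcement, rather than backhauling everything to one central site."""
    hub = METRO_HUBS.get(branch_metro, "central-hub")  # legacy fallback path
    return [f"branch:{branch_metro}", f"policy:{hub}", f"cloud:{cloud_region}"]

print(" -> ".join(egress_path("frankfurt", "eu-central-1")))
# branch:frankfurt -> policy:FR-hub -> cloud:eu-central-1
```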

Data residency compliance

A growing number of countries are creating and enforcing rules that require the private data of their citizens to be processed within their borders. This, in turn, is driving enterprises to deploy IT stacks in different countries to process personal data. Typically, these country-specific stacks are hosted at a macro edge or in a country-specific cloud availability zone.
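
A small sketch of how such routing might look; the country-to-stack mapping is hypothetical:

```python
# Hypothetical mapping from a citizen's country to an in-country stack
# (a macro edge or a country-specific cloud availability zone).
RESIDENCY_STACKS = {"DE": "macro-edge-frankfurt", "IN": "macro-edge-mumbai"}

def processing_location(user_country: str, is_personal_data: bool) -> str:
    """Keep personal data on an in-country stack; other data can go central."""
    if is_personal_data:
        # A missing in-country stack is a compliance gap to surface,
        # not a reason to silently route the data abroad.
        return RESIDENCY_STACKS.get(user_country, f"NO-STACK:{user_country}")
    return "central-cloud"

print(processing_location("DE", True))   # -> macro-edge-frankfurt
print(processing_location("US", False))  # -> central-cloud
```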

Metaverse

Solution providers create digital twins of real-world artifacts they want to use in the metaverse. Subsequently, the digital twins are used and traded in the metaverse. Creating digital twins is compute intensive, whereas using them after they’ve been created is latency sensitive. The digital twin model building work can be performed either in public clouds or in macro-level edges that can host large compute and storage farms.

Subsequently, depending on the latency requirements of the corresponding use case, the digital twin is served from the appropriate location in the edge hierarchy. Typically, AR/VR headset applications require very low latency to ensure the wearer doesn’t experience motion sickness caused by lag. For this reason, the digital twin is used at either the device edge, far edge or micro edge. There are other metaverse use cases (such as Zoom classrooms) where higher latency can be tolerated; these use cases can be satisfied even by macro edges.
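
Sketched as placement logic (the stage names and latency cutoffs are illustrative assumptions):

```python
def place_digital_twin(stage: str, latency_budget_ms: float) -> str:
    """Building a twin is compute-bound; serving it is latency-bound."""
    if stage == "build":
        # Model building needs large compute and storage farms.
        return "cloud or macro edge"
    # Serving: pick the closest tier that meets the latency budget.
    if latency_budget_ms <= 5:
        return "device edge or far edge"  # e.g., AR/VR headsets
    if latency_budget_ms <= 20:
        return "micro edge"
    return "macro edge"                   # e.g., virtual classrooms

print(place_digital_twin("build", 0))    # -> cloud or macro edge
print(place_digital_twin("serve", 50))   # -> macro edge
```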

Geographic content caching

Content providers typically deploy caches at macro edges in various metros to deliver their content to end users with low latency and to reduce network data transfer costs compared to serving content from a central location. Increasingly, content providers also offer caches that perform data processing and filtering in place, further reducing network traffic to the central location.
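
Under the hood, this is essentially a read-through cache at the macro edge. A minimal sketch:

```python
def serve_content(key: str, metro_cache: dict, fetch_from_origin) -> bytes:
    """Read-through cache at a metro macro edge: serve local copies when
    possible so repeated requests don't cross the WAN to a central origin."""
    if key in metro_cache:
        return metro_cache[key]       # metro-local hit: low latency, no WAN cost
    data = fetch_from_origin(key)     # one long-haul transfer on a miss
    metro_cache[key] = data           # keep a copy for the whole metro
    return data

cache: dict = {}
origin = lambda k: f"content for {k}".encode()  # stand-in for an origin server
serve_content("video-123", cache, origin)  # miss: fetched from origin
serve_content("video-123", cache, origin)  # hit: served from the macro edge
```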

5G

Radio access network (RAN), mobile core and multi-access edge compute (MEC) are the three main components of a 5G network. RAN components help convert wireless signals into IP traffic. The mobile core components handle signaling (e.g., device attachment, user authentication), policy (e.g., usage quotas and access control) and traffic (user plane) routing. MEC allows for the hosting of application services as part of the 5G infrastructure at the edge.

The RAN portion of the network consists of radio units (RUs), distributed units (DUs) and centralized units (CUs). The RU is located at the far edge, the DU is located close to the RU (at either the far edge or the micro edge), and the CU resides at either the micro edge or the macro edge.

The mobile core portion of the network can be hosted in a public cloud or at the macro edge. One very important 5G core component is the user plane function (UPF), which is responsible for mobile traffic routing before it reaches MEC. The UPF can be placed at the far edge, micro edge, macro edge or cloud core, depending on latency and throughput requirements. MEC is typically located after the UPF at the far edge or the micro edge, to satisfy the low-latency application requirements and to process the large amounts of data being generated at the edge. Figure 2 shows the different ways 5G services can be deployed across the different types of edges.

Figure 2: Edge Hierarchy for 5G
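
The placements described in this section can be captured in a small validity check. This sketch simply encodes the tier options named above; the function and variable names are ours:

```python
# Allowed edge tiers for each 5G function, per the discussion above.
PLACEMENT_OPTIONS = {
    "RU": {"far edge"},
    "DU": {"far edge", "micro edge"},
    "CU": {"micro edge", "macro edge"},
    "UPF": {"far edge", "micro edge", "macro edge", "cloud"},
    "MEC": {"far edge", "micro edge"},
    "mobile core": {"macro edge", "cloud"},
}

def valid_5g_layout(layout: dict) -> bool:
    """Check a proposed deployment against the placement options above."""
    return all(tier in PLACEMENT_OPTIONS[function]
               for function, tier in layout.items())

print(valid_5g_layout({"RU": "far edge", "DU": "far edge", "CU": "macro edge",
                       "UPF": "micro edge", "MEC": "micro edge",
                       "mobile core": "cloud"}))  # -> True
```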

Take a data-first approach to designing your edge infrastructure

As the examples above show, you must first determine exactly what you hope to accomplish before you can determine where it makes sense to do it. Start by looking at your use case, and then work backward from there. Where is your data currently? Where are you hoping to move it, and for what purpose? What technical challenges might stand in the way of you achieving your goals? Does it make sense to take an edge-in or cloud-out approach, or some combination of both? Only after you’ve thoroughly considered these questions can you make an educated decision about what your ideal edge infrastructure might look like.

As shown in Figure 1, Equinix IBX® data centers can satisfy the computation, security, and latency requirements of the majority of existing use cases. In addition, Equinix Fabric™, our software-defined interconnection solution, can help you interconnect different edges in your solution. Furthermore, Equinix offerings like Equinix Metal™, which provides automated Bare Metal as a Service, and Network Edge, which offers virtual network functions from top providers, can aid in running a distributed application stack. Together, these capabilities can help you scale infrastructure at the edge, quickly and cost-effectively. Figure 3 shows the different metros across the world where customers are currently deploying their edge footprints.

Figure 3: Equinix Fabric Global Footprint

To learn more about how Equinix can help you build out your ideal edge infrastructure, read the Platform Equinix vision paper today.

 

[1] Rich Karpinski, “The Role of Datacenter Services in Multi-Access Edge Computing,” 451 Research, part of S&P Global Market Intelligence, May 2022.

 

Kaladhar Voruganti, Senior Business Technologist
Oleg Berzin, Senior Distinguished Engineer, Technology and Architecture, Office of the CTO