As the world grows increasingly interconnected, it can be easy to forget that the digital applications we use every day do still have a presence in the physical world. The underlying data centers and network infrastructure that make those applications possible all have to be located somewhere, and exactly where they’re located can have a big impact on performance and user experience.
This is due to the phenomenon known as latency, or more specifically, network latency. Any time data packets cross a network, there's a built-in upper limit to how fast they can travel. "Latency" refers to the delay that occurs while a system waits for data to complete its transit.
All digital transactions incur some level of latency, regardless of the distance or the amount of data in transit. High latency inevitably leads to network performance issues such as buffering, jitter and packet loss. In contrast, keeping latency low is essential to making the most of your network's available bandwidth, and therefore to enabling advanced digital technologies such as 5G wireless networks, artificial intelligence and machine learning (AI/ML) and automation.
Technically speaking, latency refers only to the one-way delay that occurs while data is passing from Point A to Point B. However, it's much more common to talk about latency in terms of round-trip time (RTT): the total delay from the moment an end user initiates a request until they receive a response from the application. It makes sense to think in terms of two-way latency, as that's what determines the extent of the impact on user experience.
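As a concrete illustration, here's a minimal Python sketch that approximates RTT by timing TCP handshakes. The host and port are placeholders, and dedicated tools such as ping measure this more precisely with ICMP:

```python
import socket
import time

def measure_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Approximate round-trip time by timing TCP handshakes to a host."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection returns once the TCP handshake completes,
        # which takes roughly one round trip
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)  # the minimum filters out transient queuing noise

print(f"Approximate RTT: {measure_rtt_ms('example.com'):.1f} ms")
```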
Latency is an unavoidable byproduct of distance
Geographic distance between users and systems is the most fundamental cause of network latency. Today's empowered users tend to assume that digital transactions occur in real time, but this fails to account for the laws of physics. Data can move no faster than the speed of light, which means it will inevitably take some amount of time for data to move from one point to another: the farther the data has to travel, the higher the latency.
It’s also important to remember that today’s digital applications depend on a complex web of distributed systems. This means that even when the initial latency between the end user and the application front-end seems manageable, the true end-to-end system latency may be much higher. Multiple systems in different places may have to communicate and share data with one another before the application can fulfill a user’s request. This may ultimately create a delay that’s long enough to impact application performance.
You can lower latency somewhat by optimizing your network for maximum efficiency—including building a more direct route with fewer “hops”—but these technical workarounds can only get you so far. Regardless of the transmission medium used, an end user located 100 km away from an application’s servers will inevitably experience higher latency than one located 10 km away. The only true way to keep latency low is to shorten the distance data has to travel over your network connections. To do this, you must deploy distributed digital infrastructure in proximity to the digital edge. In short, you need your compute and storage infrastructure to be as close as possible to end users and other data sources, so that you won’t have to move your data far in order to use it.
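To see why, consider best-case propagation delay. Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km/s, or 200 km per millisecond. Here's a minimal sketch of the resulting floor on RTT, ignoring routing hops, queuing and processing, all of which only add to it:

```python
FIBER_KM_PER_MS = 200  # light in fiber covers ~200,000 km/s, i.e. ~200 km/ms

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber; real RTT is higher."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (10, 100, 1000):
    print(f"{km:>5} km away -> RTT floor of {min_rtt_ms(km):.2f} ms")
# 10 km -> 0.10 ms, 100 km -> 1.00 ms, 1000 km -> 10.00 ms
```

No amount of tuning gets you below that floor; only shortening the distance does.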
High latency is the bottleneck that prevents your network infrastructure from functioning as efficiently as it should. A useful way to quantify this is the bandwidth-delay product (BDP). As the name suggests, this metric is calculated by multiplying bandwidth (capacity) by delay (latency), and it tells you how much data must be "in flight" at any moment to use the link's full capacity. Because sending systems wait for acknowledgement from receiving systems before transmitting additional data, the longer that acknowledgement takes to arrive, the more unacknowledged data the sender must keep in flight, and the less likely it is that the network will use the full capacity available to it. Networks with both high bandwidth and high latency, so-called long fat networks (LFNs), often can't completely "fill the pipe" at any one time.
Consider two network connections with identical bandwidth but different levels of latency:
- Network 1: 100 Mbps, 50 ms RTT
- Network 2: 100 Mbps, 20 ms RTT
In this simple example, Network 1 has the higher bandwidth-delay product: a sender must keep about 610 KB in flight to fill the pipe, versus about 244 KB on Network 2. With the same fixed sending window, Network 2 will therefore achieve higher effective throughput. This shows that to truly optimize your networks, bandwidth alone is not enough.
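To make the arithmetic concrete, here's a minimal Python sketch that computes the BDP for each network and the throughput a fixed-size sending window can sustain. The 64 KB window is an illustrative assumption, roughly the classic TCP window size without window scaling:

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3)

def throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Max throughput when at most `window_bytes` can be unacknowledged."""
    return window_bytes * 8 / (rtt_ms / 1e3) / 1e6

WINDOW = 64 * 1024  # assumed 64 KB sending window, for illustration

for name, rtt in (("Network 1", 50), ("Network 2", 20)):
    print(f"{name}: BDP = {bdp_bytes(100, rtt) / 1024:.0f} KB, "
          f"throughput with 64 KB window = {throughput_mbps(WINDOW, rtt):.1f} Mbps")
# Network 1: BDP = 610 KB, throughput with 64 KB window = 10.5 Mbps
# Network 2: BDP = 244 KB, throughput with 64 KB window = 26.2 Mbps
```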
Extremely low latency is required to support many modern use cases
While all applications suffer some amount of performance degradation due to high latency, applications that rely on human-to-machine interaction are generally more tolerant of latency than those that rely on machine-to-machine interaction. This is simply because machines can perform tasks much more quickly than humans can, and with high latency standing in the way, they won't be able to perform those tasks as quickly as they otherwise would.
Early digital applications were built at human speed. These applications were more tolerant of latency because the human would always be a bigger impediment to speed than the latency would. If it takes humans several minutes to read a webpage, they’re not going to notice if that webpage takes a couple extra milliseconds to load.
In contrast, many of the advanced digital applications emerging today must be built at machine speed. Since the potential top speed is so much higher for machines, the impact is much greater when latency prevents them from reaching it. For this reason, many of these machine-to-machine use cases depend on edge compute infrastructure, deployed extremely close to data sources to ensure RTT latency of less than 10 milliseconds.
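Working backward from that budget with the fiber propagation figure above (about 200 km per millisecond): a 10 ms round trip allows at most roughly 1,000 km of one-way fiber distance even with zero processing time. Once data ingestion, computation and routing overhead are counted, the practical radius shrinks to a small fraction of that, which is what "extremely close" means in concrete terms.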
Use case: Connected vehicles
When it comes to designing fully autonomous vehicles, one of the biggest challenges automakers face is ensuring those vehicles can detect and safely avoid vulnerable road users (VRUs) such as pedestrians and cyclists. Multiaccess edge compute for the automotive industry (MEC4Auto) is the technology that can enable this. It allows autonomous vehicles to tap into data provided by video surveillance systems, mobile devices and other connected vehicles to identify where VRUs currently are and predict where they may be going next.
The MEC4Auto architecture needs to be able to take in a constant stream of data from these diverse sources, process that data to gain insights and then relay those insights back to the vehicle, all within milliseconds. Only by placing digital infrastructure in very close proximity to connected vehicles can we get latency low enough to give those vehicles near real-time visibility into their surrounding environments.
Use case: High-frequency trading
In today's electronic markets, capturing the first-mover advantage is essential to success. High-frequency traders (HFTs) must make decisions based on the most recent information available, which is challenging because that information is always changing. If these firms can't minimize their response times, they'll end up making trades based on outdated information. For example, HFTs use algorithms to identify arbitrage opportunities: inefficiencies in the market that they can capitalize on to drive profits. These inefficiencies typically disappear within a few milliseconds, which makes minimizing latency essential.
To keep latency as low as possible, global trading firms need access to digital infrastructure in proximity to all major financial hubs. One powerful example of how infrastructure investments deliver lower latency for trading firms is the new EllaLink subsea cable system between Europe and South America. EllaLink is the first direct connection between these two continents. Previously, traffic moving between Europe and South America would have to pass through the U.S. first, taking an indirect route that contributed to higher latency.
With a direct route that requires only a single hop, EllaLink enables transatlantic latency as low as 60 ms, roughly a 50% improvement over the previous indirect route. The new cable effectively brings markets in Europe and Brazil closer together, helping firms operate successfully across both regions.
Address all the causes of network latency
Since higher latency is driven by technology issues and distance, you need a strategy that can solve for both.
Optimizing technology
First, you need to pick the right technology to move your data. Using a public internet connection can further exacerbate the latency and performance issues caused by distance: data crossing the public internet doesn't follow a direct path from source to destination, and it has to share capacity with other users' traffic.
In contrast, a private interconnection solution such as Equinix Fabric® can provide the most direct route possible between two points. A recent technical benchmark confirmed that Equinix Fabric offers performance benefits of up to 28x compared to the public internet. Most importantly, the benchmark found that the performance benefits increased as the distance of the connection increased.[1] That is, using Equinix Fabric helped offset the impact of latency.
Optimizing proximity
Even accounting for the performance benefits of interconnection, there's still no substitute for geographic proximity. As we've established in this blog, the relationship between distance and latency follows from the fundamental laws of physics. Latency can never be fully eliminated; you can only minimize it by minimizing the distance your data has to travel.
This is one reason global enterprises are increasingly turning to distributed digital infrastructure with the help of a vendor-neutral colocation partner such as Equinix. By replacing traditional infrastructure, where all data traffic has to pass through a centralized data center, with distributed deployments, organizations can decrease the distance their data has to travel while also taking advantage of the performance benefits of interconnection.
Platform Equinix® offers a global footprint of data centers in 70+ metros across six continents. This means that wherever your business needs may take you, Equinix can help you deploy the digital infrastructure you need to get closer to the end users, industry partners and service providers you exchange data with.
For a closer look at our plan for helping customers address both the technological and physical causes of latency—thus gaining a competitive advantage in the digital era—read the Platform Equinix vision paper.
[1] Craig Ledo, "Measuring Performance with Equinix Fabric versus Public Internet," ESG Technical Validation commissioned by Equinix, September 2022.