Editor’s Note: This blog was originally published in May 2019. It has been updated to include the latest information.
Can you really compare network speed with network bandwidth? Though interrelated, they are two very different things. Network speed measures the rate at which data travels from a source system to a destination system, while network bandwidth is the maximum amount of data that can be transferred per second (essentially, the size of the pipe). Put the two together and you get network throughput: the amount of data that actually arrives at its destination per second.
One would assume that high-bandwidth networks would be fast and provide excellent throughput. In reality, this is not always the case—not when you throw latency into the mix. Latency is the delay packets experience while moving through a network. It’s typically the culprit behind poor application response time and frustrated users. There is always some latency overhead in a network because of physics—specifically, the speed of light.
Light travels at about 200,000 kilometers per second through a single optical fiber, roughly 31% slower than the speed of light in a vacuum. This means that every 100 km (~62 miles) a packet travels over a network adds about half a millisecond (0.5 ms) to the one-way latency, or 1 ms to the round-trip time. You also have to consider that cable systems don’t always follow the most direct route between two points. For example, a packet might traverse 12 km of cable to cover 5 km of point-to-point distance. And as the distance between two points grows, latency grows with it.
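The rule of thumb above is easy to turn into arithmetic. Here is a minimal back-of-the-envelope sketch (the function names are our own, not from the original post) using the ~200,000 km/s fiber figure:

```python
# Back-of-the-envelope fiber propagation delay, based on the
# ~200,000 km/s figure cited in the text. Real-world latency is
# higher once hops, queuing and retransmissions are added.
FIBER_SPEED_KM_PER_MS = 200.0  # 200,000 km/s == 200 km per millisecond

def one_way_latency_ms(cable_km: float) -> float:
    """Propagation delay over a fiber path of the given cable length."""
    return cable_km / FIBER_SPEED_KM_PER_MS

def round_trip_latency_ms(cable_km: float) -> float:
    """Round-trip propagation delay over the same path."""
    return 2 * one_way_latency_ms(cable_km)

# 100 km of fiber adds ~0.5 ms one way, ~1 ms round trip:
print(one_way_latency_ms(100))     # 0.5
print(round_trip_latency_ms(100))  # 1.0

# Cable rarely runs in a straight line: 12 km of cable to cover
# 5 km of point-to-point distance is a route factor of 2.4.
print(one_way_latency_ms(12))      # 0.06
```

Note that this is propagation delay only, the physics floor that no amount of bandwidth can buy back.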
The distance between two systems is an important factor that contributes to latency, but it’s not the only one. Others include the number of hops (bridge, router or gateway points) along the way, large packet sizes (such as video files or encrypted data), jitter (the variance in time delay between packets) and network congestion (too many bits in the pipe). All of these factors can cause data packets to be dropped and retransmitted, adding still more latency. As more packets are retransmitted over long distances, they consume greater amounts of available bandwidth, degrading network performance.
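One standard way to quantify how loss and distance together cap effective throughput is the Mathis approximation for TCP (throughput ≤ MSS / (RTT · √p)). It isn’t mentioned in the original post, but it illustrates the point: with the same packet loss, a longer round trip delivers proportionally less of the pipe’s capacity.

```python
import math

def mathis_throughput_mbps(mss_bytes: float, rtt_ms: float,
                           loss_rate: float) -> float:
    """Approximate TCP throughput ceiling per the Mathis et al. model:
    throughput <= MSS / (RTT * sqrt(p)). Returns megabits per second."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return bytes_per_s * 8 / 1e6

# Same 1460-byte segments, same 0.01% loss; only the RTT changes.
print(mathis_throughput_mbps(1460, 10, 0.0001))   # short haul: ~116.8 Mbps
print(mathis_throughput_mbps(1460, 100, 0.0001))  # long haul: ~11.7 Mbps
```

Ten times the round-trip time means one tenth of the achievable throughput, regardless of how much raw bandwidth is provisioned, which is why distance and loss matter more than the size of the pipe.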
Emerging 5G networks will help support the trend for real-time applications at the edge by reducing the latency from the device to the nearest antenna/tower. This is just one segment a packet will cross in the journey to a cloud or data center. Providers are relying on proximity and interconnection to optimize the end-to-end workflow, including data center transmission and end-node processing.
The question that network architects must answer is: “How much latency can your company afford?” The answer is: “It depends on exactly what you’re trying to accomplish.” Different use cases have different levels of latency sensitivity. For example, connected vehicles need access to near real-time data from a variety of sources to identify and safely avoid vulnerable road users (VRUs) such as pedestrians and cyclists. Minimizing latency is one of the requirements for making self-driving vehicles a viable use case at scale.
In addition, high-frequency trading (HFT) firms need to reliably execute trades in a matter of mere microseconds. If latency prevents them from doing so, opportunities will pass them by before they can act on them.
To protect your valuable investment in network bandwidth, you need to reduce latency. The most effective way to do that is to shrink physical distance, combining proximity between your business and its critical counterparties with direct, secure private interconnection.
Colocation for proximity + private interconnection = latency reduction
Proximity is defined as nearness in space, time or relationship. So how does proximity factor into the speed/bandwidth equation? Closing the physical distance between two points automatically reduces latency. Lower latency means fewer retransmissions eating into available bandwidth and faster application response times. Since reducing latency is crucial for digital business, shortening the distance between users, applications, data, clouds, IoT devices, partners, customers and other participants results in greater effective network speed and improved application performance.
Proximity to digital and business ecosystems in a colocation data center delivers other high-value benefits to your business. Interconnecting your business directly and securely with counterparties within globally distributed IT traffic exchange points at the digital edge accelerates digital transformation. Proximity to increasing amounts of data that can be shared with employees, partners and customers at the edge is also advantageous, as it provides faster and more accurate insights into business and customer requirements for greater optimization and revenue growth.
Interconnection is the key to digital success
Equinix research summarized in the Leaders’ Guide to Digital Infrastructure shows that leading digital organizations have one thing in common: they’ve all built out digital infrastructure to interconnect the digital core, integrate with digital ecosystems and interact at the digital edge. Interconnection plays a key role in making this possible; it helps offset the performance impact of physical distance, which makes it easier for businesses to scale the distributed digital infrastructure they need across their global operations.
The Leaders’ Guide to Digital Infrastructure also found that interconnection can drive the following benefits:
- 6x increase in addressable market revenue through access to new markets
- A minimum 30% reduction in latency
- 60% faster infosecurity and IT audits
By partnering with Equinix to distribute your IT infrastructure at the digital edge and leverage our proximity to vital digital and business ecosystems, you can reduce costs and realize a greater ROI from interconnection. Equinix Fabric®, our software-defined interconnection solution, makes it quick and easy to set up new connections to your ecosystem partners on demand. This could be the first step toward increasing application speed, performance and ultimately customer satisfaction. On average, we’ve seen customers save 60% on their networking costs, which frees up resources for them to invest in their hybrid multicloud infrastructure and digital transformation.
To learn more about how organizations are gaining competitive advantage with the help of interconnected digital infrastructure, read the Leaders’ Guide to Digital Infrastructure today.
 Paul Brodsky, “The Speed of Light Never Changes—Except When it Does,” TeleGeography, July 10, 2017.