It boggles the mind to think of the lengths organizations go to, and the expense they incur, to reduce application latency. Survey results from Turbonomic reveal that approaches to solving latency run the gamut, from system and application tactics to network optimization fixes.
Unfortunately, these solutions tend to be “one-trick ponies” that either fail to address the root cause of most latency issues (distance) or chip away at some aspect of system, application or local network latency with diminishing returns.[i] The most direct way to reduce latency’s impact on network speed and on system and application performance is simply to remove the distance between your users, applications and partners. By distributing latency-sensitive application workloads at the edge, closer to the people, clouds, data, things and business ecosystems that interact with them the most, you get the laws of physics working in your favor.
Removing the distance with private interconnection at the edge
As user and application requirements for high-volume workload and data exchanges increase, tolerance for latency decreases. We’ve seen companies moving application workloads to the cloud, only to find that they haven’t really solved their latency problems. The problem is that shifting applications to the cloud does not remove the underlying issue, which is the distance between cloud services, users and value chain partners. However, if you place IT traffic exchange points in strategic metro locations that are proximate to employees, customers and third-party business partners, you can significantly reduce the latency between your users and cloud apps and services. Removing the distance also reduces latency between interconnected digital consumers (enterprises) and producers (service providers) in dense ecosystems of network, cloud, SaaS and supply chain partners. And as high-frequency trading, digital content and online advertising industries have demonstrated, the performance needed to compete in your given industry can be achieved by placing your applications in colocation data centers that privately connect users, supply chains and workloads within the same facility or campus, providing the shortest distance to the largest number of counterparties.
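To see why distance dominates, consider the propagation delay that physics imposes on a fiber path. The sketch below is a back-of-the-envelope calculation, not from the source: the distances and the ~200,000 km/s figure (light in fiber travels at roughly two-thirds of c) are illustrative assumptions.

```python
# Back-of-the-envelope: the latency floor that distance alone imposes.
# Light in optical fiber travels at roughly 2/3 the speed of light,
# about 200,000 km/s. All figures here are illustrative assumptions.

FIBER_SPEED_KM_PER_S = 200_000  # approximate signal speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over a fiber path, in ms.
    Real RTT is higher: routing detours, queuing, serialization."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# A cross-continental path vs. a metro-edge path:
print(min_rtt_ms(4000))  # ~40 ms best case over 4,000 km
print(min_rtt_ms(50))    # ~0.5 ms best case within a metro
```

No amount of system or application tuning recovers that 40 ms floor; only shortening the path does, which is the argument for placing workloads at a metro edge.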
Increasingly, we see enterprises bypassing the internet and privately interconnecting to network providers to optimize their networks for digital business. According to the Global Interconnection Index (the GXI) Volume 2, an annual study published by Equinix,[ii] network providers make up the largest share (66%) of the partners to which enterprises want to directly and securely interconnect. And cloud and IT providers are the fastest-growing ecosystem in installed Interconnection Bandwidth capacity, with a compound annual growth rate (CAGR) of 98% from 2017 to 2021.
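A 98% CAGR means capacity roughly doubles every year. The short sketch below makes that concrete; the 100 Tbps starting value is an illustrative assumption, not a figure from the GXI.

```python
# What a 98% compound annual growth rate looks like:
# capacity roughly doubles every year. The starting value of
# 100 Tbps is an illustrative assumption, not a GXI figure.

def project(start: float, cagr: float, years: int) -> float:
    """Compound growth: start * (1 + cagr) ** years."""
    return start * (1 + cagr) ** years

for year in range(5):  # 2017 through 2021, four growth periods
    print(2017 + year, round(project(100, 0.98, year), 1))
```

Over the four growth periods from 2017 to 2021, a 98% CAGR compounds to roughly 15x the starting capacity (1.98^4 ≈ 15.4).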
Interconnection Bandwidth (Tbps) by Counterparty
Interconnection Bandwidth is the total capacity provisioned to privately and directly exchange traffic with a diverse set of counterparties and providers at distributed IT exchange points inside carrier-neutral colocation data centers.
Proximity + edge = lower latency and costs
Let’s look at an Equinix customer in the insurance industry that achieved lower latency by pursuing a distributed edge strategy. The company shifted its IT infrastructure from a centralized core architecture to a distributed, metro edge architecture. By leveraging Interconnection Oriented Architecture™ (IOA™) best practices on Platform Equinix®, the company was able to directly and securely connect users to local clouds via private interconnection hubs, which are in turn interconnected to create a “fabric” between regional metros.
By converting its IT infrastructure from centralized to distributed, the company lowered latency, reduced costs and optimized bandwidth per employee:
- 40% lower roundtrip latency.
- 60% cost savings in network optimization and cloud connectivity, resulting in an annual savings of $6.7 million.
- An increase from 5% of the recommended bandwidth per employee to more than 150%.
These game-changing results demonstrate the significant return on investment (ROI) that distributing your IT architecture on a global interconnection and colocation platform like Equinix can deliver. It easily outweighs the overhead of managing multiple IT services or application instances in different locations, and it sets you up to achieve maximum digital business advantage.
Take 5 steps toward a more distributed IT architecture
Leveraging an IOA strategy on Platform Equinix is the most effective path you can take toward a more flexible, high-performing and scalable distributed IT architecture. The following five steps, based on the IOA Network Blueprint, will enable you to optimize your network for lower application/workload latency while harnessing local digital and business ecosystem connectivity. As illustrated below, shifting from constrained, point-to-point connectivity to optimized multipoint interconnection via direct, private IT traffic exchange points between users and local services eliminates the issues presented by distance.
Network connectivity before vs. after interconnection
Step 1: Localize and Optimize the Network
Establish one or more interconnection hubs (“edge nodes”) at a metro edge, closer to users, partners and customers, and to where business is conducted.
Step 2: Segment Traffic Flows
Prepare for multicloud and partner network integration as well as digital service flow isolation.
Step 3: Establish Multicloud Connectivity
Integrate cloud services value chains, applications and data across local, cross-connected cloud providers, accessing SaaS services as needed.
Step 4: Offload the Internet at the Edge
Bring all traffic into the interconnection hub(s) and benefit from added control and reduced risk, while routing public internet traffic directly with internet peering.
Step 5: Connect to Ecosystems
Cross connect to business partners and ecosystems for digital commerce and/or data exchanges.
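The shift these steps describe, from point-to-point circuits to a shared interconnection hub, also changes how connectivity scales. The sketch below is an illustrative counting exercise, not from the source: fully meshing n counterparties with dedicated circuits needs n(n-1)/2 links, while meeting them all at one hub needs only n cross connects.

```python
# Why hub-based interconnection scales better than point-to-point:
# a full mesh among n counterparties needs n*(n-1)/2 dedicated
# circuits, while a shared interconnection hub needs only n
# cross connects. The party counts below are illustrative.

def full_mesh_links(n: int) -> int:
    """Dedicated circuits to connect every party to every other."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Cross connects needed when all parties meet at one hub."""
    return n

for n in (5, 20, 100):
    print(f"{n} parties: mesh={full_mesh_links(n)}, hub={hub_links(n)}")
```

The gap widens quadratically as the ecosystem grows, which is why dense colocation ecosystems favor the hub model.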
These five steps not only optimize your network for lower latency but also provide additional benefits such as greater application performance, security and scale.
Rather than continuing to throw money at data center infrastructure fixes with diminishing returns, solve application latency by architecting for the edge. Learn more about how to harness the power of private interconnection at the edge by reading the Global Interconnection Index Volume 2.