By Chris Sharp, Jon Lin, and Chris Hunsaker (part 1 of a 5-part series)
In today’s digital economy, performance can be a strategic differentiator for your company.
Whether you’re a bank handling millions of clients online, a retailer dependent on your website to drive sales, or a cloud computing company powering enterprises, the end user experience your performance delivers is one of the key criteria on which your company will be judged.
This blog post is part 1 of a 5-part series on how to optimize application performance by reducing latency. The full whitepaper will be available for download in its entirety with part 5.
It will show you how leveraging Equinix as the foundation for your services can reduce latency by 15% globally, reduce downtime by up to 80%, and increase predictability, all without investing the time or expense to change or redesign your application architecture.
There are numerous examples of how performance can impact revenue:
- Amazon – “Every 100ms delay costs 1% of sales” – for 2009 that translates into $245 million
- Mozilla shaved 2.2 seconds of load time off its landing pages and increased download conversions by 15.4%, translating into an additional 60 million downloads each year
- Microsoft found that adding 500ms of delay to its page loads resulted in a 1.2% loss in revenue per user
- When Shopzilla reduced its page load time by 5 seconds, it saw an increase of 25% in page views and a 7-12% increase in revenue
- An extra 10ms of latency could cost U.S. brokerages 10% of their revenue
Performance isn’t just about the speed of a site; availability and consistency are also important. Being able to deliver consistent, reliable service is fundamental to customer conversion and retention.
From consumers trying to buy Christmas gifts to multinational companies running computational models, all types of customers become frustrated when websites or cloud services are slow or pages fail to load.
Providing your customers a consistent experience, or in the case of the enterprise, guaranteeing that performance and consistency with a Service Level Agreement (SLA), translates into increased revenue by improving the end user experience and reducing resistance from corporate buyers.
The speed of your site is judged on responsiveness to actions on the page (script requests, image renders, etc.) and on how quickly users can transition from page to page (loading a new page).
The elements of speed can be further broken down: the network latency and bandwidth between your end users and your site, the performance of your server infrastructure in responding to a request, and how quickly the user’s browser can render your site based on how the web page is coded.
While there is a tremendous body of knowledge on how to increase bandwidth, optimize servers, and code web pages efficiently, network latency is generally considered immutable. But new studies show that reducing latency has a tremendous effect on page load times, even more so than adding bandwidth: every 20ms of reduced network latency results in a 7-15% decrease in page load time.
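To see why latency has such an outsized effect, consider that loading a page involves many sequential network round trips, each of which pays the full round-trip time. The toy model below (a back-of-the-envelope sketch, not a measurement methodology from the whitepaper; all numbers are illustrative assumptions) shows how a 20ms latency reduction can shrink total load time by well over 7%:

```python
def page_load_seconds(rtt_ms, round_trips, server_ms, render_ms):
    """Toy model: total load time = network round trips + server + render.

    Assumes each of `round_trips` requests costs one full round-trip
    time; real browsers parallelize requests, so treat this as a
    simplified upper-bound sketch rather than a prediction.
    """
    network_ms = rtt_ms * round_trips
    return (network_ms + server_ms + render_ms) / 1000.0

# Illustrative baseline: 80ms RTT, 25 round trips, 300ms server, 500ms render
baseline = page_load_seconds(80, 25, 300, 500)   # 2.8s
# Shave 20ms of RTT, e.g., by moving infrastructure closer on the network:
improved = page_load_seconds(60, 25, 300, 500)   # 2.3s
reduction = (baseline - improved) / baseline     # roughly 18% faster
```

Note that doubling bandwidth in this model only shrinks transfer time, which is a small slice of the total once assets are modest in size; the round-trip count multiplies every millisecond of latency, which is why the latency lever dominates.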
Conventional advice on reducing latency recommends using a third-party provider such as a Content-Delivery Network (CDN) to distribute content and leveraging their infrastructure to get geographically closer to the end user.
While a CDN can help accelerate static content and effectively distribute video, the increasingly dynamic nature of the web (social media, real-time API access, etc.) reduces a CDN's effectiveness, and for a real-time cloud application it may not help at all.
We’ll show you how it’s possible to reduce network latency across the internet by housing application infrastructure in Equinix data centers and leveraging the unique network density located in them.
Availability and Consistency
Availability of a website isn’t solely dependent on how well you operate your infrastructure; it is also affected by the performance of your internet service provider (ISP) and other service providers.
Disputes between carriers have sometimes resulted in a fragmented internet, with some end users unable to access content while others are unaffected.
Likewise, achieving consistent site performance is based on an amalgamation of internal factors (e.g., proper capacity planning, load management) and external factors (e.g., internet performance, local traffic).
Again, while the principles of optimizing the internal factors are well understood, optimizing the external factors has seemed a secret kept by only the largest content providers, such as Google or Amazon.
In future posts, we’ll describe how these large content companies increase their control over external forces, and how your company can reproduce this easily and cost-effectively.