Can Colocation and Interconnection Improve Your Network Performance?

Gene McColm

As cloud adoption increases and becomes the catalyst for many companies’ digital transformation, IT organizations see the network performance between their on-premises data center infrastructure and their cloud provider(s) as a critical success factor. Shared public internet connections can introduce latency and risk that severely impact application performance, and with organizations leveraging almost five clouds on average,[i] that can mean a lot of connections and complexity. So the question is: can digital businesses realize performance gains when accessing clouds from a colocation data center and interconnection platform, such as Platform Equinix®, versus an on-premises data center connecting to the cloud via the public internet?

Equinix and Principled Technologies (PT), a leading technology marketing, testing and learning services firm, teamed up to get some answers. PT tested three Amazon Web Services™ (AWS) EC2 interconnection scenarios, using the Principled Technologies data center and an Equinix Solution Validation Center™ (SVC), to see whether there was a significant difference in data transfer performance when businesses connect to cloud partners over the public internet versus private interconnection. The results were striking (see the numbers below).

The test bed

The Principled Technologies engineers first constructed a test bed from a technology stack of privately owned resources: virtualized compute, NetApp® storage and F5 BIG-IP® networking. To simulate hybrid cloud scenarios, the environment was set up to run a distributed e-commerce application, with the back-end database server residing at the Principled Technologies data center and order-entry clients residing in the AWS public cloud.
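To make the workload concrete, the sketch below shows the kind of order-entry client such a test could use: it drives a remote web application and records per-request wait time, the metric the comparisons that follow are built on. The endpoint URL and payload are hypothetical placeholders; the report does not describe PT’s actual workload generator, so treat this as an illustrative Python sketch only.

```python
# Minimal sketch of an order-entry client that drives a remote web
# application and records per-request wait time. The endpoint and payload
# are hypothetical; PT's actual workload generator is not described here.
import json
import statistics
import time
import urllib.request

ORDER_ENDPOINT = "https://orders.example.com/api/order"  # hypothetical

def place_order(item_id: int, quantity: int) -> float:
    """Send one order and return the observed wait time in seconds."""
    payload = json.dumps({"item": item_id, "qty": quantity}).encode()
    request = urllib.request.Request(
        ORDER_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(request, timeout=30) as response:
        response.read()  # wait for the full response body
    return time.perf_counter() - start

def run_clients(orders: int = 100) -> None:
    waits = [place_order(item_id=i, quantity=1) for i in range(orders)]
    print(f"median wait: {statistics.median(waits) * 1000:.1f} ms")
    print(f"p95 wait:    {sorted(waits)[int(len(waits) * 0.95)] * 1000:.1f} ms")

if __name__ == "__main__":
    run_clients()
```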

For the on-premises testing, the test bed was located within the Principled Technologies data center in Durham, NC, and connected to the AWS public cloud in two test configurations: via the public internet and via a high-speed, dedicated connection through a local network service provider (NSP).

For the private, dedicated interconnection testing, PT moved the test bed to an SVC inside an Equinix International Business Exchange™ (IBX®) colocation data center with the assistance of the Equinix Global Solutions Architect team. Equinix IBX data centers provide high-speed, low-latency direct connections for secure, public and hybrid cloud interconnection with resources like AWS.

By moving the test bed to Equinix, PT effectively established a Performance Hub® for the IT infrastructure, which allows its application to be located within the Equinix data center. Doing this enabled PT to take advantage of private access to AWS through AWS Direct Connect. Establishing Performance Hub locations in well-connected data centers, such as Equinix, helps businesses efficiently deploy IT resources at the edge, close to end users, and dramatically optimize global network performance.

Scenario #1: ISP Shared Public Connection

For the first two hybrid-cloud connectivity scenarios, PT set up a web application that leveraged the AWS public cloud for distributed order entry, with client systems generating the traffic. The application’s data lived on an on-premises database server, backed by NetApp storage in Principled Technologies’ data center, while the order-entry clients producing the traffic ran in AWS. The database server was protected behind an F5 BIG-IP 4000 gateway appliance, which limited the types of traffic that could reach it.

In the first testing scenario, the clients accessed the data stored at Principled Technologies’ data center via a web application hosted in the AWS cloud and connected to the public internet through an internet service provider (ISP). Processing orders and transferring large files on this public internet fiber connection meant that prioritized workflows traveled alongside everything else on the public internet, so the routes, and any detours, the information took before reaching its destination could not be predicted or controlled.
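One simple way to see that unpredictability is to trace the path packets actually take to the application. The sketch below is a hypothetical illustration, not part of PT’s methodology: it shells out to the standard traceroute tool, and running it a few times against a public-internet endpoint typically shows the intermediate hops and their latencies shifting from run to run.

```python
# Observe the hop-by-hop path packets take over the public internet by
# shelling out to the system traceroute tool. The destination hostname is
# a placeholder; successive runs can show different intermediate hops.
import subprocess

def trace_path(host: str = "orders.example.com") -> str:
    result = subprocess.run(
        ["traceroute", "-n", host],   # -n: print hop addresses, skip DNS lookups
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    print(trace_path())
```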

Scenario #2: NSP to Equinix to AWS Direct Connect

In the second testing scenario, PT kept the database server, private virtualized compute, backing NetApp storage and F5 BIG-IP 4000 appliance on-premises. It then switched to an NSP with a private, dedicated fiber Ethernet connection into the Equinix SVC, using the Equinix Performance Hub and AWS Direct Connect for interconnection to AWS. The team also used the F5 BIG-IP to establish the border gateway protocol (BGP) session that AWS Direct Connect requires, so that packets could be routed between the AWS private network and the private network at Principled Technologies over a dedicated private connection. This allowed them to separate the prioritized, private workload traffic from the public “everyday” business traffic. Everyday activities such as employees streaming media and clients or coworkers uploading or downloading large files no longer affected response times, because the prioritized traffic bypassed the crowded public internet connection.
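On the AWS side, the Direct Connect link is represented by a private virtual interface that carries the BGP parameters the F5 BIG-IP peers against: the VLAN tag, the customer ASN, an optional MD5 authentication key and the peering addresses. The sketch below shows how such an interface could be provisioned with boto3; every identifier is a placeholder, and the report does not state how PT or its NSP actually provisioned the AWS side, so read this as an assumption-laden illustration rather than the documented setup.

```python
# Hedged sketch: provisioning the AWS-side private virtual interface that
# carries the BGP session for a Direct Connect link, using boto3.
# All identifiers (connection ID, VLAN, ASN, addresses, gateway ID) are
# placeholders; the source does not describe PT's actual provisioning.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

response = dx.create_private_virtual_interface(
    connectionId="dxcon-EXAMPLE",               # Direct Connect connection landing at Equinix
    newPrivateVirtualInterface={
        "virtualInterfaceName": "performance-hub-vif",
        "vlan": 101,                             # 802.1Q tag agreed with the provider
        "asn": 65001,                            # customer-side BGP ASN
        "authKey": "bgp-md5-secret",             # optional BGP MD5 authentication key
        "amazonAddress": "169.254.255.1/30",     # AWS side of the BGP peering
        "customerAddress": "169.254.255.2/30",   # F5 BIG-IP side of the peering
        "virtualGatewayId": "vgw-EXAMPLE",       # virtual private gateway on the target VPC
    },
)
print(response["virtualInterfaceState"])
```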

Scenario #3: Equinix to AWS Direct Connect

In the third testing scenario, PT moved the private components of its hybrid cloud infrastructure into the Equinix IBX data center SVC and interconnected to the AWS infrastructure via the Equinix Performance Hub and AWS Direct Connect. It used the F5 BIG-IP both to establish the BGP session to AWS and as an intermediary device connecting back to the Principled Technologies remote, on-premises stack via the dedicated connection.

The results are in

Principled Technologies found performance improvements in three areas by moving its test IT infrastructure to an Equinix IBX colocation data center with dedicated interconnection, bypassing public internet connectivity:

  • Up to 48% greater order processing potential – increasing the number and transaction speed of requests
  • Up to 41% decreased application wait times
  • Up to 96% lower network-related wait times

Dedicated interconnection enabled by establishing a Performance Hub at Equinix with access to AWS Direct Connect offered a more stable, predictable, lower-latency and higher-throughput option for connecting to AWS than the public ISP configuration. It also enabled better transactional database performance than both the ISP and dedicated NSP connections.

Establishing strategic IT locations via Performance Hub sites on a globally interconnected platform such as Equinix allows you to extend your data center to the digital edge, where commerce, population centers and digital ecosystems meet. By locally accessing your network and cloud service providers and data under one roof using high-speed, secure and reliable interconnection, you can significantly improve cloud performance for a variety of applications and use cases, such as data replication and backup, application failover, and disaster recovery.

According to Principled Technologies, “As your company works hard to keep ahead of both demand and the competition, long network wait times can kill any forward momentum your business is making. Consider re-evaluating your approach to hybrid-cloud applications and look closely at interconnection at the cloud edge. It’s time to think outside your on-premises data center.”

Read the Principled Technologies report on Equinix Performance Hub and AWS Performance.

[i] RightScale, “2018 State of the Cloud Report.”

 
