The volume of data flowing into enterprise systems continues to grow exponentially, driven by emerging technologies like IoT, AI and 5G. This data represents both an opportunity and a challenge: since it can be used to support more informed decision-making, data has clear business value. However, enterprises also need a new approach to moving and storing all that data to make sure they aren’t completely overwhelmed.
With very large databases becoming increasingly common, enterprises are turning to cloud services to help meet their capacity needs. In the 2022 Equinix Global Tech Trends Survey (GTTS), 71% of IT leaders said they planned to move more functions to the cloud in the next 12 months. Of those, 65% said they’ll be migrating databases—more than any other cloud function.
To use cloud databases to their full potential, enterprises need a reliable, high-performance way to get their data to and from the cloud on both a one-time and an ongoing basis. Many of those enterprises are using private interconnection services like Equinix Fabric™ to meet that need. The GTTS found that 59% of IT leaders plan to increase their interconnection spending during 2022.
Measuring Performance with Equinix Fabric versus Public Internet
ESG’s technical review of Equinix Fabric shows it to be an easy-to-use solution that improves performance versus a public internet service.
Interconnection provides a direct, dedicated route from origin to destination, while traffic crossing the internet must pass through an unpredictable array of public gateways along its journey. For this reason, the internet is simply too unreliable to use for data-heavy workloads like database backup and restore, particularly as those databases continue to grow larger.
At Equinix, we talk a lot about the performance advantages of interconnection over the public internet, which leads many customers to ask just how big the performance gap actually is. To answer that question, we conducted a benchmark in collaboration with Oracle, measuring the performance of Oracle Cloud Infrastructure (OCI) FastConnect via Equinix Fabric against the public internet. After rigorous testing, we now have detailed metrics quantifying the performance benefits of interconnection across a variety of network scenarios.
Benchmark methodology: Accounting for different network conditions
For the benchmark use case, the combined Equinix-Oracle team performed a simulated database backup from an Equinix Metal® instance hosted in an Equinix IBX® colocation facility to an OCI Object Storage service residing in an Oracle Cloud region, and then restored it. We used a sample database size of about 1 TB. We looked at three different connectivity methods: the internet, FastConnect with jumbo frames (a 9000-byte Maximum Transmission Unit, or MTU) and FastConnect with standard frames (1500-byte MTU), with each option capped at a 10 Gbps physical port speed. Note that jumbo frames cannot be used over the public internet.
We also tested three different packet delivery classes:
- High – 99.9% packet delivery rate (0.1% packet loss)
- Medium – 99.5% packet delivery rate (0.5% packet loss)
- Low – 99% packet delivery rate (1% packet loss)
Furthermore, we used a variety of representative latencies to simulate different distances between origin and destination. This ranged from an intracity data transfer (2 ms latency or less) all the way to an intercontinental data transfer (100 ms latency).
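Though it was not part of the benchmark tooling, the interplay between latency and packet loss tested above can be sketched with the well-known Mathis model, which bounds steady-state TCP throughput for a single stream at roughly MSS / (RTT · √p). The snippet below is an illustrative back-of-the-envelope calculation only; the MSS values assume typical 40-byte IP and TCP headers:

```python
import math

def mathis_throughput_mbps(mss_bytes: float, rtt_ms: float, loss_rate: float) -> float:
    """Approximate single-stream TCP throughput ceiling (Mbps) per the
    Mathis model: throughput <= (MSS / RTT) * (1 / sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    bits_per_s = (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate)
    return bits_per_s / 1e6

# Compare standard vs. jumbo frames across the benchmark's loss classes
# at a representative cross-country latency of 75 ms.
for mtu, label in [(1500, "standard"), (9000, "jumbo")]:
    mss = mtu - 40  # assumed IP + TCP header overhead
    for loss in (0.001, 0.005, 0.01):
        mbps = mathis_throughput_mbps(mss, 75, loss)
        print(f"{label:8s} MTU={mtu} loss={loss:.1%} rtt=75ms -> {mbps:8.1f} Mbps")
```

The model makes the two central results plausible: the per-stream ceiling falls as either latency or loss rises, and a larger MSS raises it proportionally.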
Benchmark results: Interconnection performance benefits vary based on conditions
The benchmark results confirmed that interconnection offers up to a 28x performance improvement over the public internet. The results also showed that the performance gap grows significantly as latency and packet loss increase. The graphic below shows what these performance improvements might look like across different latencies within the U.S.
As the graphic shows, an organization executing a cross-country restore between Northern Virginia and Silicon Valley (represented by latency of around 75 ms) would achieve up to 15x higher performance with interconnection even at 0.1% packet loss. In real terms, this means the average restore time for the sample 1 TB database was around 24 minutes, compared to well over 6 hours to move the same amount of data via the public internet.
At 100 ms latency, the observed performance improvement for interconnection topped out at 79.2x higher than the public internet at 1% packet loss. At 0.5% and 0.1% packet loss, the performance improvement multiples were as high as 49x and 15.6x, respectively.
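As a quick sanity check on the cross-country figures quoted above, the effective throughput implied by a 24-minute restore of a 1 TB database works out to roughly 5.6 Gbps on a 10 Gbps port, and scaling that time by the quoted 15x gap lands near the six-hour mark. This is a rough calculation assuming a decimal terabyte, not a reproduction of the benchmark math:

```python
# Back-of-the-envelope check of the quoted cross-country restore numbers.
db_size_bits = 1e12 * 8            # ~1 TB sample database (decimal TB assumed)
restore_seconds = 24 * 60          # ~24 minutes via interconnection

effective_gbps = db_size_bits / restore_seconds / 1e9
internet_hours = restore_seconds * 15 / 3600   # applying the quoted 15x gap

print(f"Effective restore throughput: {effective_gbps:.2f} Gbps")
print(f"Implied public internet restore time: {internet_hours:.1f} hours")
```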
Every organization has unique characteristics and requirements for their networking and public cloud use. Organizations should evaluate interconnection as a core part of their cloud strategy in consultation with technical advisors from Equinix and Oracle and other public cloud providers they work with. For those who want a more detailed review of the benchmarking, the IT analysis and strategy firm Enterprise Strategy Group (ESG) wrote a white paper reviewing the methodology and validating the results. In addition, detailed documentation on the testing process, scripts and results is available in our GitHub repo.
Key takeaways: Interconnection is best when paired with proximity and optimized systems
There are three key lessons from the benchmark that enterprises can apply to ensure they architect their network infrastructure for top performance.
Lesson 1: Cloud interconnection moves cloud data quicker
The first lesson is the one that shows up most clearly in the benchmark results: with bandwidth held at 10 Gbps, an Equinix Fabric cloud interconnection solution consistently outperformed the public internet. The benchmark also identified multiple scenarios in which the benefits of cloud interconnection are especially pronounced.
Lesson 2: Network distance also matters
The benchmark results showed the clear link between latency and network performance. The performance benefits of interconnection may be especially pronounced at higher latencies, but that doesn’t mean enterprises don’t need to optimize for proximity. In fact, the benchmark found that just deploying in proximity to the cloud could drive performance improvements of up to 3x.
Equinix can help in this regard as well. With our global footprint of more than 240 Equinix IBX colocation data centers, customers can deploy in proximity to their chosen cloud regions. Equinix facilities are also home to cloud on-ramps from top providers, helping customers establish low-latency cloud connectivity wherever they need it.
Lesson 3: System setup can further improve performance
There are countless intricacies of system setup that can impact interconnection performance. One example is the use of jumbo frames, which carry a larger payload per packet: a 9000-byte MTU compared to only 1500 bytes for standard frames. The benchmark found that moving six times more data per packet could provide an additional performance boost of about 2x. However, jumbo frames need to be enabled across the entire enterprise network, not just at the point of cloud connectivity.
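The per-packet arithmetic behind that "six times more data" figure is straightforward. The snippet below assumes a typical 40 bytes of IP and TCP headers per packet; actual header sizes vary with options, so this is illustrative:

```python
# Payload per packet for standard vs. jumbo frames, assuming 40 bytes
# of IP + TCP headers and no TCP options.
HEADER_BYTES = 40

def payload_bytes(mtu: int) -> int:
    """Application payload carried by one packet at the given MTU."""
    return mtu - HEADER_BYTES

standard, jumbo = payload_bytes(1500), payload_bytes(9000)
print(f"Standard frame payload: {standard} bytes ({standard / 1500:.1%} of the frame)")
print(f"Jumbo frame payload:    {jumbo} bytes ({jumbo / 9000:.1%} of the frame)")
print(f"Payload ratio: {jumbo / standard:.2f}x")  # roughly the 6x cited above
```

Fewer, larger packets also mean fewer per-packet costs (headers, acknowledgments, interrupts), which is where the additional throughput gain comes from.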
In addition, many network factors were out of scope for this benchmark but are still important to consider when planning for optimized performance. In the benchmark, we tuned the number of parallel RMAN channels to 35. We also used NVMe storage disks in a RAID 0 configuration for the server hosting the Oracle database, ensuring the storage system could sustain the 10 Gbps throughput test. If factors like these aren’t optimized for your specific environment, they can become bottlenecks that negatively impact performance.
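To see why channel parallelism matters at all: a single TCP stream has a throughput ceiling set by latency and packet loss, so saturating a 10 Gbps port requires many concurrent streams. The sketch below is illustrative only; the per-stream figure is an assumption, not a benchmark measurement, and it is not how the value of 35 channels was chosen:

```python
import math

def streams_to_fill_port(port_gbps: float, per_stream_mbps: float) -> int:
    """Concurrent streams needed to saturate a port, given an assumed
    per-stream throughput ceiling (hypothetical planning helper)."""
    return math.ceil(port_gbps * 1000 / per_stream_mbps)

# With an assumed ~300 Mbps single-stream ceiling, a 10 Gbps port needs:
print(streams_to_fill_port(10, 300))  # -> 34
```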
Overall, pairing Equinix cloud-adjacent colocation with Equinix Fabric cloud interconnection, while achieving high packet delivery rates and enabling jumbo frames, can deliver a compounded performance benefit of up to 28x.
Adopting Equinix Fabric is an excellent first step toward ensuring higher cloud performance, but it’s even better when paired with low latency and optimal system conditions. To learn more about the benchmark and the network performance lessons you can take away from it, read the ESG white paper today.