The Cloud Storage Performance Dilemma

Jim Poole

The cloud is all about agility, scalability and cost savings, which is probably why enterprises are scoping it out to help them resolve their growing data storage challenges. In fact, according to Research and Markets, the global cloud storage market is expected to grow from $18.87 billion in 2015 to $65.41 billion by 2020. However, many enterprises today are hesitant to move their data assets into the public cloud, and there’s a good reason for that. No, we’re not talking about security this time. The reason is performance: slow access to data once it sits in a public cloud has slowed enterprise cloud storage adoption.

Some of the performance issues stem from the fact that many storage cloud providers are trying to achieve low cost and massive scalability through object storage, an architecture that’s not known for performing well when managing large data stores. To address this issue, companies like Amazon are increasing volume sizes and offering options that include fast solid-state drives – a promising step up. However, physics and the inherent latency of connecting to storage over the Internet or other WAN connectivity solutions tend to diminish the performance advantage of these storage enhancements.
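To see why latency, not drive speed, often dominates, consider a rough back-of-envelope model (illustrative numbers only, not any vendor’s measurements): when an application fetches many small objects sequentially, each request pays a full round trip before the data even starts flowing.

```python
# Back-of-envelope model: time to fetch many small objects sequentially.
# Illustrative assumptions only; real workloads pipeline and parallelize
# requests, but the round-trip tax per request is the same.

def fetch_time_seconds(num_objects, object_size_bytes, rtt_s, bandwidth_bps):
    """Each sequential request pays one round trip plus transfer time."""
    per_object = rtt_s + (object_size_bytes * 8) / bandwidth_bps
    return num_objects * per_object

# 10,000 objects of 100 KB each over a 1 Gbps link:
wan = fetch_time_seconds(10_000, 100_000, 0.050, 1e9)   # 50 ms RTT over a WAN
near = fetch_time_seconds(10_000, 100_000, 0.002, 1e9)  # 2 ms RTT, colocated

print(f"WAN:       {wan:.0f} s")   # ~508 s: latency dominates
print(f"Colocated: {near:.0f} s")  # ~28 s: transfer time dominates
```

On these assumed numbers, the faster drive behind the WAN link barely matters: cutting the round trip from 50 ms to 2 ms shrinks total time roughly 18x, which is the intuition behind moving data closer to compute.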

According to Scott Sinclair, an analyst with Enterprise Strategy Group, when solving for the speed of light you “need to rely on a specific innovation to solve the problem” – namely colocating data very close to compute resources, introducing some sort of network optimization or caching mechanism, or doing all of these things.

An Interconnection-led Approach to Improving Storage Performance

Many companies do not have the interconnection infrastructure to support direct and secure connections between data, users, applications, analytics and clouds. Legacy corporate networks tend to be wired to backhaul all traffic over WANs or the Internet to the corporate data center to get access to large data stores. As Scott Sinclair suggests, proximity between data and users, compute resources, network and storage optimization solutions, and clouds can be a step toward resolving some of these issues.

By taking an “interconnection-first” approach to cloud storage (private and public) and keeping it close to users, applications and analytics, you can harness the high performance that proximate connections deliver. Many enterprises are leveraging hybrid cloud infrastructures to directly and securely interconnect sensitive data in private clouds with public cloud storage over high-speed, low-latency connections to increase performance, while reducing bandwidth costs.

For example, our customers that leverage an Interconnection Oriented Architecture™ on Platform Equinix can realize low-latency connectivity (<10 milliseconds on average) in most major metros worldwide and reduced bandwidth costs by up to 40%.

The Equinix Data Hub places large data repositories close to users, compute, applications, analytics, and network and cloud providers. The Equinix Cloud Exchange enables you to bypass the Internet and interconnect your data directly and securely with multiple cloud-based application, compute and storage services over high-speed, low-latency virtualized connections via a single physical port.


Wherever you decide to place your storage – on-premises, in a colocation facility, or in a private or public cloud – proximity is the key to crafting the best strategy for a high-performance storage infrastructure.

Download the Research and Markets report on the global cloud storage market.

Subscribe to the Equinix Blog