Going back in time by traveling faster than the speed of light…developing telekinetic abilities à la Roald Dahl’s Matilda or by using “The Force”…and the promise of an efficient hybrid multicloud architecture. What do all these things have in common? The answer: They’re all nothing more than science fiction.
Seriously though, despite how often the words have been uttered in meeting rooms or written in technical articles or analyst studies, digital leaders are realizing that “hybrid multicloud” as it’s been sold to us simply isn’t working. It may sound great on paper, but actually manifesting it in a way that delivers strategic and operational benefits has proven to be quite difficult.
One reason for this is that moving data and workloads between clouds, and deciding which workloads and data should go where, is a complex task whose right answer is always changing. Moreover, customers use a wide array of different applications, many of which they consume “as a service.” This limits their practical ability to engineer flexibility into the solution.
The promise of hybrid multicloud is all about maximizing flexibility—making it not just possible to choose the right cloud for the right job, but easy or even automatic. However, if it’s difficult and expensive to move data and workloads between clouds, then most enterprises simply won’t, which means that even the foundation for hybrid multicloud will remain out of reach.
To help enterprises achieve the true promise of hybrid multicloud, forward-looking industry leaders have proposed and begun discussing the concept of “supercloud.” This conceptual framework would reduce complexity and increase flexibility in multicloud environments by treating ALL possible landing spots for cloud workloads as one holistic workspace, and then optimizing each application and workload accordingly.
But for this idea of a supercloud to become reality, someone would have to design and build an abstraction service that would make a “write once, deploy to any” paradigm possible. This service would then automatically reengineer and efficiently redeploy workloads to meet the requirements of the target cloud and ensure the right infrastructure and data sets are in place to support workloads after they move.
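As a rough illustration of what such an abstraction service might look like, here is a minimal Python sketch. Every name in it (`Workload`, `CloudTarget`, `Supercloud` and so on) is a hypothetical assumption for illustration, not part of any real product: a cloud-agnostic workload description, per-cloud adapters that handle the provider-specific reengineering, and a thin layer that treats every registered cloud as one holistic workspace.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Workload:
    """A cloud-agnostic description of an application workload."""
    name: str
    cpu_cores: int
    memory_gb: int


class CloudTarget(ABC):
    """One possible landing spot for a workload (e.g., a hyperscaler region)."""

    @abstractmethod
    def deploy(self, workload: Workload) -> str:
        """Reengineer and deploy the workload for this cloud; return a deployment ID."""


class ExampleCloud(CloudTarget):
    """Hypothetical adapter for a single provider."""

    def __init__(self, provider: str):
        self.provider = provider

    def deploy(self, workload: Workload) -> str:
        # A real adapter would translate the workload into provider-specific
        # artifacts (images, IaC templates, data placement) before deploying.
        return f"{self.provider}:{workload.name}"


class Supercloud:
    """Treats all registered clouds as one workspace: write once, deploy to any."""

    def __init__(self, targets: List[CloudTarget]):
        self.targets = targets

    def deploy_anywhere(
        self, workload: Workload, pick: Callable[[List[CloudTarget]], CloudTarget]
    ) -> str:
        # 'pick' encodes the optimization policy (cost, latency, compliance…);
        # the caller never writes provider-specific deployment code.
        target = pick(self.targets)
        return target.deploy(workload)
```

The design choice to keep the placement policy (`pick`) separate from the adapters is what would let a workload be retargeted to a different cloud later with no additional engineering work.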
What separates supercloud from today’s hybrid multicloud?
When we speak of multicloud today, we’re usually referring to the relatively uninspired idea that different parts of the business may choose different clouds based on individual preference, or that a business may deploy the same solution to more than one cloud to meet availability or resiliency requirements. However, to get the maximum benefit from hybrid multicloud architectures, enterprises would ideally replace this piecemeal approach with one based on a holistic strategy. This means being able to move individual applications between clouds quickly and efficiently—or even better, designing an application to automatically distribute workloads across clouds to optimize performance, costs or other factors.
We all know the major hyperscalers have released powerful services that help customers execute their business strategies with greater performance and flexibility than ever before. Yet despite the advancements and advantages of any one cloud provider, no one cloud is right for every need, no one cloud is wholly insulated from the potential for mass outages or disruption, and no single cloud is likely to remain the same, with new features, capabilities and pricing models being introduced all the time. By adopting supercloud-enabled applications, enterprises would be able to avoid vendor lock-in, pick best-of-breed services from across clouds to address their business needs and adapt accordingly as those needs change over time.
A supercloud implementation would increase flexibility
In our previous blog post on the topic of dedicated cloud, we discussed the drawbacks of building a single large data estate in one cloud: Data gravity combined with high egress fees would force organizations to use only that cloud to drive associated workloads. Hyperscale cloud vendors know what they’re doing when they make it cheap and easy to move data onto their cloud, but difficult and expensive to move it off.
In contrast, this conceptualization of supercloud would allow customers to simply retarget or rebalance their workload deployments to a competitive cloud provider with no additional engineering work required. Instead of facing the difficult challenge of rebuilding and rearchitecting an entire solution before they can move it, customers would regain a measure of power against the hyperscale behemoths.
Supercloud would support greater resiliency
If you’re dependent on a single cloud provider for your entire IT infrastructure, then you’re vulnerable to any outages that cloud may experience. The only way to ensure cloud resiliency is by working with more than one provider. Once again, this is easier said than done. There’s currently no simple method for failing over from one cloud to another in the event of an outage.
Today’s enterprises often have to get creative to design a resilient cloud infrastructure. Some have even gone as far as running duplicate active-active instances on two different clouds—essentially paying double just to make sure their fallback environment would be available if they needed it. This solution is far from ideal, but some view it as their only option to support mission-critical business processes.
With supercloud, enterprises could get the simple failover they need to enable true multicloud resiliency—and they wouldn’t have to pay double to do it.
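To make the contrast with active-active duplication concrete, here is a hypothetical Python sketch of the failover pattern described above. The `Cloud` class and `failover_deploy` function are illustrative assumptions, not a real API; the point is that the standby cloud costs nothing until an outage actually forces a redeployment.

```python
from typing import List


class Cloud:
    """Hypothetical stand-in for a cloud provider reachable via an abstraction layer."""

    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy  # in practice, determined by live health checks

    def deploy(self, workload: str) -> str:
        return f"{self.name}:{workload}"


def failover_deploy(workload: str, clouds: List[Cloud]) -> str:
    """Deploy to the first healthy cloud in priority order.

    Unlike running duplicate active-active instances, nothing is deployed to the
    fallback clouds until the primary is actually unavailable.
    """
    for cloud in clouds:
        if cloud.healthy:
            return cloud.deploy(workload)
    raise RuntimeError("no healthy cloud available")
```

Under this sketch, an outage on the primary simply causes the next call to land the workload on the backup, rather than requiring a pre-paid duplicate environment.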
How service providers can help build supercloud
The promise of supercloud is clear: Making it easy to move workloads across clouds would give enterprises the flexibility to ensure they’re always meeting their top business priorities.
The question of how to create this supercloud is less obvious. The reason we’re only talking about supercloud and not enjoying its benefits is that someone—or more likely many someones—still has to build the abstraction layer that will make it possible. Who’s going to build it, and how, remain open questions.
However, there are a couple of things we know for sure. First, it’s unlikely enterprises would be able to dedicate the time and resources needed to build their own supercloud abstraction layer for all their applications. It’s also highly unlikely that any of the cloud providers would do it themselves; they’re directly incentivized to avoid it unless one of them begins to wholly dominate the space in an anti-competitive way. A third-party managed service provider could build the abstraction layer as a standalone offering, but so far, none have announced plans to do so, and the task probably falls outside the limits of their available resources anyway.
In our view, the candidates most likely to lead the way toward a practical manifestation of supercloud-like capabilities are software solution providers. Just like their enterprise customers, they have a vested interest in avoiding lock-in to any particular cloud vendor, and a completely cloud-agnostic, cloud-flexible capability would be a strong selling point when they engage large enterprise customers.
The problem of how to do it would also be much easier for solution providers to solve: They could build those mechanisms directly into their products, and the total scope of work would be limited to how their own products operate. For example, companies like Snowflake or Databricks would only have to worry about architecting and optimizing their own solutions to leverage multiple clouds in this completely seamless way.
Supercloud requires a neutral cloud environment
We know the demand for supercloud exists: Today’s enterprises are thirsting for a more practical and effective way to deploy true hybrid multicloud solutions, and forward-thinking solution providers are now in a position to deliver. By investing the time and resources needed to build supercloud-ready offerings, solution providers can give their customers the unique and powerful benefit of no longer needing to worry which cloud or clouds are right for each of their workloads. We believe this will be an extremely powerful competitive differentiator for those solution providers that lean in, setting them up to thrive in this next era of “the cloud.”
So why haven’t solution providers helped build supercloud yet? Well, we’d posit they’ve been missing a key architectural element required to execute such a strategy: a neutral, interconnected and feature-rich “cloud” environment to serve as the proverbial “cloud Switzerland.” This neutral cloud would play the role of intermediary, offering stable, low-cost data transit (including egress) and giving solution builders a place to house their applications’ authoritative data stores. From these data stores, they could push data and workloads to any of the directly adjacent cloud providers as needed.
Solution providers may not realize it, but the neutral, interconnected, cloudy digital infrastructure to support supercloud already exists. Dedicated cloud on Platform Equinix®—built around Equinix Metal® for single-tenant bare metal and Equinix Fabric® for software-defined multicloud networking—makes it easy for solution providers to get the neutral “staging ground” they need to move data and workloads flexibly across clouds. The broad global reach of Platform Equinix can also help solution providers connect to all major clouds and get closer to end users at the edge, wherever that may be.
Supercloud fits in well with a cloud-adjacent infrastructure strategy, which is a key part of the Equinix approach to hybrid multicloud. By placing data close to all major clouds, but not in any one particular cloud, our customers can tap into different cloud services on demand while also keeping costs and latency low. For a closer look, read the guide to cloud-adjacent data and storage.