Data Protection Architecture for the Hybrid Multicloud World

How companies can bolster data protection, retain sovereignty over data, and elevate performance across any private or public cloud

Glenn Dekhayser

In the IT world, the word “backup” usually creates a sudden urge to leave the room. Although data protection should be among the most important responsibilities of IT teams, it is often an afterthought in enterprise architecture discussions. When monolithic data centers were the norm, the impact of this dynamic was relatively minimal: you knew where all your applications and data were located, so data protection problems were confined to a single realm and never had to scale.

Today, hybrid multicloud is the reality for over 93% of large enterprises, 89% of which expect to maintain an on-premises footprint for at least three years. Therefore, data protection problems are no longer as simple as they once were. Data gravity clusters reside in multiple clouds and regions, creating the need for backup repositories in physical proximity to these many locations to satisfy recovery-time objectives (RTOs). This can lead to redundant, unnecessarily expensive data protection architectures that end up being built by default, rather than by design.


Public cloud providers self-servingly encourage their enterprise clients to create protection domains within that same cloud. This is great for the cloud provider but a terrible idea for an enterprise, from both data protection and data gravity perspectives. This “encouragement” comes in the form of data egress charges that penalize their customers for maintaining a copy of cloud-based data outside the control of that cloud provider.

This has not gone unnoticed by the industry. For example, one highly respected VC firm advises, “Make sure your system architects are aware of the potential for [data] repatriation early on, because by the time cloud costs start to catch up to or even outpace revenue growth, it’s too late.”[1] After some time, the enterprise will cross an inflection point where it’s faced with painful decisions when new digital opportunities arise, leading to suboptimal choices.

Egress charges should never get in the way of a business decision. If you find a new application or service—whether in another cloud provider or via your own private cloud—that creates new and important value derived from your data, your architecture must support the most pain-free projection of that data to this new service. You must do all this while continuing to apply appropriate data protection and retention policies regardless of location.
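A quick back-of-the-envelope calculation shows how egress charges create this gravity. The per-GB rate and data volume below are illustrative assumptions, not any specific provider’s actual pricing:

```python
# Back-of-the-envelope egress math. The $0.09/GB rate and 500 TB volume
# are illustrative assumptions, not any specific provider's pricing.
GB_PER_TB = 1000

def egress_cost(volume_tb: float, rate_per_gb: float) -> float:
    """Cost to move volume_tb of data out of a cloud at rate_per_gb."""
    return volume_tb * GB_PER_TB * rate_per_gb

# Ingress is typically free, but egress is metered. Copying or
# repatriating 500 TB out of the cloud just once:
cost = egress_cost(500, 0.09)
print(f"One full copy out: ${cost:,.0f}")  # One full copy out: $45,000
```

At rates like these, every additional off-cloud copy of the dataset carries the same toll, which is exactly the dynamic that discourages keeping backups outside the provider.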

Gartner® states in their Top Strategic Technology Trends for 2022 that “A data fabric supports the design, deployment and use of integrated and reusable data objects, regardless of deployment platform and architectural approach. It does this by using continuous analytics over existing, discoverable and inferenced metadata assets to identify where and how data is being used.”[2] Implicit in that effort is the ability to move data seamlessly throughout an organization in the most cost-effective manner possible.

Address gaps in legacy data protection architecture

Given that most businesses run hybrid architectures these days, data protection of assets located within the walls of an enterprise is as important to consider as cloud-based data. Since the cloud regions chosen by an enterprise would normally be as close as possible to its users or existing data center resources, regionally located backup targets could serve as primary backup targets for on-premises resources as well, which satisfies off-site requirements. This architectural feature has even more value when multiple locations exist within a region, allowing for a single backup target to protect multiple locations.

In Figure 1, we can see that an existing data protection architecture, meant to protect an on-premises data center and multiple remote sites, has been extended to include the most heavily used public cloud providers. The HQ data center and remote sites back up to local devices, which then replicate to the public cloud for long-term retention, creating enormous data gravity that makes it prohibitively expensive for the enterprise to ever change providers. In addition, the enterprise’s only copy of long-term retained data is trapped within a third-party provider. One of the two most-used clouds must back up to the other, incurring full internet-based egress charges.

Also illustrated here is the inflexibility of interconnectivity. The remote sites must traverse the MPLS network operated by a network service provider (NSP) to securely access the remote public cloud regions. There is no ability to dynamically allocate bandwidth when needed, nor connect to new clouds or services over private interconnectivity. Other cloud services to the left must be connected over the public internet, as many NSPs won’t directly connect to these in every region.

Lastly, consider the redundancy of hardware: each site has its own storage, which must be installed, configured, supported and refreshed when its lifecycle ends.

Figure 1. Legacy Data Protection Architecture in a Hybrid Multicloud Setting

To resolve the gaps discussed to this point, enterprises must look past individual infrastructure providers and create a proper scaffold upon which they can execute their digital transformation strategies. This hybrid multicloud data protection and projection architecture, at a minimum, should possess the following characteristics:

  1. Multicloud-adjacent target storage located in all of the regions in which the enterprise does business.
  2. Client-side deduplication, which reduces the size of backup data being sent to these regional targets from inside the cloud, and in turn reduces egress charges by up to 95% over time.
  3. Agile, private connectivity to and from multicloud, which helps quickly and programmatically connect to any of the regionally located cloud providers over private links that reduce per-GB egress costs by as much as 80% versus using the public internet.
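The second characteristic above, client-side deduplication, can be sketched in a few lines. Fixed-size chunking and an in-memory hash index stand in for a real backup client; the point is that chunks the regional target already holds never cross the metered egress link:

```python
import hashlib

CHUNK_SIZE = 4096  # tiny for illustration; real systems use MiB-scale, often variable-size, chunks

def backup(data: bytes, remote_index: set[str]) -> list[bytes]:
    """Return only the chunks the regional target doesn't already hold.
    Chunks whose hash is in remote_index never leave the cloud."""
    to_send = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_index:
            to_send.append(chunk)
            remote_index.add(digest)
    return to_send

index: set[str] = set()
day1 = b"".join(bytes([i]) * CHUNK_SIZE for i in range(10))  # 10 distinct chunks
day2 = day1 + bytes([99]) * CHUNK_SIZE                       # same data plus one new chunk
print(len(backup(day1, index)), len(backup(day2, index)))    # 10 1
```

After the first full backup seeds the index, the second, near-identical backup ships only the single changed chunk, which is where the steep egress savings over successive backup cycles come from.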

Figure 2. Hybrid Multicloud Data Protection and Projection Architecture

Additional characteristics bring data management and security advantages to larger and global enterprises:

  1. Larger-capacity secondary target storage located in each operational region: The protected cloud-adjacent data in each metro is optimized for rapid restore and workload mobility, and represents a single protected copy. Secondary protected copies of data from all metros in a region can be replicated to a centralized aggregation and analysis point, where data can be examined for ransomware-related data drift.
  2. Sovereign object storage for archiving: It would be architecturally inconsistent to protect cloud data at the sovereign edge, only to archive data back into the cloud, re-creating the data gravity and inflexibility you just avoided. For long-term retention requirements, object-based storage (with a lower cost per TB) should be used as a tier of the larger aggregation point mentioned above. This can be a single deployment of object storage replicated to glacial cloud (keeping one instance of the data sovereign), or a geographically erasure-coded deployment spanning multiple physical sites for resiliency.
  3. API-driven private interconnectivity: This deserves further mention here, as the use case is slightly different. Because these links between metros and a centralized repository can be established and torn down at will, a true “air gap” can be created independent of the data storage platforms chosen. This is imperative for cyber-recovery.
  4. API-driven bare metal: Given today’s supply chain issues that show no near-term signs of resolution, the ability to deploy single-tenant and sovereign cloud-adjacent storage into global metros becomes a must-have. Enterprises cannot wait months to ship, receive, install and configure physical gear for a task as important as data protection. Deploying to equipment that is already in place, at software speed, makes more sense than attempting to procure and deliver technology around the world.
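The air-gap pattern from point 3 can be sketched as a replication window around a software-defined link. The `Interconnect` class below is a stub with hypothetical method names, not any real provider’s client library; the point is that the private circuit exists only while a replication job runs:

```python
from contextlib import contextmanager

class Interconnect:
    """Stub standing in for a software-defined interconnection API;
    method names are hypothetical, not a real provider's client."""
    def __init__(self) -> None:
        self.link_up = False

    def provision(self, src: str, dst: str, mbps: int) -> None:
        self.link_up = True   # a real call would create the virtual circuit

    def deprovision(self) -> None:
        self.link_up = False  # tearing down the circuit restores the air gap

@contextmanager
def replication_window(fabric: Interconnect, src: str, dst: str, mbps: int = 1000):
    """Bring the private link up only while a replication job runs."""
    fabric.provision(src, dst, mbps)
    try:
        yield fabric
    finally:
        fabric.deprovision()  # torn down even if the job fails

fabric = Interconnect()
with replication_window(fabric, "metro-A-vault", "central-repository"):
    print("link up:", fabric.link_up)   # link up: True
print("link up:", fabric.link_up)       # link up: False
```

Because the teardown happens in the `finally` block, the centralized repository returns to its air-gapped state even when a replication job fails, which is the property that matters for cyber-recovery.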

There are multiple clear benefits to enterprise adoption of this method:

  • Eliminate data protection infrastructure redundancy: One backup target protects all of a region’s cloud, colocation and on-premises data, improving operational and cost efficiency.
  • Restore performance when and where you need it: Protected data is resident on purpose-built, single-tenant storage at the adjacent regional edge, with low latency and API-driven connectivity to all regional points, creating a true hybrid multicloud reality.
  • Project to any cloud, public or private: Test and migrate your workloads to the right cloud, with the right feature, performance and cost profile.
  • Establish full sovereignty over enterprise data: Prevent data gravity from falling under third-party control and reducing future operational choices.
  • Create regional data pivot points: Gain almost unlimited optionality to quickly project your data to any service provider or business partner and seize new opportunities.
  • Stay compatible with today’s operations: These benefits are easily achievable with technologies currently available from multiple storage providers and already in use in enterprise environments today.

Choose partners that can support requirements for today and in the future

To succeed with this architecture, enterprises must partner with data center and technology companies that can provide regional optionality and operational agility, not only to support today’s requirements, but to allow the business to strike at any new opportunity it sees fit.

Equinix fits this description:

  • More than 240 Equinix International Business Exchange™ (IBX®) data centers in 65+ global metro markets, including low-latency connectivity (within 1-2 ms) to major cloud service providers (e.g., AWS, Microsoft Azure, Google Cloud, Oracle Cloud and IBM Cloud), allowing you to create your ultimate global data fabric.
  • Our IBXs have the greatest NSP density in the world, providing the broadest architectural choice and flexibility.
  • Equinix Metal®, our automated Bare Metal as a Service offering, is located in 18 metros (with 7 more planned for this year), with instances specifically designed to support backup target workloads.
  • Many storage providers (both hardware-based and software-defined storage) leverage Equinix as their platform of choice to deliver their Storage as a Service solutions.
  • Equinix Fabric™, our agile software-defined interconnection service rapidly provides connectivity with thousands of customers, partners, and top-tier networking, storage, compute and application service providers worldwide, allowing an enterprise to create the hybrid multicloud outcomes required today and in the future.

Learn more about how you can build a hybrid multicloud data protection and projection architecture with Equinix Metal and Equinix Fabric.



[1] Future from A16Z, “The Cost of Cloud, a Trillion Dollar Paradox,” by Sarah Wang and Martin Casado, May 27, 2021.

[2] Gartner “Top Strategic Technology Trends for 2022,” Published October 18, 2021. By David Groombridge, Frances Karamouzis and Monika Sinha.


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.


Glenn Dekhayser Global Principal, Global Solutions Architects