Technology continues to evolve at an exponential pace, and at no time in recorded history has it been more a part of our daily lives than today. There is perhaps no better example (or rationale) of this than the continued proliferation of cloud services across the globe. Transformative by design, cloud services introduce a major paradigm shift – the notion that both public and private sectors can relinquish ownership of costly and complex IT infrastructure and instead consume it from providers wholly dedicated to delivering it as a service.
Industries of every kind have come to accept that the commoditization and outsourcing of network, security, applications and data – technology’s heaviest lift – are best delivered by those who manage their many moving parts. Sustained passion and technical acumen – continuously innovating across infrastructure, applications, data, governance, cybersecurity and the plethora of highly technical tasks associated with each – are what make globally ubiquitous cloud services the next logical progression in our collective technological evolution.
A quick look back: Virtualization emerges as a critical scale agent as proprietary systems yield to open architecture
While mainframe technology represents the early beginnings of shared pools of compute and storage resources, the advent and proliferation of open architecture desktop computers (and servers) relegated costly, proprietary mainframes to a limited subset of highly specialized services. Broad availability of desktop architectures and a common operating system brought immediate returns as an affordable alternative for the masses, but also came with a few unintended consequences. As large organizations began to accumulate many racks of these power- and space-intensive commodity footprints, an undesirable scenario known widely as “server sprawl” brought diminishing returns from a cost and capacity perspective. Entirely new data center facilities were built to host them, introducing additional cost and complexity as infrastructure needs continued to grow.
But as we’ve seen many times before with technology, advances such as hypervisors enabled us to borrow from the old mainframe playbook. Businesses could run many virtual servers across the pooled resources of just a few robust physical servers, much as logical partitions, or “LPARs,” were provisioned for multi-tenant mainframe environments in the past. Software giant VMware pioneered commercial x86 virtualization, and the open source Xen hypervisor followed shortly thereafter, further commoditizing the approach. This made it possible for organizations to throttle server sprawl by virtualizing dozens, hundreds or, in some cases, thousands of servers that could operate on a relative handful of clustered physical ones. Hypervisors provide a hardware abstraction layer between physical and virtual servers, presenting pools of shared physical memory, CPU and other resources to the virtual server instances hosted above them. Virtualization also drove the development of network function virtualization (NFV), which has become a key component in modern network topologies. Virtualization continues to be a disruptive force – enabling a highly effective departure from high-touch, individualized server and application stacks to dynamic provisioning and orchestration of shared resources.
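The consolidation arithmetic behind this shift can be sketched in a few lines. The model below is purely illustrative (not any vendor’s scheduler): it assumes a common practice of oversubscribing CPU while keeping memory hard-reserved, and shows how dozens of virtual servers land on a small cluster of physical hosts.

```python
# Illustrative sketch of hypervisor resource pooling (not a real vendor API).
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    cpu_cores: int
    ram_gb: int
    vms: list = field(default_factory=list)   # (vcpus, ram_gb) per VM

    def can_fit(self, vcpus, ram_gb, cpu_overcommit=4.0):
        # CPU is commonly oversubscribed (here 4 vCPUs per physical core);
        # memory typically is not.
        used_vcpus = sum(v[0] for v in self.vms)
        used_ram = sum(v[1] for v in self.vms)
        return (used_vcpus + vcpus <= self.cpu_cores * cpu_overcommit
                and used_ram + ram_gb <= self.ram_gb)

    def place(self, vcpus, ram_gb):
        if self.can_fit(vcpus, ram_gb):
            self.vms.append((vcpus, ram_gb))
            return True
        return False

# Three modest hosts absorb sixty small VMs that would otherwise each have
# occupied a dedicated physical server ("server sprawl").
cluster = [PhysicalHost(cpu_cores=32, ram_gb=512) for _ in range(3)]
placed = 0
for _ in range(60):                       # sixty 2-vCPU / 8 GB VMs
    if any(h.place(2, 8) for h in cluster):
        placed += 1
print(f"{placed} VMs consolidated onto {len(cluster)} physical hosts")
```

The overcommit ratio and sizing numbers are assumptions chosen for the example; real capacity planning also accounts for failover headroom and workload peaks.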
The emergence of cloud at the edge
At the same time, Equinix (Equitable – Neutral – Internet – Exchange) was growing rapidly as the industry’s premier carrier-neutral colocation and interconnection company, effectively curating the equitable and extensible growth of what we now know as the internet. Network carriers and content providers continue to congregate there, globally distributing their services through proximal adjacency with one another and privately exchanging large volumes of traffic via peering exchanges operated by Equinix. The public internet was simply not a viable means of data exchange between these providers, and private peering in the secure confines of Equinix provided the ideal setting for exchanging and distributing ever-increasing volumes of traffic between carriers and service providers.
Capitalizing on the existence of thousands of on-net network carriers and globally distributed peering exchanges, Amazon Web Services (AWS) was the first to leverage Equinix as an aggregation point for consuming its AWS Direct Connect service, placing high-capacity edge nodes in Equinix that connected back to its physically proximal facilities. Many other hyperscale cloud and SaaS providers soon followed and replicated this highly successful model – Microsoft Azure ExpressRoute, Google Cloud Platform, IBM Cloud, ServiceNow, Salesforce and many more that continue to change the cloud provider landscape.
Evolution of interconnection
Equinix, as the underlying platform where all of these services meet, introduced a number of innovative firsts to facilitate the efficient orchestration of network, cloud and SaaS services. Equinix Cloud Exchange Fabric™ (ECX Fabric™), a next-generation software-defined interconnection solution, connects 50+ Equinix markets (and their respective cloud provider edge nodes) across the globe. This self-provisioned interconnection service enables globally extensible private connectivity between businesses and their desired cloud and network service providers. Consumed on demand via a secure customer portal, connectivity can be provisioned locally, inter-metro and/or globally from one Equinix metro to another within minutes – turned up or turned down as desired, much like the interconnected cloud services themselves.
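To make the self-provisioned model concrete, the sketch below assembles a hypothetical virtual-connection order. To be clear, the function name, field names, metro code and bandwidth tiers are all illustrative assumptions for this article – they are not the actual ECX Fabric API, whose real endpoints and schemas are documented in Equinix’s developer portal.

```python
# Hypothetical sketch only: these field names and tiers are illustrative
# assumptions, NOT the actual ECX Fabric API schema.
def build_connection_request(name, source_metro, dest_provider, bandwidth_mbps):
    """Assemble an order for a virtual connection from an Equinix metro
    (the buyer's side) to a cloud provider edge node (the seller's side)."""
    if bandwidth_mbps not in (50, 200, 500, 1000, 10000):
        raise ValueError("unsupported bandwidth tier")
    return {
        "name": name,
        "aSide": {"metro": source_metro},       # buyer's port location
        "zSide": {"provider": dest_provider},   # e.g. a cloud on-ramp
        "bandwidth": bandwidth_mbps,
    }

# A 1 Gbps connection from a Washington, D.C. metro port to a cloud on-ramp,
# ordered in software rather than via a physical cross-connect request.
order = build_connection_request("hq-to-cloud", "DC", "aws-direct-connect", 1000)
print(order["bandwidth"])  # → 1000
```

The point of the sketch is the operational shift: what once required a provisioning ticket and a physical cross-connect becomes a declarative order that can be turned up or down in minutes.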
Proximal adjacency at the edge – a pathway to effective hybrid multicloud
As a continued natural progression, early adopters of cloud services began by leveraging their existing internet service providers for customer premises-based cloud connectivity. Little known to end customers at the time, the vast majority of cloud edge or aggregation points were located at Equinix. As initial connectivity evolved into heavier consumption, early premises-based cloud customers quickly discovered the inherent limitations of long-distance internet connections constrained by cost and capacity.
At the same time, Equinix began concerted advocacy for a new proximity-based, regionally distributed architectural approach to solve these and other challenges that continued to thwart broader cloud consumption. Because distance is a key driver of latency, establishing proximal adjacency to shorten the distance between cloud consumers and cloud providers was the clear resolution. As a result, enterprises began to establish an edge presence at geo-strategic Equinix locations to create optimal alignment between cloud services and their respective user communities. This helped mitigate latency issues while enabling the movement and consumption of far larger workloads in and out of cloud environments.
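The distance-latency relationship above is simple physics. Light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, about 200 km per millisecond, so best-case round-trip time grows directly with distance, before any routing or queuing overhead is added:

```python
# Back-of-the-envelope propagation delay: light in fiber covers
# roughly 200 km per millisecond (~2/3 of c), one way.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Best-case round-trip propagation delay in milliseconds
    (ignores routing hops, queuing and processing overhead)."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A workload 2,000 km from its cloud on-ramp pays at least 20 ms per
# round trip; an adjacent deployment (~10 km) cuts that to ~0.1 ms.
print(round_trip_ms(2000))  # → 20.0
print(round_trip_ms(10))    # → 0.1
```

Since chatty applications may make hundreds of round trips per transaction, shaving even tens of milliseconds per round trip through proximal adjacency compounds into a visible performance difference.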
Many of these businesses, in varied stages of cloud transformation initiatives, found that legacy applications and associated infrastructure required proximity to newly adopted cloud services for more effective integration and, in many cases, transition from one to the other. For example, businesses with traditional premises-based virtualized storage and high-performance compute environments continue to shift this infrastructure to Equinix, where a proximal high-speed, low-latency intersection between network, cloud services and legacy systems can be achieved.
A cloud adjacent strategy like this enables legacy systems that cannot, or should not, move to the cloud to be situated next to one or more clouds for a more secure, higher-performing and tightly integrated marriage of the two. Doing so in geo-strategic locations also assures better performance of newly adopted cloud services by effectively aligning and distributing those services to their corresponding user communities. In many cases, this approach provides additional benefits, such as shrinking the network attack surface and eliminating the inordinate numbers of transit gateways and security enclaves between systems that also contribute to degraded performance.
Cloud adjacent deployment on Platform Equinix
Whether organizations are transitioning legacy systems or have already deployed a hybrid or multicloud infrastructure, a cloud adjacent strategy can help reduce cost and complexity while enhancing security, reliability and performance as new mission requirements and/or use cases evolve.
Read the white paper, “Hybrid IT: Why Cloud Adjacency Matters,” to learn more.