By Lee Tamassia (part 3 of a 4-part series)
As agencies continue to pursue data center consolidation and migration of key workloads and applications to the cloud, the choice of “where” to consolidate their infrastructure becomes critical. Why? While consolidation initiatives have largely been pursued in isolation, the Federal Data Center Consolidation Initiative (FDCCI) actually presents agencies with a unique opportunity to leverage cloud technologies in a much more cost-effective, secure, and efficient manner.
As the commercial data center market has grown, providers of data center facilities have largely split into two camps: data centers owned and operated by telecommunications (telco) providers and those operated by neutral third-party data center providers.
In the case of a telco-owned data center facility, network connectivity is generally provided exclusively by the telco that owns the facility. In addition to offering colocation services, many telcos also offer additional managed service product offerings out of these facilities.
In contrast, neutral data center facilities have largely evolved from “network neutrality” into a model of complete vendor neutrality where the data center operator creates an open marketplace within its facility for others to utilize.
Since vendor-neutral data center providers tend to focus exclusively on colocation services (i.e. space, power, interconnection), many third-party vendors, including managed service providers and cloud service providers, choose to deploy their infrastructure into these facilities, secure in the knowledge that their data center vendor is not and will not become a direct competitor. The neutral data center provider continues to offer colocation services within its facility while offering various options for direct interconnection between entities within the facility.
As a result, neutral data centers are often served by multiple network providers, allowing agencies to leverage the available diversity of network connectivity to build additional network redundancy into their data center deployment. Indeed, a handful of neutral data center facilities worldwide have become the primary aggregation and exchange points for the largest IP networks in the world and can offer interconnection to literally hundreds of network carriers.
As a result of the significant network density available in these key locations, a variety of businesses, including cloud providers, content companies, financial services organizations, global enterprises, and public sector agencies, have chosen to deploy their infrastructure within these facilities in order to leverage the direct interconnection options available to cross connect with multiple network providers.
This trend has resulted in the evolution of these facilities into “cloud hubs”: facilities with massive density of both network and cloud providers that represent ideal locations for infrastructure consolidation, network optimization, and direct interconnection to commercial cloud product offerings.
Cloud hubs provide agencies with a facility capable of accommodating a combination of deployment options, including:

- Colocated assets that remain under the direct control of agency personnel
- Private cloud infrastructure deployed and managed by the agency, or on its behalf by a technology partner or systems integrator (SI)
- Direct access via secure interconnection to public or community cloud infrastructure within the facility
Cloud Hubs Improve Performance and End-User Experience
As a result of the significant density of network providers located within neutral cloud hubs, deploying within these facilities virtually assures a better end-user experience. Traditional application delivery can be unpredictable, depending on the stability and utilization of the underlying networks the traffic traverses. Having significant network choice within the data center allows agencies to connect to and utilize the networks necessary to optimize application performance.
While switch and router hops contribute to latency, the vast majority of latency across a wide-area network (by some estimates 98% or more) is purely a function of fiber miles, making distance the single most important consideration when trying to minimize it. Application performance is also subject to congestion and failure points along the path, so a shorter, more direct network path naturally results in better end-user performance for key applications.
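To put the distance-latency relationship in rough numbers, the sketch below is a back-of-the-envelope estimate (not a measurement), assuming signal propagation through fiber at roughly two-thirds the speed of light in a vacuum, i.e. about 200 km per millisecond one way:

```python
# Back-of-the-envelope: minimum RTT imposed by fiber distance alone.
# Assumes propagation in fiber at ~2/3 the vacuum speed of light,
# i.e. roughly 200 km (about 124 miles) per millisecond, one way.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(fiber_km: float) -> float:
    """Lower bound on round-trip time (ms) for a given fiber path length."""
    one_way_ms = fiber_km / FIBER_KM_PER_MS
    return 2 * one_way_ms

# Example: a cross-country path of roughly 4,000 km of fiber carries
# about 40 ms of round-trip latency from propagation alone -- before
# any queuing, switching, or routing delay is added on top.
for km in (100, 1000, 4000):
    print(f"{km:>5} km of fiber -> at least {min_rtt_ms(km):.1f} ms RTT")
```

This floor cannot be engineered away with faster hardware, which is why shortening the physical path (for example, by interconnecting directly inside the same facility) has an outsized effect on latency.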
Additionally, by deploying within a neutral facility with significant network density, agencies are able to directly cross connect to network providers within the facility, which:
- Eliminates the expense and provisioning time related to local loops
- Improves the reliability of connectivity to the network provider
- Improves the overall end-user experience by reducing packet loss and related TCP re-transmissions
Access to a distributed footprint of neutral data centers is equally important. Interconnecting to cloud infrastructure in a distributed manner allows agencies to reduce the overall distance and associated latency between applications and end-user communities. Further, by leveraging multiple geographically distributed data centers, agencies can develop better disaster recovery (DR) and continuity-of-operations (COOP) strategies.
To quantify these benefits, Equinix, in collaboration with Compuware/Gomez, conducted a series of tests measuring the improvements in application performance enabled through multi-carrier connectivity.
Through these tests, Equinix simulated the effects of different application routing scenarios when connected to a single carrier versus multiple carriers. Each test used the same underlying hardware, software, and physical locations, and was conducted using beacon servers located in Ashburn, Virginia, and Silicon Valley, California.
The initial test simulated connectivity through a single ISP in a single location and was compared against alternate scenarios using up to five carriers across multiple sites. Three key metrics were measured in each test: round-trip time, traceroute results, and availability.
In each case, the round-trip times, traceroute results, and availability metrics all improved significantly with a multi-carrier, multi-nodal deployment.
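A greatly simplified stand-in for this kind of beacon measurement can be sketched in Python by timing TCP handshakes against candidate endpoints. The hostnames below are placeholders for illustration, not the actual test infrastructure from the Equinix/Compuware study:

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Approximate network round-trip time by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# Measure each candidate carrier's beacon endpoint and compare.
# (Hostnames are placeholders -- substitute real measurement targets.)
BEACONS = {"carrier-a": ("beacon-a.example.net", 443),
           "carrier-b": ("beacon-b.example.net", 443)}

if __name__ == "__main__":
    for name, (host, port) in BEACONS.items():
        try:
            print(f"{name}: {tcp_connect_rtt_ms(host, port):.1f} ms")
        except OSError as err:
            print(f"{name}: unreachable ({err})")
```

Production measurement platforms add repeated sampling, traceroute hop analysis, and availability tracking on top of this basic idea, but even a handshake timer makes carrier-to-carrier latency differences visible.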
Neutral Data Centers and Private Clouds
By optimizing network performance, Federal agencies are well positioned to migrate to a strategy of cloud enablement.
One consideration is the deployment of an agency-wide enterprise private cloud. A private cloud combines resources deployed into a virtualized environment with a delivery and management wrapper that makes provisioning enterprise applications faster and more easily replicable across distributed locations, increasing IT agility within the enterprise (the "utility compute" model).
When considering a private cloud deployment, agencies can follow a “build-your-own” approach, or they can choose to work with commercial providers focused on building and managing private cloud deployments on behalf of Federal agencies.
Many systems integrators that routinely conduct business with the Federal Government will build and manage private clouds on behalf of agencies or will partner with commercial providers, such as Rackspace, to deploy private clouds. Neutral data center facilities, in their role as “cloud hubs,” can provide tangible benefits to agencies pursuing a private cloud strategy.
By leveraging the density of cloud vendors operating within the walls of the neutral data center, agencies are able to pick "best-of-breed" providers and solicit competing bids from multiple vendors as leverage to reduce costs. Additionally, as agencies migrate more workloads to their private cloud, they are able to leverage the density of network providers to quickly scale their bandwidth to meet demand.
The Public Cloud
The final piece in an agency cloud-enablement strategy is to identify select assets and applications that can be supported via public cloud and SaaS providers and migrated away from "on-premises" deployments. The public cloud model provides agencies with a foundation and roadmap for migrating select workloads to a public IaaS provider. Additionally, agencies can begin to reduce the number of "in-house" applications they directly support by identifying opportunities to leverage public SaaS providers for similar functionality.
This strategy has been aggressively adopted, with success, by commercial “for-profit” businesses. As stated in the Federal Cloud Computing Strategy, “The private sector has taken advantage of these technologies to improve resource utilization, increase service responsiveness, and accrue meaningful benefits in efficiency, agility, and innovation. Similarly, for the Federal Government, cloud computing holds tremendous potential to deliver public value by increasing operational efficiency and responding faster to constituent needs.”
The Hybrid Cloud
Agencies are also likely to continue deploying and managing their own IT assets or private clouds while leveraging the hybrid cloud model, which allows them to absorb peak demand periods without over-provisioning their infrastructure. Over-provisioning to meet peak demand requires considerably more CAPEX investment in infrastructure, which then remains underutilized except during demand spikes.
Integration of agency assets into a hybrid cloud deployment allows agencies to “own the base and rent the spike” and is ideal for those agencies that experience seasonal demand fluctuations. One example of such an agency is the Internal Revenue Service (IRS), which routinely experiences a spike in e-filing leading up to the tax filing deadline on April 15.
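The "own the base, rent the spike" economics can be illustrated with a simple comparison. All figures below are invented for illustration only; they are not actual agency or provider costs:

```python
# Hypothetical illustration of "own the base, rent the spike."
# Every number here is invented for illustration, not a real cost.
BASE_SERVERS = 100                    # steady-state demand
PEAK_SERVERS = 300                    # demand during the seasonal spike
SPIKE_MONTHS = 2                      # months per year the spike lasts

OWNED_COST_PER_SERVER_MONTH = 100.0   # amortized CAPEX plus operations
CLOUD_COST_PER_SERVER_MONTH = 250.0   # rented public-cloud capacity

# Option 1: over-provision owned infrastructure for the peak, year-round.
overprovision = PEAK_SERVERS * OWNED_COST_PER_SERVER_MONTH * 12

# Option 2: own the base year-round, rent the spike from a public cloud.
hybrid = (BASE_SERVERS * OWNED_COST_PER_SERVER_MONTH * 12
          + (PEAK_SERVERS - BASE_SERVERS) * CLOUD_COST_PER_SERVER_MONTH
            * SPIKE_MONTHS)

print(f"Over-provision for peak: ${overprovision:,.0f}/year")
print(f"Hybrid (base + burst):   ${hybrid:,.0f}/year")
```

Even though rented capacity costs more per server-month in this sketch, paying the premium for only two months of the year undercuts carrying peak capacity year-round; the break-even point shifts with the spike's duration and size.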
In their role as "cloud hubs," neutral data center facilities can provide direct interconnection to a variety of public cloud IaaS, PaaS, and SaaS providers, such as Amazon Web Services (AWS). Interconnection to these public cloud providers via direct "cross connects" allows for higher bandwidth and greater security, and provides an attractive and often cheaper alternative to the standard access methods, which usually require provisioning private circuits or using VPN access over the public Internet.
Coming up in part 4: Conclusion and Key Takeaways