Welcome to the second of a two-part post prompted by our friends at CenturyLink, who posed six important considerations when choosing a colocation provider. I addressed three of those considerations from an Equinix perspective in part 1, which you can read here. Here’s my take on the remaining three:
How does my colocation provider ensure security and compliance?
These are two separate but related topics. Being compliant with regulations is important, but it doesn’t necessarily mean you’re secure, and being secure doesn’t necessarily make you compliant. Furthermore, different certifications matter to different industries.
Equinix has you covered for the baseline certifications, and we obtain a range of industry certifications as a matter of course. If you have specific requirements beyond those, we can work with you to meet them. For example, we could build floor-to-ceiling cages, install opaque screens, fit custom access control equipment (swipe cards or biometric scanners), or ensure complete coverage by closed-circuit television.
One of the main advantages of deploying on Platform Equinix is that wherever you are, you can expect a consistent, quality-controlled and certified service. For global businesses (and let’s face it, outside of your corner store, which businesses aren’t global these days?), this greatly simplifies operations and lets you focus on your core competencies.
What are the power and cooling capabilities of the center that will house my equipment?
This is an important question, because most enterprise data centers are built to handle power and cooling loads of around 4kW per rack. That made sense in the era of underutilized, dedicated physical servers, but highly efficient virtualized workloads have changed the math. We’re now seeing single cabinets run well into the teens of kilowatts (most Equinix facilities can handle up to around 18kW before we have to start building custom solutions). This is made possible by the underlying hardware, as described in a post on the ipSpace blog:
“The last generations of high-end servers are amazing: they can have terabyte (or more) of RAM, dozens of CPU cores, and four (or more) 10GE uplinks. It’s easy to pack 100+ well-behaved VMs on a single server, reducing the whole data center into a private cloud that fits into a single rack.”
Indeed, most cloud and big data applications are fundamentally incompatible with existing enterprise data centers, unless you leave your racks mostly empty. Sure, there are workarounds for exceeding the design density of a facility, but moving to a high-density facility brings other benefits too: it generally comes with access to an ecosystem of cloud and network service providers.
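To make the density gap concrete, here is a back-of-the-envelope sketch. The per-server draw figure is an illustrative assumption (not an Equinix specification); the 4kW and 18kW rack budgets are the figures discussed above.

```python
# Rough sketch: how many virtualization hosts fit in a rack at a given
# power budget? All figures are illustrative assumptions.

def servers_per_rack(rack_budget_kw: float, server_draw_kw: float) -> int:
    """Whole number of servers a rack's power budget can support."""
    return int(rack_budget_kw // server_draw_kw)

# Assume a modern, heavily loaded virtualization host draws ~0.6 kW.
SERVER_DRAW_KW = 0.6

legacy = servers_per_rack(4.0, SERVER_DRAW_KW)        # typical enterprise DC rack
high_density = servers_per_rack(18.0, SERVER_DRAW_KW)  # high-density colo rack

print(f"4 kW rack:  {legacy} servers")   # -> 6 servers
print(f"18 kW rack: {high_density} servers")  # -> 30 servers
```

Under these assumed numbers, a legacy 4kW rack tops out at a handful of dense hosts, while an 18kW rack takes five times as many — which is why packing 100+ VMs per server only pays off in a facility designed for that load.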
What are the connectivity options between my site and the colocation data center?
This one is directly related to the number of carriers present in the data center, and with around 1,000 carriers across Platform Equinix, you can expect to find dozens, if not hundreds, in your local facility. This is thanks to our carrier neutrality. In fact, the carriers themselves come to Equinix to connect to each other (which was actually how the business was started in 1998).
We also connect most data centers together within a metro (e.g. London) with a range of Metro Connect services. Connecting metros together (e.g. London to Frankfurt) is just a matter of searching the Web-based Equinix Marketplace for a carrier that’s present in both locations. That makes it easy – and surprisingly cheap – to connect between facilities.
As for connecting from your buildings back to our data centers, many carriers offer “last mile” connectivity, and some have even provided us with their “lit building” lists and connected to the Equinix Ethernet Exchange. Millions of buildings are connected directly to the exchange, and many more to the data centers themselves. With most national carriers present in our facilities, chances are the building you’re in now is connected.