How to Speak Like a Data Center Geek: Security and Reliability




The previous entry in our “How to Speak Like a Data Center Geek” blog feature was initially planned as the series finale. It turns out this was premature. Please consider this our comeback.

This comeback wasn’t prompted by a flood of angry emails to Equinix headquarters – we assume people were too upset to write. Instead, this guide to speaking like a data center geek survives because we data center geeks continue to use puzzling acronyms and confusing phrases. And like we said in our debut entry, it’s worth the time to inject some clarity into what we do. There’s also something to be said for riding a hit series a little longer.

In this entry, we’re picking things back up by focusing on the sometimes opaque terminology used by the people who keep our International Business Exchange™ (IBX®) data centers safe, secure and running.

N+1 (or greater) redundancy: This means that every component (N) critical to keeping our IBX data centers operating has at least one backup component (+1). The backups aren’t just about keeping servers running; they’re also about keeping them cool. Equinix has a minimum N+1 redundancy standard for every piece of its ventilation and air conditioning equipment, and many of our IBX data centers offer N+2 redundancy for chillers and thermal energy storage. That’s a lot of backup for a lot of equipment, including 750-ton centrifugal chillers, condenser pumps and cooling towers. But when we guarantee cooling and power, we mean it.
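In shorthand, N is the number of components the load actually requires and the +1 (or +2) is the number of spares. A minimal sketch of that capacity check (the function name and component counts here are illustrative, not anything Equinix publishes):

```python
def survives_failures(needed: int, total: int, failures: int) -> bool:
    """True if at least `needed` components still work after `failures` fail."""
    return total - failures >= needed

# N+1: four chillers needed, five installed -> any single failure is covered
print(survives_failures(needed=4, total=5, failures=1))  # True
# N+2: two spares cover two simultaneous failures
print(survives_failures(needed=4, total=6, failures=2))  # True
# N+1 alone cannot cover two simultaneous failures
print(survives_failures(needed=4, total=5, failures=2))  # False
```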

Uptime: Yes, it’s an obvious word mash for the times when our customers’ servers are up and running. But it’s also a critical industry metric of reliability, and our uptime leads the way. Say, for example, that a data center provider boasts of a 99.9 percent uptime rate. It sounds good, but that amounts to a forehead-sweat-worthy 8 hours and 45 minutes of downtime per year. Equinix boasted a 99.999 percent uptime rate in 2012, which equaled about 5 minutes and 15 seconds of downtime for the year, or roughly 6 seconds per week. That’s barely enough time for a decent sneeze.

Lights-down mode: A term for the low-light environment in the sections of IBX data centers where our customers’ servers are colocated. The idea, frankly, is to conceal these systems from prying eyes. Equinix also uses distinctive blue lights in our IBX data centers because they generate the least heat in the visible light spectrum and help keep things cool. This choice can give our colocation areas a bit of a nightclub vibe, though there are only scattered reports of dancing.

Overhead cable tray system: We think this patented system is superior to the “raised floor” cable management systems common to conventional data centers. Raised flooring can cover unsightly cables, but we believe there’s no sense hiding anything. To us, the overhead trays make the cables easier to install and manage. And the overhead cable trays are more secure, because all the cables are visible and monitored by a closed-circuit television system. It’s better TV than one might think, but not by much.

BMR: An acronym for Biometric Reader, a device placed at IBX data center access points to ensure that only the right people can get in. Biometric readers use physiological characteristics to identify individuals and screen out intruders, and the ones at Equinix read the distinctive geometry of a person’s hand. They also determine whether the hand is live (let’s not dwell on the alternative) and whether it has a temperature of at least 96 degrees Fahrenheit. Customers who want to visit their cages, where their servers and other equipment are kept, must be cleared through a BMR four times during the extensive, five-layer security process. Bottom line: No live hand, no service.

We’d be remiss if we didn’t add that data center geeks have a thing for interconnection, since it’s essential for the enterprise to compete. Download Equinix’s IOA Playbook, which describes an interconnection-first architecture that securely connects people, locations, clouds and data.

And check out every post in the “Speak Like a Data Center Geek” series (please note: we welcome binge readers):

Part 1: Introduction

Part 2: Power I

Part 3: Connections I

Part 4: Cloud

Part 5: Buildings

Part 6: The stuff we sell

Part 7: Security and reliability (see post above)

Part 8: Connections II

Part 9: Sustainability

Part 10: Networks

Part 11: Power II

Part 12: Internet of Things

Part 13: Big Data

Part 14: Virtualization

Part 15: Virtual Reality

Part 16: Software Containers

Part 17: Artificial Intelligence

