In our How to Speak Like a Data Center Geek series, we aim to answer all the questions you’ve always wanted to ask about data centers—as well as the ones you never even thought to ask. In this edition, we’ll be tackling the critical issue of data center cooling.
As servers run in data centers, they inevitably generate heat, and they must be cooled in order to keep functioning properly. It's a straightforward problem of thermodynamics, but one that data center operators are obsessed with solving in the most efficient way possible. We do this for two simple reasons: to avoid wasting energy and to keep costs low.
Now, let’s learn some of the terms you need to know in order to understand data center cooling better. (Keep in mind that this is just a high-level look at an extremely complex topic. If you’re looking to get into the weeds, we’ve provided links throughout the post where you can learn more.)
Air cooling: The traditional method used to regulate server temperature inside data centers. It functions based on the same principles as the air-conditioning system in your home, using fans to pass chilled air over the components that need to be cooled.
Liquid cooling: A method that uses cooling fluid to move heat away from servers. While most data centers still use air cooling, liquid cooling deployment is on the rise.
Because liquid transfers heat much more efficiently than air, liquid cooling can help data center operators pack more compute capacity into the same physical footprint. It also helps improve server efficiency by reducing the amount of energy used to run server fans.
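To put some rough numbers on that claim, here’s a back-of-the-envelope sketch (our illustration, using typical room-temperature property values) comparing how much heat one liter per second of air versus water can carry away per degree of temperature rise:

```python
# Back-of-the-envelope comparison: heat carried away per degree of
# coolant temperature rise, for 1 liter per second of flow.
# Property values are typical room-temperature figures.

def heat_per_degree(density_kg_m3: float, cp_j_kg_k: float,
                    flow_l_s: float = 1.0) -> float:
    """Watts removed per kelvin of temperature rise (Q = rho * V * cp * dT)."""
    flow_m3_s = flow_l_s / 1000.0
    return density_kg_m3 * flow_m3_s * cp_j_kg_k  # W per K

air = heat_per_degree(density_kg_m3=1.2, cp_j_kg_k=1005)    # air at ~20 C
water = heat_per_degree(density_kg_m3=998, cp_j_kg_k=4186)  # water at ~20 C

print(f"Air:   {air:.1f} W per K per L/s")    # ~1.2 W
print(f"Water: {water:.1f} W per K per L/s")  # ~4178 W
print(f"Water carries ~{water / air:.0f}x more heat per unit flow")
```

By this rough measure, a given flow of water moves heat thousands of times more effectively than the same flow of air, which is why liquid cooling supports much denser racks.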
Building cooling system: In addition to the server cooling methods summarized above, data centers are also equipped with systems that remove heat from the facility altogether. The two types of building cooling systems are:
- Air cooling systems, which reject heat by exhausting hot air from the facility.
- Evaporative cooling systems, which reject heat by evaporating water and releasing the vapor from the facility.
Compared to air cooling systems, evaporative cooling systems can reject comparable amounts of heat while consuming significantly less energy. However, evaporative cooling also drives higher water consumption, so it’s not ideal for data centers located in water-stressed areas. Data center operators must make cooling choices that balance the tradeoffs of energy efficiency and water efficiency. Also, data centers sometimes use non-potable water in cooling systems to help protect the limited supply of drinking water in a community.
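For a rough sense of that tradeoff, here’s a simplified sketch (our illustration, not a design figure: it assumes the entire heat load is rejected by evaporation and ignores recirculation and sensible cooling, so real consumption varies):

```python
# Rough illustration of the energy/water tradeoff in evaporative cooling.
# Evaporating water absorbs roughly 2.4 MJ per kg at typical ambient
# temperatures, so rejecting heat this way trades energy for water.

LATENT_HEAT_J_PER_KG = 2.4e6  # approximate latent heat of vaporization

def water_use_liters_per_hour(heat_load_kw: float) -> float:
    """Approximate water evaporated to reject a given heat load (1 kg ~ 1 L)."""
    joules_per_hour = heat_load_kw * 1000 * 3600
    return joules_per_hour / LATENT_HEAT_J_PER_KG

# A hypothetical 1 MW IT load rejected entirely by evaporation:
print(f"~{water_use_liters_per_hour(1000):.0f} L of water per hour")  # ~1500 L/h
```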
Learn more about how cooling choices impact data center water consumption.
Heat exchanger: The equipment that transfers heat from the server room to the building-level cooling system. Different types of heat exchangers include computer room air conditioning (CRAC) systems, computer room air handler (CRAH) systems and coolant distribution units (CDUs).
Hot/cold aisles: A method of partitioning data centers with physical barriers to optimize airflow. The cold aisles feed a consistent supply of chilled air to the front of server racks. As servers operate, they generate heat, which is then vented out the back of the server racks into the hot aisle. Maintaining separation between cold supply air and hot exhaust air ensures that the data center operator doesn’t expend energy to chill air that doesn’t need to be chilled, thus improving efficiency.
Learn more about data center efficiency, including how it can impact operational sustainability.
Heat export: When residual heat from data centers is captured and transferred to a third-party heat network. It’s an example of the circular economy in action, as it repurposes and draws value from a resource that would otherwise go to waste. When the heat in question comes from data centers that have 100% renewable energy coverage—as more than 235 Equinix IBX® colocation data centers do—it provides a low-carbon heat source for local homes and businesses.
Learn more about data center heat export.
ASHRAE: The American Society of Heating, Refrigerating and Air-Conditioning Engineers. ASHRAE recommends a temperature range between 18° and 27°C (64.4° and 80.6°F) for enterprise-class data center equipment.[1] The typical average operating temperature across the data center industry is about 72°F (22°C), roughly the midpoint of that range.
By moving closer to the higher end of this range, data centers could consume less energy while still keeping hardware at safe operating temperatures. Across a global data center platform, “adjusting the thermostat” by just a few degrees could drive significant efficiency improvements.
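As a hedged illustration, here’s a quick sketch of what nudging a setpoint upward within the recommended envelope might save. The 4%-per-degree figure is a commonly cited industry rule of thumb, not an ASHRAE number, and actual savings vary by facility:

```python
# Sketch: check a proposed setpoint against the ASHRAE-recommended range
# and estimate cooling-energy savings from raising it, using a commonly
# cited rule of thumb of ~4% saved per degree Fahrenheit (illustrative only).

ASHRAE_MIN_F, ASHRAE_MAX_F = 64.4, 80.6  # 18-27 C recommended range

def estimated_cooling_savings(current_f: float, proposed_f: float,
                              pct_per_degree: float = 4.0) -> float:
    """Rough percent of cooling energy saved by raising the setpoint."""
    if not (ASHRAE_MIN_F <= proposed_f <= ASHRAE_MAX_F):
        raise ValueError("Proposed setpoint is outside the recommended range")
    return max(0.0, (proposed_f - current_f) * pct_per_degree)

# Nudging a typical 72 F setpoint up to 78 F, still inside the envelope:
print(f"~{estimated_cooling_savings(72, 78):.0f}% cooling energy saved")
```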
Learn more about how data center operators are evolving for wider operating temperature ranges.
Free cooling: Any method that cools a data center without running energy-intensive mechanical chillers. In locations where the outside temperature is consistently cooler than the required operating temperature inside the data center, free cooling could mean simply taking advantage of this naturally cooler air instead of chilling air mechanically. Data center operators need to be flexible to take advantage of free cooling when and where it’s available.
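Here’s a minimal sketch of the kind of decision an air-side economizer makes. The setpoint and margin below are hypothetical, and real controls also weigh humidity and air quality:

```python
# Minimal sketch of an air-side economizer decision: use outside air
# directly when it is comfortably below the target supply temperature,
# otherwise fall back to mechanical cooling. Thresholds are hypothetical.

SUPPLY_SETPOINT_C = 24.0  # target supply-air temperature
APPROACH_MARGIN_C = 3.0   # margin so outside air can actually do the job

def cooling_mode(outside_temp_c: float) -> str:
    """Pick 'free' cooling when outside air is cool enough, else 'mechanical'."""
    if outside_temp_c <= SUPPLY_SETPOINT_C - APPROACH_MARGIN_C:
        return "free"      # bring in filtered outside air, bypass the chiller
    return "mechanical"    # run the chiller

for temp in (8.0, 20.0, 28.0):
    print(f"{temp:4.1f} C outside -> {cooling_mode(temp)} cooling")
```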
Free cooling could also mean using naturally cool water sources. One example is deep lake water cooling (DLWC), in which the data center draws cold water from the depths of a nearby lake and uses it for cooling before returning it to its original source. This improves energy efficiency without increasing water consumption.
See DLWC in action at an Equinix IBX data center in Toronto.
That’s all for this edition of How to Speak Like a Data Center Geek. We hope it helped you understand why efficient data center cooling is so important.
For a deep dive on how Equinix is deploying next-generation liquid cooling technologies, check out our brief: Build liquid-cooled infrastructure at global scale.
Also, make sure to check out our other Data Center Geek posts for a look at some of the other systems and concepts that data center operators think about every day.
[1] Equipment Thermal Guidelines for Data Processing Environments, ASHRAE TC 9.9 Reference Card

