As data-intensive capabilities like AI and machine learning (ML) become more prevalent, data centers are innovating to address evolving requirements. AI requires more powerful chips and accelerators, all of which demand substantial power to deliver optimal performance. In turn, chip density is driving increased data center rack density, with some AI deployments climbing past 100 kW per rack in just the last few years. All of this generates more heat, so we need to look to advanced cooling technologies to enable these compute-intensive workloads.
Liquid cooling has reemerged as an important solution for supporting high-density data center deployments. Because liquid transfers heat much more efficiently than air, there's growing interest in adopting it at the server level. However, the sudden spotlight on liquid cooling has led to some misunderstandings about how the technology coexists with the rest of the data center.
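To put that efficiency gap in rough numbers, here's a minimal back-of-the-envelope sketch in Python. The 100 kW heat load and 10 K coolant temperature rise are illustrative assumptions, and the fluid properties are standard textbook approximations; this is not a design calculation.

```python
# Back-of-the-envelope comparison: volumetric flow of air vs. water
# needed to carry away the same heat load, using Q = rho * V * cp * dT.
# The 100 kW load and 10 K temperature rise are illustrative only.

HEAT_LOAD_W = 100_000   # e.g., one high-density AI rack (~100 kW)
DELTA_T_K = 10          # assumed coolant temperature rise

# Approximate fluid properties near room temperature
AIR_DENSITY = 1.2       # kg/m^3
AIR_CP = 1005           # J/(kg*K)
WATER_DENSITY = 997     # kg/m^3
WATER_CP = 4186         # J/(kg*K)

def volumetric_flow(load_w, rho, cp, dt):
    """Volumetric flow (m^3/s) required to absorb load_w at a temperature rise dt."""
    return load_w / (rho * cp * dt)

air_flow = volumetric_flow(HEAT_LOAD_W, AIR_DENSITY, AIR_CP, DELTA_T_K)
water_flow = volumetric_flow(HEAT_LOAD_W, WATER_DENSITY, WATER_CP, DELTA_T_K)

print(f"Air:   {air_flow:.2f} m^3/s")          # ~8.3 m^3/s
print(f"Water: {water_flow * 1000:.2f} L/s")   # ~2.4 L/s
print(f"Air needs ~{air_flow / water_flow:,.0f}x the volumetric flow")
```

By this rough estimate, air needs on the order of 3,500 times the volumetric flow of water to carry the same heat, which is why liquid cooling becomes attractive as rack densities climb.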
In the age of AI, IT equipment is evolving at an unprecedented rate, and there’s no doubt that enhanced cooling solutions will be needed as compute-intensive workloads increase. As we look to enable these solutions, let’s debunk three common myths about liquid cooling and talk about what data centers are doing to prepare for the future.
Myth 1: Liquid cooling is the same as evaporative cooling.
Data centers need two key types of cooling: building-level cooling and server-level cooling. To optimize server performance, the heat generated by servers is continuously transferred into a closed-loop cooling distribution system at the building level. That heat is then rejected from the data center building, either to the outside air or exported for reuse within local communities.
At the server level, there are two established means of transferring heat away from servers: air cooling and liquid cooling. As high-density deployments increase, liquid-cooled servers are becoming more prevalent. Typically, this involves an independent closed circuit of liquid that transfers heat from the servers to the distribution system.
Once heat enters the distribution system, a variety of technical solutions exist to reject it to the outside air, chosen to suit the constraints of the site and to optimize operational efficiency. Some involve evaporative cooling, where a separate water source is evaporated by the heat rejection plant to minimize electrical energy consumption. The chosen heat rejection method is independent of the server cooling technology.
Some people assume that liquid-cooled servers would greatly increase a building's overall water use. But because a server-level liquid cooling system may or may not consume water, this isn't necessarily the case. Data center water consumption depends mainly on the building-level cooling system that's selected: if evaporative cooling is used to cool the building, water use will be far higher than with a non-evaporative heat rejection method, regardless of how the servers themselves are cooled. What liquid cooling allows is higher density in the same amount of space. It enables us to run more workloads with the same amount of building-level cooling that would be provided to air-cooled servers, making each workload more water and energy efficient. As liquid cooling becomes more prevalent in data centers, it will likely allow us to operate at higher facility water temperatures, enabling us to use water more efficiently at buildings designed with evaporative heat rejection.
Myth 2: Liquid cooling means we’ll no longer need air-cooled servers at data centers.
At the server level, liquid cooling and air cooling are sometimes positioned against each other, as if one must choose between the two. In reality, liquid- and air-cooled servers will coexist in data centers for the foreseeable future. Air cooling efficiencies continue to improve, and even though liquid cooling is an exciting and revolutionary step in data center cooling, air cooling is still expected to be the predominant method for standard compute and networking workloads for the time being.
In addition, many liquid cooling solutions today still have an air-cooled component. For example, direct-to-chip liquid cooling is typically designed to cool newer generations of CPUs and GPUs (and sometimes memory), which are mostly associated with compute equipment. Other heat-generating components of a server are still cooled with air, and the networking and storage equipment paired with compute servers is also likely to be air cooled. So, the reality is that, even as we look to the future of IT equipment, a mix of cooling technologies will be in use, and data centers must have the flexibility to support various workloads, server generations and cooling solutions.
Many of the liquid-cooled deployments we see at Equinix are roughly 80–85% liquid-cooled and 15–20% air-cooled. For perspective, on a 100-kW rack, if 20% is still air cooled, that 20 kW is within the building-level cooling capabilities of many Equinix data centers today. However, if rack densities continue to increase, a combination of liquid-enabled, augmented air cooling technologies may need to be installed to support the air-side requirements as well.
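As a quick illustration of that arithmetic, here's a small Python sketch. The 100-kW rack and the 80–85% liquid split come from the figures above; the per-rack math is illustrative only.

```python
# Residual air-cooled load on a hypothetical 100 kW liquid-cooled rack,
# using the 80-85% liquid / 15-20% air split described above.

RACK_KW = 100

for liquid_fraction in (0.80, 0.85):
    air_kw = RACK_KW * (1 - liquid_fraction)
    print(f"{liquid_fraction:.0%} liquid-cooled -> {air_kw:.0f} kW still rejected to air")

# 80% liquid-cooled -> 20 kW still rejected to air
# 85% liquid-cooled -> 15 kW still rejected to air
```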
At Equinix, we're prepared to augment our server-level air cooling with options like rear door heat exchangers and in-row coolers to handle the residual air-cooled load in ultra-high-density liquid cooling deployments. We may continue to need these options even as we move toward a future that increasingly relies on highly efficient liquid cooling.
Myth 3: Retrofitting data centers to support liquid cooling is impossible.
While the industry faces new challenges in adapting existing data centers to newer cooling technologies, data center providers are already at work implementing changes to support server-level liquid cooling. Existing facilities with a chilled water system can be retrofitted to support liquid cooling technologies.
Retrofitting existing data centers to add liquid-cooling capabilities requires some changes, such as extending facility piping and adding more chiller infrastructure as well as the power to support added density. It also necessitates rigorous policies and procedures for managing liquid within the data center halls. After all, historically, data centers have been mostly liquid-free around customer deployments. Customers expect the same performance and risk mitigation regardless of what type of cooling they employ. At Equinix, we’re working with our operations teams on updated policies and procedures to ensure that introducing liquid cooling doesn’t increase the risk profile for any customers in our data centers.
Retrofitting existing sites isn't impossible, but it takes significant engineering and expertise to consider the power, cooling and risk management profile of a given facility. Given this, it's important to take a reliable partner along with you on the journey as you explore moving to higher-density workloads and liquid cooling technologies.
Explore liquid cooling at Equinix
Organizations come to Equinix with varied cooling requirements for their IT infrastructure, and we're committed to providing flexible, innovative solutions in Equinix International Business Exchange™ (IBX®) data centers to meet each customer's cooling needs. We're ready to implement augmented air cooling options to support high-density air-cooled cabinets, as well as liquid cooling infrastructure to support direct-to-chip equipment. We offer liquid delivery both to the cage and to the cabinet, and we have trusted experts who can help with liquid cooling solution design.
Learn more about how Equinix is rolling out liquid cooling in our data centers in the IDC paper Equinix Advances Private AI Infrastructure and Liquid Cooling Technologies.