3 Trends Driving Liquid Cooling for Data Centers

As IT infrastructure evolves to support more compute power in more locations, liquid cooling is becoming an essential data center technology.

Ted Kawka
Lindsay Schulz

The exponential growth of data and the development of advanced data analytics technologies have led to big changes in IT infrastructure. Infrastructure has grown ever-more powerful, and data centers are packing more computing power into smaller spaces. Even before AI was as popular as it is now, several developments in the marketplace were putting greater demands on data centers: changing hardware, the rise of edge computing, higher-density deployments and the demand for greater efficiency.

As both workloads and hardware have changed, data center operators have been innovating to support this more powerful equipment, which produces more heat per device and thus requires advanced cooling techniques. Liquid cooling, while not a new technology, is now revolutionizing how data centers cool the powerful, high-density hardware that supports emerging technologies. Because liquid cooling techniques, from augmented air to immersion to direct-to-chip, transfer heat more efficiently than air alone, they help address some of the biggest challenges of cooling high-density servers. Today, more server manufacturers are designing liquid cooling capabilities directly into their equipment, and data center operators are adjusting to meet that need.
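To make that efficiency gap concrete, here's a rough back-of-the-envelope sketch comparing the volumetric flow of air versus water needed to carry away the same rack heat load, using the basic relation Q = ρ · V̇ · c_p · ΔT. The 30 kW rack load and 10°C coolant temperature rise are illustrative assumptions, not figures from this article; the fluid properties are typical room-temperature values.

```python
# Back-of-the-envelope comparison: volumetric flow needed to remove a fixed
# heat load with air vs. water, using Q = rho * V_dot * c_p * delta_T.
# The rack load and temperature rise below are illustrative assumptions.

RACK_HEAT_LOAD_W = 30_000   # assumed heat load of a dense rack, in watts
DELTA_T_K = 10.0            # assumed coolant temperature rise, in kelvin

# Approximate fluid properties near room temperature
AIR_DENSITY = 1.2           # kg/m^3
AIR_CP = 1005.0             # J/(kg*K)
WATER_DENSITY = 997.0       # kg/m^3
WATER_CP = 4186.0           # J/(kg*K)

def volumetric_flow_m3_per_s(load_w, rho, cp, delta_t):
    """Volumetric flow rate required to carry away load_w watts at a given delta-T."""
    return load_w / (rho * cp * delta_t)

air_flow = volumetric_flow_m3_per_s(RACK_HEAT_LOAD_W, AIR_DENSITY, AIR_CP, DELTA_T_K)
water_flow = volumetric_flow_m3_per_s(RACK_HEAT_LOAD_W, WATER_DENSITY, WATER_CP, DELTA_T_K)

print(f"Air:   {air_flow:.2f} m^3/s (~{air_flow * 2118.88:.0f} CFM)")
print(f"Water: {water_flow * 60_000:.1f} L/min")
print(f"Water carries ~{air_flow / water_flow:.0f}x more heat per unit volume here")
```

The exact numbers depend on the design, but the several-thousandfold gap in volumetric heat capacity is why a modest coolant loop can do the work of a very large volume of moving air.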

Let’s take a closer look at three trends that are driving greater use of liquid cooling in data centers today:

  • Increased demand for compute-intensive workloads
  • Density and space constraints in data centers
  • The rise of edge computing

Trend 1: Increased demand for compute-intensive workloads

A rising number of organizations are taking advantage of applications like AI and machine learning to improve customer experience, enhance cybersecurity and transform business processes. Deploying these technologies and maximizing their value requires increasingly powerful infrastructure. AI model training, for example, is compute intensive, and the associated hardware has evolved to address the growing demands of AI and other applications.

But long before AI became a buzzword, the industry was moving toward more compute-intensive workloads. The amount of data organizations own has grown exponentially, and we’ve needed increasingly powerful tools and technologies to process that data into insights that drive business outcomes. Computing hardware has been evolving to increase compute capacity for many years.

Processors, for example, have been advancing for decades to deliver more compute power:

  • CPUs: Central processing units (CPUs) have long been the general-purpose workhorses of computing; the first single-chip microprocessors were introduced in the 1970s, and successive generations have steadily increased the compute capacity of machines.
  • Multicore CPUs: Then, in the 2000s, manufacturers began putting multiple processing cores on a single CPU to improve performance while keeping power consumption in check.
  • CPUs plus GPUs: Graphics processing units (GPUs) were introduced to handle graphics rendering tasks, and by the late 2000s they were being put to work on general-purpose computing as well. Thanks to their excellent ability to perform parallel processing, they're now used for a wide range of applications, including machine learning. Today, modern servers combine CPUs and GPUs to offload parallel workloads, and with AI/ML, CPUs and GPUs working in tandem drive up density.

Several other trends also contributed to the rapid rise in computing demand and greater power consumption:

  • Virtualization: Virtualization initiatives were introduced to clean up sprawling hardware environments, increase agility and create an abstraction layer between workloads and the physical hosts they run on, so hardware could be utilized more fully. This enabled IT teams to use more of the compute capacity of their hardware rather than dedicating a piece of hardware to a specific task or application.
  • Increasing security: Security processes like data analysis, encryption and decryption are CPU intensive and require a lot of power. As we build more security into applications, there’s more need for computing power.
  • Containerization: Containerization packages an application's code and all the libraries it needs into a “container” that runs consistently regardless of the underlying environment. Because containers share the host operating system rather than each carrying its own, they're more resource efficient than virtual machines, and they let practitioners drive server utilization even higher. Ultimately, less compute is required to run any single application, but higher utilization increases the power consumed per server.

All of these changes have increased the compute capacity and compute demands of hardware, and in turn the power consumed and heat generated by hardware. Liquid cooling is an important technology for addressing the rising data center temperatures caused by higher-density hardware.

Trend 2: Density and space constraints in data centers

Data center operators have also been innovating to accommodate ever more power-dense hardware. Traditionally, data centers have needed to accommodate a wide range of densities to support a variety of customer technologies. Power-dense cabinets, which generate more heat, would typically be spread out across the data center to distribute that heat and meet cooling requirements. But as the volume of power-dense hardware increases, spreading dense servers across multiple racks becomes impractical, inefficient and costly due to the added lengths of cabling. Furthermore, workloads such as high-performance computing (HPC) and AI require servers to be as close together as possible to reduce latency between compute resources. With liquid cooling, data centers can minimize the space between cabinets and hardware, placing power-dense servers in close proximity to one another. This makes more efficient use of data center space and enables HPC and AI workloads.

And because liquid cooling is more efficient at transferring heat than air, more of the power going to a cabinet can be dedicated to running compute instead of running fans. This effectively adds capacity without consuming additional power or space. Multiplied across the data center, liquid cooling can grow a facility's compute capacity even as densities continue to rise.
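As a rough illustration of that effect, the sketch below assumes server fans consume on the order of 10% of an air-cooled rack's power and that a direct-to-chip liquid loop leaves only a small residual fan load. Both fractions and the 40 kW rack budget are illustrative assumptions, not Equinix figures.

```python
# Rough sketch of how reducing fan power frees rack capacity for compute.
# The fan-power fractions below are illustrative assumptions, not measured figures.

RACK_POWER_BUDGET_W = 40_000        # assumed power available to the rack
FAN_FRACTION_AIR_COOLED = 0.10      # assumed share of server power spent on fans (air-cooled)
FAN_FRACTION_LIQUID_COOLED = 0.02   # assumed residual fan share with direct-to-chip cooling

def compute_power(budget_w, fan_fraction):
    """Power left for CPUs, GPUs and memory after internal fans take their share."""
    return budget_w * (1.0 - fan_fraction)

air_compute = compute_power(RACK_POWER_BUDGET_W, FAN_FRACTION_AIR_COOLED)
liquid_compute = compute_power(RACK_POWER_BUDGET_W, FAN_FRACTION_LIQUID_COOLED)

print(f"Air-cooled rack:    {air_compute / 1000:.1f} kW available for compute")
print(f"Liquid-cooled rack: {liquid_compute / 1000:.1f} kW available for compute")
print(f"Gain: {liquid_compute - air_compute:.0f} W per rack within the same power budget")
```

Even a few reclaimed percentage points per rack add up to meaningful compute capacity when multiplied across an entire facility.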

Trend 3: The rise of edge computing

Another big change in the industry has been the paradigm shift to edge computing. Many enterprises have been moving away from centralized infrastructure toward more distributed models. More data is being generated by end users in edge locations, and businesses have strong incentives to process that data closer to its source to lower latency, improve performance and reduce network backhaul costs. For these reasons, organizations are putting more processing power at the edge. Faster network connectivity has made it more feasible to place compute power in edge locations, but it also creates the need for more efficient power and cooling there.

In the past, liquid cooling was primarily used in high-density deployments centralized in major metros. But as edge computing grows, businesses need more compute power in edge locations, especially in industries like construction, oil and gas, and healthcare, where data storage and compute need to be located near end users. As edge computing continues to expand alongside AI, data centers in edge locations are supporting more compute-dense, power-hungry applications and infrastructure and therefore need more advanced cooling solutions.

Moving into the future with liquid cooling

Several liquid cooling technologies have emerged to address the need for more efficient data center cooling. Equinix has been an innovator in data center design for 25 years, and we’re working with customers across industries today to accommodate their high-density designs and address their cooling needs. We’re collaborating with liquid cooling technology vendors at our Co-Innovation Facility in Ashburn, Virginia. And in 2022, we put liquid cooling into action on our own production servers at one of our Equinix IBX® data centers in New York. As companies deploy more compute-intensive workloads like AI and ML, we will continue to evolve our data center designs to use cooling technologies that deliver efficiency and enable the high-performance deployments that underpin these future workloads.

Read more about how Equinix is evolving data center design in our white paper The Data Center of the Future.
