Chill Out with Liquid Cooling: Top Use Cases for Next-Gen Workloads

As compute-intensive workloads increase, liquid cooling can improve efficiency for wide-ranging applications

Alex Timofeyev

Demand for liquid cooling technology is growing quickly thanks to its ability to support a variety of cutting-edge, compute-intensive workloads. In fact, more than a third of enterprise data centers are expected to employ some form of liquid cooling by 2026, according to a recent survey of IT professionals.[1] That's nearly double the number who reported using it in early 2024.

With the rise of high-performance computing (HPC) and AI, the industry is seeing increased demand for compute power, along with denser data center deployments that draw more power and emit more heat. Traditional air cooling doesn't have the capacity to handle the heat these workloads generate. Liquid cooling, by contrast, makes it possible to put more compute power at your disposal while operating more efficiently.

As enterprises adopt next-gen servers, CPUs and GPU accelerators for a wide variety of applications, they're exploring liquid cooling to improve cooling efficiency for compute-intensive workloads. In many cases, high-performance computing hardware ships with liquid cooling capability already built in, and data centers, enterprises and a variety of service providers are adopting the technology to power and cool these highly performant CPUs and GPUs. Use cases range from big-data analytics to user-centric edge applications. Let's take a closer look at some examples, and at where you might put the infrastructure to support them.

Latency-dependent liquid cooling use cases

Liquid cooling enables companies to use compute power more efficiently, so the most common use cases are the applications with the greatest compute intensity. But processing massive amounts of data also requires predictable, low-latency, high-throughput access to that data. To distinguish among compute-intensive liquid cooling use cases, we first need to know what data each workload must access and where that data lives.

Compute-intensive workloads at the core

Some compute-intensive, latency-sensitive use cases should be placed on your core IT infrastructure. These are the use cases that require seamless access to data lakes and data transfer across hybrid cloud architectures to enhance overall performance. Examples might include AI model fine-tuning, multi-vector simulations or HPC clusters that depend on low-latency access to both cloud data and global networks. Because such applications require high-throughput, low-latency access to data, clouds and networks, you should put them close to your data storage, cloud and network ecosystem. A cloud-dense data center with global carrier access is ideal for these use cases, and liquid cooling can then be employed to support the required compute power.

Compute-intensive workloads at the edge

Other compute-intensive use cases need to be brought to the edge for the lowest-latency access to data or users in edge locations. These applications typically focus on delivering exceptional user experiences and leveraging real-time analytics from edge data. Autonomous vehicles, for instance, rely on IoT sensor data and traffic data from edge locations to make split-second decisions. AI inferencing is another example: For AI applications like predictive maintenance in manufacturing or AI-powered healthcare devices, the inference engine needs to run close to the edge location, often in a data center within the same metro.

Ecosystem-dependent compute-intensive workloads

There are also some compute-intensive workloads that require connectivity to business ecosystem partners. These ecosystems typically involve multiple enterprises that need to share data seamlessly and quickly, so the workloads themselves need to be placed near those ecosystems for the lowest latency. For example, in financial services, use cases such as high-frequency trading, payment processing and other banking applications require financial data to be shared across companies. Likewise, the airline industry and the advertising sector rely on ecosystems to deliver real-time analytics and insights. Liquid cooling can support these compute- and data-heavy workloads in the locations where those ecosystems already exist.

Location-agnostic liquid cooling use cases

Not every compute-intensive workload requires low-latency throughput for performant data access. For example, both large language model (LLM) training and some HPC workloads require a lot of compute power as well as access to data, but that access doesn't always demand ultra-low-latency throughput to an external data source. Often, the latency sensitivity and throughput requirements for these applications are within the compute environment itself: high-bandwidth, low-latency interconnection between servers and compute resources creates a sort of server mesh that enables multiple GPUs and CPUs to work together on a computational problem.

These environments benefit greatly from liquid cooling because it allows power-intensive GPUs and CPUs, which generate intense heat, to be placed in close proximity to one another, reducing the latency between computing resources and making HPC systems more efficient.
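To make the "server mesh" idea concrete, here's a minimal sketch, not taken from this post, of how a framework like PyTorch coordinates multiple GPUs in one of these dense compute environments. It assumes PyTorch with the NCCL backend and a launch via torchrun; the script name, model and dimensions are purely illustrative.

```python
# Illustrative sketch only: multiple GPU processes training one model
# together with PyTorch DistributedDataParallel (DDP).
# Launch (hypothetical): torchrun --nproc_per_node=8 train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each GPU process.
    # NCCL carries the inter-GPU traffic over the cluster interconnect.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A toy model; real LLM training distributes a far larger network.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()   # gradients are all-reduced across all GPUs here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Every backward pass triggers an all-reduce of gradients across all participating GPUs. That constant east-west traffic between servers is exactly why packing hot, power-dense accelerators close together, which liquid cooling makes feasible, pays off in lower inter-GPU latency.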

When location and proximity to data are not priorities, enterprises typically look to place power-intensive liquid-cooled infrastructure in locations with the least expensive, most reliable access to renewable energy.

Deploy liquid cooling infrastructure wherever you need it

At Equinix, we're seeing enterprise customers explore liquid cooling for a wide variety of business use cases. Service providers are also looking to liquid cooling to enable the next generation of data-intensive services: AI-as-a-Service solutions are compute- and power-intensive and benefit greatly from the efficiency gains liquid cooling provides. Hardware and chip manufacturers are deploying liquid cooling to test and innovate on their products. Telecommunications companies and network operators, as well as security services companies, are also exploring the technology as they work to deploy AI within their services.

Platform Equinix® is a great place to deploy liquid-cooled, compute-intensive workloads with high-throughput connectivity across clouds, the edge, business ecosystems and network backbones. With 250+ data centers around the world, Equinix can support such workloads at the core and edge locations you need. Our distributed footprint of data centers enables low-latency throughput to your edge devices and users. And we're the platform where business ecosystems interact, including 3000+ cloud and IT service providers, 2000+ network service providers and 5000+ enterprises. We support advanced liquid cooling technologies, like direct-to-chip, in International Business Exchange™ (IBX®) data centers worldwide. Equinix is equipped to support your next-generation workloads on liquid-cooled servers.

To learn more about liquid cooling at Equinix, download the solution brief.


[1] Tobias Mann, "More than a third of enterprise datacenters expect to deploy liquid cooling by 2026," The Register, April 22, 2024.
