TL;DR
- Liquid cooling technology enables data centers to support high-density AI and HPC workloads.
- Three liquid cooling approaches (augmented air, immersion and direct-to-chip) offer varying levels of server modifications and data center infrastructure changes.
- Direct-to-chip liquid cooling leads adoption with comprehensive OEM solutions that fit standard cabinet footprints while supporting next-gen computing demands.
Editor’s Note: This blog was originally published in September 2023. It was updated in January 2026 to include the latest information.
Enterprises now have an unprecedented amount of data at their disposal. It is perhaps the most valuable currency of the digital economy. In turn, artificial intelligence (AI) technologies have exploded as powerful tools for processing that data to produce insights and drive business outcomes. Companies have made significant investments in AI readiness to gain competitive advantage, reduce costs, grow their business and improve efficiency. As evidenced by “AI-first” companies who rely on efficient data architecture and high-performance computing (HPC) systems, these technologies have revolutionized business strategies and priorities.
Computer hardware and chip manufacturers have kept pace with the need for data-driven competitiveness by continually improving their hardware for higher performance. AI model training requires a lot of processing power, and we have extraordinary processing capabilities today compared to just a few years ago. Moore’s Law refers to the historical trend of transistor density doubling roughly every two years, enabling more computing power in smaller footprints. While this trend has driven efficiency and performance gains, it is slowing due to physical and economic limits. As power consumption and heat generation rise with higher performance demands, data centers are transforming to adopt advanced cooling strategies to manage increasing power densities.
The data center industry is rapidly embracing liquid cooling technology to support high-density workloads. While air cooling has historically been sufficient for most workloads, companies are now exploring liquid cooling for its ability to transfer heat more efficiently than air. In fact, “compared to air, water is more than 23 times better at conducting heat (thermal conductivity) and can hold over 3,000 times more heat by volume (thermal capacity).”[1] As chip manufacturers incorporate liquid cooling into their next-generation designs, data center operators that lack these capabilities will be left behind.
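The quoted figures can be sanity-checked with textbook property values. The following sketch uses approximate room-temperature properties for water and air (assumed here for illustration, not taken from the cited source; exact values vary with temperature and pressure):

```python
# Back-of-envelope check of the water-vs-air comparison above,
# using approximate room-temperature property values (illustrative
# assumptions, not figures from the cited source).

# Thermal conductivity, W/(m*K)
k_water = 0.60
k_air = 0.026

# Volumetric heat capacity = density * specific heat, J/(m^3*K)
vhc_water = 997 * 4186    # ~4.17e6 J/(m^3*K)
vhc_air = 1.204 * 1005    # ~1.21e3 J/(m^3*K)

print(f"Conductivity ratio: {k_water / k_air:.0f}x")                 # ~23x
print(f"Heat capacity ratio: {vhc_water / vhc_air:.0f}x")            # ~3,400x
```

The ratios land at roughly 23x and over 3,000x, consistent with the comparison quoted above.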
Liquid cooling can mean different things to different organizations, and companies commonly serve different use cases with a combination of cooling technologies. The three most common approaches that our customers and partners have been leveraging to enable more efficient cooling closer to the rack are (1) augmented air cooling, (2) immersion cooling and (3) direct-to-chip liquid cooling. In this blog post, we’ll describe each approach, discuss things to consider before adopting them and look at the changes required from servers and data centers.
Augmented air cooling
Standard air-cooling technologies in the data center already employ some chilled water to function. For example, computer room air handlers (CRAHs) have a chilled water coil inside. The augmented air approach is about bringing that existing technology closer to the rack and therefore closer to the heat source. A rear-door heat exchanger (RDHx) is one way to achieve this. Chilled liquid runs through a coil in the rear door of the rack; that coil captures the heat from the equipment, delivering cool air back to the data center. In-row cooling (IRC) is another method. Coolers are placed between racks; hot exhaust air flows across the chilled water coil in the unit and is cooled before returning to the cold aisle. Technically, IRC and RDHx aren’t true liquid cooling because the chip at the server level is still air cooled, but these approaches enable greater air-cooling effectiveness in the 25 to 45 kVA/cabinet range.
Considerations
For many companies, augmented air cooling is a great first step since it involves relatively simple changes in the environment. With minimal disruption, these technologies can enable greater power density in the data center and allow more power-hungry hardware to be packed into a smaller space. However, as facility water temperatures increase to support more efficient data center operations, augmented air cooling is becoming less effective. As high-performance data centers have evolved, high-density workloads can now often be supported without the need for additional air-cooling infrastructure.
What changes are required?
No server-level changes are required when implementing RDHx and IRC. Typically, the data center extends piping infrastructure to the rack or in-row cooling unit. When implementing any of these approaches, it’s important to work with your facility provider to determine the best option and ensure compatibility.
Immersion cooling
Immersion cooling is exactly what it sounds like: Servers are immersed in a large vat of technical cooling fluid, like a big bathtub. In single-phase immersion cooling, the fluid stays in a liquid state. In two-phase immersion cooling, it changes to gas when it draws heat from the computer chips and then returns to liquid within the cooling loop.
Considerations
While immersion cooling has allowed organizations to achieve high power densities within the data center, it requires the most substantial physical changes to server technology and data center architecture. Because it’s a radical departure from traditional methods of deploying IT equipment, immersion cooling can have significant upfront costs and considerations, and we’re thus seeing far less adoption than with other liquid cooling approaches. Immersion typically involves large, heavy tubs of liquid that take up about three cabinet spaces. Depending on the approach, it can be challenging and messy to remove servers from the immersion container, so this cooling method may not be suitable for applications where frequent server moves, adds and changes are required. We highly recommend working closely with your immersion vendor, OEMs and data center provider if you’re contemplating a deployment.
What changes are required?
For immersion cooling, both server and data center changes are needed:
- Today, there are some servers built specifically for immersion, but others have to be retrofitted. If you’re retrofitting a server, work with your provider to ensure components like plastics, tapes and the optics used for networking are compatible with the fluid.
- Some single-phase immersion cooling systems integrate a coolant distribution unit (CDU)—essentially a pump that circulates the working fluid and controls the liquid temperature. The CDU connects to the facility water feed, pushing the heat from the tub out into the facility.
- Since servers are removed from immersion tanks vertically, you might need to implement infrastructure like cranes to assist. The data center also needs to manage the fluid and maintain its stability, preventing spills, evaporation and precipitation into equipment over time.
Direct-to-chip liquid cooling
Direct-to-chip liquid cooling (DLC) has been widely adopted by hardware manufacturers in recent years, making it by far the most prevalent liquid cooling solution out there. In direct-to-chip liquid cooling, a cold plate sits on top of the chip inside the server. The cold plate has liquid supply and return channels, allowing technical cooling fluid to run through the plate and draw heat away from the chip. As with immersion cooling, direct-to-chip can be single-phase or two-phase, depending on whether the cooling fluid changes phase during the heat removal process.
Considerations
Today, there are numerous OEMs offering comprehensive rack-scale DLC solutions. Because DLC involves an interior augmentation of the IT equipment with minimal changes to the server exterior, DLC-enabled server racks can be installed in a standard cabinet footprint. However, DLC still requires architectural changes and additional equipment to deliver liquid to the cabinet and distribute it to the individual servers.
Figure 1: Direct-to-chip liquid cooling at an Equinix facility
What changes are required?
Direct-to-chip liquid cooling requires some server and data center changes:
- On the server side, if you’re retrofitting a traditional air-cooled server for DLC, a cold plate must be fitted in place of the heat sink, with piping that runs through the inside of the server to ports accessible from the outside.
- A CDU is typically implemented to control liquid temperatures and flow pressure to the cold plate. CDUs can come in both floor mounted (in-row) and rack mounted (in-rack) configurations.
- In the rack itself, there’s typically a manifold, a liquid distribution unit that delivers cooling fluid to each server in the rack.
- You also need additional power strips for the increase in power density. Selecting 415V 3-phase power delivery can ease deployment pains. With new racks designed for liquid cooling, such as the ORv3 rack design, power may also be distributed through power shelves and an in-rack vertical bus bar.
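To give a rough sense of what the CDU and manifold must deliver, the required coolant flow for a given rack heat load follows from Q = ṁ·c_p·ΔT. The 100 kW load and 10 K temperature rise below are illustrative assumptions, not vendor figures; actual flow rates come from the OEM’s rack-scale specification:

```python
# Rough coolant flow sizing for a DLC rack: Q = m_dot * c_p * delta_T.
# Load and temperature rise are illustrative assumptions only.

heat_load_w = 100_000   # rack heat load captured by cold plates, W (assumed)
cp_water = 4186         # specific heat of water, J/(kg*K)
delta_t_k = 10          # coolant temperature rise, supply to return, K (assumed)
density = 997           # water density, kg/m^3

m_dot = heat_load_w / (cp_water * delta_t_k)     # mass flow, kg/s
flow_lpm = m_dot / density * 1000 * 60           # volumetric flow, L/min

print(f"Mass flow: {m_dot:.2f} kg/s")            # ~2.39 kg/s
print(f"Volumetric flow: {flow_lpm:.0f} L/min")  # ~144 L/min
```

A tighter allowable temperature rise or a higher rack load scales the required flow proportionally, which is one reason CDU and manifold capacity must be matched to the rack design.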
With a growing number of direct-to-chip liquid cooled solutions on the market, it’s often no longer necessary to build your own DLC solution. You can work with your OEM on identifying an integrated rack-scale solution and focus on finding a data center that can support the requirements.
Innovating in the data center for high-compute business solutions
There are many reasons why companies might choose one liquid cooling option over another. Usually, the needs of the specific workload will drive the architectural decisions about which servers to select and which vendors to partner with. Organizations also need the right data center partner who can accommodate new technologies at scale and anticipate future growth and innovation.
Equinix has been working with our customers and partners on cooling innovations for many years. From financial services providers to commercial retailers to digital media companies, many of our customers started with high-density air cooling, saw success with it and are now advancing into direct-to-chip liquid cooling. We have deployed liquid cooling in data centers across the globe to support wide-ranging use cases, from scientific discovery to financial technology innovation.[2]
Governments around the world are investing in AI infrastructure and exploring advancements in liquid cooling to support HPC technologies. For instance, the U.S. is now considering legislation to back liquid cooling research and development for AI data centers.[3] A recent growth estimate from JLL suggests that “we are in the midst of an infrastructure investment supercycle,” where AI could represent half of data center workloads by 2030 as 100GW of new capacity is developed.[4] As more organizations gravitate to liquid cooling to support AI and HPC workloads, Equinix will continue to innovate, co-create solutions and invest in technologies that optimize data center efficiency. As part of our “Build Bolder” strategy, our new data centers are natively designed to help customers harness state-of-the-art cooling solutions from the beginning. And we’re continuing to explore and test the next wave of liquid cooling technology in our Co-Innovation Facility in Ashburn, Virginia.
More than 10,000 companies rely on Equinix to help them enter new markets, scale their business and optimize operational efficiencies. We’re excited about the transformative opportunities that AI and HPC represent for the digital economy, and we’ll continue to evolve our facilities to support liquid cooling at scale.
Learn more about the advanced cooling technologies available at Equinix by downloading our solution brief, Build liquid-cooled infrastructure at global scale.
[1] Direct liquid cooling system challenges in data centers, Data Center Dynamics, March 25, 2025.
[2] Block Becomes First Company in North America to Deploy the Latest NVIDIA GB200 Systems for Frontier Models, March 12, 2025.
[3] Senators Back Liquid Cooling for AI Data Centers to Curb Water Usage and Costs, MeriTalk, November 26, 2025.