
Data Center Cooling Continues to Evolve for Efficiency and Density

Liquid cooling offers efficiency and capacity benefits, but don’t expect traditional air-cooling methods to go away anytime soon

Matthew O'Gorman
Greg Metcalf

The debate about how data center operators should design cooling systems has been raging for decades now. Today, as businesses expect their data centers to keep up with growing workload density while also improving efficiency, this question feels timelier than ever. The truth is that there’s no one right way to design a data center, and new technologies only serve to drive home that point. Over the years, different cooling technologies and designs for data centers have evolved to meet our changing needs, and you can track that evolution on the pages of this very blog:

The distant past: Slab floors challenge raised floors

Way back in 2011, we covered the early debates between proponents of raised floor and slab floor data center designs. At that point, raised floor/underfloor cooling had been the established default for years, while slab floor/overhead cooling was still relatively new.

Even back then, we recognized there were factors in play other than just efficiency. There were tradeoffs around flexibility, cost and durability, all of which led us to conclude there was no one right answer to the question of raised floor versus slab floor for the cabinet densities of the time.

This was especially true for a global colocation provider like Equinix. Just as we do today, we had to meet the unique needs of many different customers in many different locations worldwide. Although we were an early adopter of slab floor designs—we’ve been using them since our founding 25 years ago—we have never agreed with the idea that raised floor data centers are obsolete. Even after all these years, we continue to use both raised floor and slab floor to meet the unique needs of different facilities and customers.

Yesterday: Fan walls enter the picture

In 2016, we revisited the topic of raised floor versus slab floor, but we added a third option: the fan wall. Rather than using underfloor or overhead cooling, a fan wall builds the cooling system into the walls of the data hall or the data center perimeter. Air cooling units are positioned on the perimeter wall and use a plenum space to distribute cool air to the data hall, with fans flooding the cold aisles. Once the air passes through the racks, the exhaust is contained within the hot aisles and either funneled back to the cooling coils or vented out of the data center, depending on the specific technology used.

At that time, many people viewed the fan wall approach as a new technology that could increase efficiency while also addressing issues like noise and maintenance costs. However, we argued that fan walls were still too new to evaluate properly in terms of durability and long-term potential.

This position turned out to be prescient: The fan walls that many in the industry were considering back in 2016 were based on wall-mounted air-handling units (AHUs). These wall-mounted AHUs have ultimately proven not to be the long-term solution that many hoped they might be. One reason is that they can't match the flexibility of newer chilled-water cooling designs. Unlike wall-mounted AHUs that rely on air-to-air heat exchange, chilled-water distribution lets data center operators deploy cooling locally, right where the heat demand is within the data hall, and support higher cabinet densities.

Today: Air cooling efficiency continues to grow

In the years since we last explored this topic, we've seen both raised floor and slab floor designs continue to cool data centers successfully. We've also seen new developments that have made air cooling more efficient overall, keeping pace with the growing density of IT workloads.

One example is our own Cool Array technology, a new take on the fan wall design. Cool Array is used as part of an on-slab, cold-flooded/hot-aisle containment scheme. By cutting the footprint and fan power needed to cool very dense air-cooled workloads, it helps customers achieve industry-leading power usage effectiveness (PUE).
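For readers less familiar with the metric, PUE is the ratio of total facility power to the power delivered to IT equipment; the closer it is to 1.0, the less power is lost to cooling and other overhead. Here's a minimal sketch of the calculation, using purely illustrative numbers (not Equinix figures):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt goes to IT equipment; real-world
    values are higher because of cooling, power distribution losses, etc.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative example only: a 1,000 kW IT load plus 300 kW of cooling
# and other overhead yields a PUE of 1.3.
print(pue(total_facility_kw=1_300, it_equipment_kw=1_000))  # 1.3
```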

Equinix is now using Cool Array technology in 30+ operating or under-construction Equinix IBX® colocation data centers, including our SG5 data center in Singapore. This demonstrates that the technology can meet even the unique challenges of deploying digital infrastructure in Singapore, including cooling efficiently in a tropical climate and operating in a densely populated city where space and power constraints are a frequent concern.

[Image: Cool Array at Equinix SG5]

Tomorrow: Liquid cooling deployment continues

In the past, choosing the right design for a data center only required comparing different varieties of air cooling. All these varieties still have a role to play in data centers today, and they’ve all experienced efficiency improvements that have helped them keep up with the latest challenges facing the industry.

At the end of the day, the various methods of air cooling are not fundamentally different from one another. The next development in cooling efficiency—liquid cooling—represents something new and different altogether. It’s revolutionary, not evolutionary.

Liquid cooling could completely redefine the density capabilities of data centers, for the simple reason that liquid is significantly better at transferring heat than air. While it would be wrong to suggest that high-density workloads can't be handled by air cooling alone, the industry as a whole will shift toward greater liquid cooling adoption in the years to come.
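To put rough numbers on that claim, here's a back-of-the-envelope comparison of how much heat a given volume of water versus air can carry per degree of temperature rise, using textbook properties at room temperature. This is an approximation; real coolants and operating conditions vary:

```python
# Back-of-the-envelope: heat carried per cubic meter of coolant per degree
# of temperature rise (volumetric heat capacity = specific heat * density).
# Textbook values at roughly 25 C; actual operating conditions will differ.
water_cp, water_rho = 4186.0, 997.0   # J/(kg*K), kg/m^3
air_cp, air_rho = 1005.0, 1.204       # J/(kg*K), kg/m^3

water_vol_heat = water_cp * water_rho  # ~4.17e6 J/(m^3*K)
air_vol_heat = air_cp * air_rho        # ~1.21e3 J/(m^3*K)

print(f"Water carries ~{water_vol_heat / air_vol_heat:,.0f}x more heat "
      "per unit volume per degree than air")  # ~3,400x
```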

Liquid cooling offers dual benefits

Liquid cooling systems don't have to dedicate power to running fans the way air-cooled systems do. For this reason alone, liquid cooling can offer two distinct benefits:

  • Each cabinet dedicates less power to running server fans, leaving more power available for compute.
  • The total power dedicated to fans across the data hall is lower, leading to PUE improvements.

Imagine you have a 40-kW cabinet deployed in a data center. In an air-cooled system, up to 30% of the energy you feed into that cabinet might go to powering the server fans. Because server fans are metered as part of the IT load, that overhead wouldn't even be apparent in PUE calculations. This means the actual compute capacity of that cabinet would be only about 28 kW. In contrast, the same cabinet supported by a liquid cooling system could dedicate as much as 39 kW of its power to compute workloads. In short, liquid cooling allows you to do more with the same power.
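The arithmetic behind that comparison is simple enough to sketch out. The fan fractions below mirror the figures in the text (30% for air cooling; the liquid-cooled fraction is back-solved from the 39 kW figure); actual values vary by hardware:

```python
def usable_compute_kw(cabinet_kw: float, fan_fraction: float) -> float:
    """Power left for compute after server fans take their share."""
    return cabinet_kw * (1.0 - fan_fraction)

CABINET_KW = 40.0

# Air cooling: up to ~30% of cabinet power spent on server fans.
air_cooled = usable_compute_kw(CABINET_KW, fan_fraction=0.30)      # 28.0 kW

# Liquid cooling: ~2.5% assumed, back-solved from the 39 kW in the text.
liquid_cooled = usable_compute_kw(CABINET_KW, fan_fraction=0.025)  # 39.0 kW

print(f"Air-cooled compute:    {air_cooled:.1f} kW")
print(f"Liquid-cooled compute: {liquid_cooled:.1f} kW")
```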

What’s next for liquid cooling?

Liquid cooling is already upon us, but the rollout won't happen overnight. What's important is that data center operators maintain flexibility by designing facilities that can accommodate both air-cooled and liquid-cooled cabinets interchangeably. That way, when they need to deploy liquid cooling, they're ready to do so quickly, without significant retrofitting of existing facilities.

Just like air cooling, liquid cooling has its own variety of different approaches, including augmented air cooling, immersion cooling and direct-to-chip cooling. (Our recent liquid cooling blog post explores these approaches in detail.) Liquid cooling will also continue to evolve, with the goal of being able to run workloads warmer—and therefore allowing facilities to be more efficient. The ultimate goal we’re working toward is chiller-free liquid cooling.

Equinix is helping define the future of liquid cooling by supporting a number of different industry initiatives. One example is working to establish a standard coolant temperature for durable data center designs as part of an Open Compute Project (OCP) working group. This initiative identified 30°C as a coolant temperature that supports operating efficiency for today’s data centers while also ensuring that cooling designs will remain viable for future generations of ever-more powerful chips. Watch the video below to see a panel discussion from the OCP Global Summit featuring My Truong, Equinix Field CTO.

In addition, we’ve supported the development of the second version of the Open19 System Specification (Open19 V2), as part of the Linux Foundation Sustainable Scalable Infrastructure Alliance. This specification aims to ease the deployment friction of liquid cooling by defining a new open, industry-standard form factor for liquid-cooled racks and hardware. Access the Open19 V2 Specification on GitLab.

For the time being, we’ll continue to deploy a variety of different air-cooling and liquid-cooling methods to meet the needs of different workloads running in different locations. Rather than resolving the debates about the best way to design a data center for cooling efficiency, the advent of liquid cooling seems to have only further complicated them.

To learn more about how data centers will continue to evolve to become cleaner and more efficient—including but not limited to their cooling systems—read our white paper Data Centers of the Future.
