TL;DR
- AI workloads drive power density from 5-10 kW/rack to 100 kW/rack, requiring liquid cooling systems to manage heat from GPUs up to 1.2 kW each.
- Sustainability remains in focus: renewable energy coverage (96% across the Equinix global portfolio in 2024, with a goal of 100% by 2030) and operational efficiency improvements help power-dense AI workloads align with enterprise sustainability goals rather than working against them.
- AI-ready data centers are purpose-built for AI hardware and interconnected across global markets, putting workloads close to end users, data sources and ecosystem partners for low-latency performance.
Business leaders are working to implement AI strategies that help their organizations operate smarter and more efficiently. But even the people tasked with capturing the business value of AI don’t always understand the technical elements enabling that value.
As impressive as today’s machine learning algorithms are, they’re not magic. They’re built upon real hardware running inside real data centers. In fact, many organizations are facing an infrastructure mismatch: The data centers that served them well in the past haven’t kept up with the demands of emerging AI workloads.
Enterprises need high-performance data centers designed with AI in mind. Let’s look at the factors that separate AI-ready data centers from conventional data centers.
Data centers are evolving for higher power density
Wider adoption of AI is driving higher power density in data centers. Power density is a measure of how much power is used in a given space. Not only do GPUs use more energy than traditional hardware on a per-unit basis, but they also need to be packed closer together to minimize latency. Therefore, data center operators use much more power within the same physical rack footprint to support GPUs.
It’s remarkable just how quickly power density per rack has increased in the data center. Up until a few years ago, 5–10 kW/rack was standard. Now, we’re commonly seeing density as high as 100 kW/rack. This rapid shift has impacted many of the infrastructure elements found in data centers, starting with cooling systems.
Liquid cooling
This new 100 kW/rack trend is driven by increasingly dense generations of GPUs. We’re now seeing GPUs that draw up to 1.2 kW each, meaning a single processor can account for roughly one-quarter of the power an entire legacy rack once consumed. In addition, more GPUs are being packed into a single rack footprint to shorten the connections between them and support more complex models. All that density in a single rack produces far more heat than legacy racks, so these racks need a more powerful cooling solution. This is where liquid cooling comes into play.
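To make that arithmetic concrete, here’s a minimal sketch using the round numbers cited above. The per-GPU draw, rack budget and legacy baseline are this article’s illustrative figures, not vendor specifications for any particular system.

```python
# Back-of-the-envelope rack power arithmetic, using the round numbers cited
# above. These are illustrative figures, not vendor specifications.

GPU_POWER_KW = 1.2            # high-end accelerator, per the figures above
AI_RACK_BUDGET_KW = 100.0     # modern AI rack density
LEGACY_RACK_KW_LOW = 5.0      # typical rack density a few years ago
LEGACY_RACK_KW_HIGH = 10.0

# One GPU versus an entire legacy rack
print(f"One {GPU_POWER_KW} kW GPU equals "
      f"{GPU_POWER_KW / LEGACY_RACK_KW_HIGH:.0%}-"
      f"{GPU_POWER_KW / LEGACY_RACK_KW_LOW:.0%} "
      f"of an entire legacy rack's power budget")

# Upper bound on GPUs per 100 kW rack, ignoring CPUs, NICs, fans and storage
max_gpus = int(AI_RACK_BUDGET_KW // GPU_POWER_KW)
print(f"Power budget alone would allow up to {max_gpus} GPUs per rack; "
      f"real racks hold fewer once host CPUs, networking and cooling overhead count")
```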
Since liquid is much more efficient at moving heat than air, new cooling methods that use water-based solutions or refrigerants enable much higher power density than traditional air-cooling methods. The rollout of liquid cooling in data centers will be an essential part of supporting cutting-edge GPU workloads and enabling emerging applications that leverage AI.
However, liquid cooling isn’t a cure-all for the density challenges facing businesses, and it won’t completely replace air cooling. Today, even very dense racks still include an air-cooled component. That’s because enterprises must support various components within their AI stack, and each of those components will have different cooling requirements.
For instance, today’s GPU racks are mostly liquid cooled, but still use a small percentage of air cooling. The 100 kW/rack systems mentioned earlier may have an 80/20 split between liquid cooling and air cooling. In this case, the air-cooled component alone would consume 20 kW of power—several times more than the entire rack would have consumed just a few years ago. On the other hand, storage and networking racks that supplement the GPU compute workloads are still (for the time being) 100% air cooled. Organizations need to think about how to integrate liquid cooling while continuing to account for high-density air-cooled loads.
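As a rough illustration of that split (the 80/20 ratio and 100 kW figure are the approximate numbers from this article, not a fixed standard), the air-cooled share alone already rivals several legacy racks:

```python
# Illustrative split of a 100 kW AI rack between liquid and air cooling,
# using the approximate 80/20 ratio mentioned above.

RACK_KW = 100.0
LIQUID_SHARE = 0.80

liquid_kw = RACK_KW * LIQUID_SHARE       # heat removed by the liquid loop
air_kw = RACK_KW * (1 - LIQUID_SHARE)    # heat still removed by air

legacy_rack_kw = 7.5  # midpoint of the 5-10 kW legacy range cited earlier

print(f"Liquid-cooled load: {liquid_kw:.0f} kW")
print(f"Air-cooled load:    {air_kw:.0f} kW "
      f"(~{air_kw / legacy_rack_kw:.1f}x a typical legacy rack)")
```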
As a global colocation provider, Equinix understands the growing demand for liquid cooling, and we’re designing our high-performance data centers specifically with liquid-cooling infrastructure in mind. This allows our customers to deploy AI hardware with ease, something that traditionally would have been quite challenging to do inside their own on-premises data centers.
Energy and sustainability
With all the increasing power and cooling demands of AI, many enterprise leaders need to consider how they can support these power-dense workloads without erasing the progress they’ve made toward their sustainability goals.
The primary place to start is ensuring facility power is backed by renewable energy sources whenever possible. In fact, this has been a top priority for Equinix as we work to support our customers’ high-density workloads. In 2024, we achieved 96% renewable energy coverage across our global data center portfolio. We continue to work toward our goal of 100% coverage by the year 2030. To achieve this, we’re using a multifaceted renewable energy strategy that includes signing power purchase agreements (PPAs) to support new wind and solar projects.
We’ve also set a goal to reduce our Scope 1, 2 and 3 greenhouse gas emissions by 90% by 2040, and we’ve had these goals verified by the Science Based Targets initiative (SBTi). This effort allows customers that leverage the Equinix ecosystem and infrastructure to feel confident that their AI workloads align with their sustainability initiatives, rather than hindering them.
But the focus on sustainable practices doesn’t stop there. AI-ready data centers should also prioritize improved operational efficiency, which has long-term sustainability benefits. At Equinix, we’re pursuing this by phasing in ASHRAE A1 Allowable standards across our data center portfolio. This lets us continue to run our facilities within the A1 Recommended range, but at a slightly higher operating temperature within that range than has been standard practice. Over the long term, this could save significant amounts of operational energy across our global footprint.
In addition, GPU workloads can even accelerate these efficiency efforts through the inherent benefits of liquid cooling. Because liquid transfers heat so effectively, liquid cooling can use warmer supply temperatures than air-cooled systems while still keeping power-dense workloads within their thermal limits. As the share of liquid-cooled workloads in a facility grows, the facility as a whole could operate more efficiently. As long as chip temperature standards remain high, AI technology could allow more efficient operational practices to become commonplace, including the way heat is rejected from the facility.
Water
The impact of AI adoption on water consumption is another important aspect of data center sustainability. Although AI workloads rely on liquid cooling at the server level, the technology itself does not drive significant increases in water consumption. That’s because these systems use a closed-loop circuit connected to a heat exchanger or coolant distribution unit (CDU), so the coolant circulates continuously rather than being consumed.
However, heat still has to go somewhere. After it goes through the CDU, the heat is transferred to a building-level cooling system, which then removes the heat from the facility altogether. At the building level, data center operators choose between two cooling options:
- Evaporative cooling releases heat from the data center in the form of water vapor.
- Air cooling, also known as dry cooling, releases hot air from the data center.
Figure 1: Building-Level Cooling System
Evaporative cooling drives higher water consumption than air cooling, but it also consumes much less energy. A global data center operator like Equinix must weigh the tradeoffs of energy and water consumption on a case-by-case basis. For instance, we avoid using evaporative cooling in water-stressed areas, to ensure that more water remains available for use in the community.
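For a sense of scale, here’s a simplified sketch of why that tradeoff matters. It assumes all rejected heat in an evaporative system is carried away by the latent heat of vaporization of water (about 2.26 MJ/kg) and ignores blowdown, drift and fan or pump energy, so real figures vary by site and climate.

```python
# Rough estimate of water evaporated to reject a given IT heat load with
# evaporative cooling. Simplified: assumes all heat leaves as latent heat
# of vaporization and ignores blowdown, drift and auxiliary loads.

LATENT_HEAT_MJ_PER_KG = 2.26   # water, at typical cooling-tower temperatures
MJ_PER_KWH = 3.6

def water_kg_per_hour(heat_load_kw: float) -> float:
    """Kilograms of water evaporated per hour to reject heat_load_kw of heat."""
    heat_mj_per_hour = heat_load_kw * MJ_PER_KWH
    return heat_mj_per_hour / LATENT_HEAT_MJ_PER_KG

for load_kw in (100, 1_000):   # roughly one AI rack, and a 1 MW data hall
    kg = water_kg_per_hour(load_kw)   # 1 kg of water is about 1 liter
    print(f"{load_kw:>5} kW of heat -> ~{kg:,.0f} L of water evaporated per hour")
```

Dry cooling avoids that water draw entirely but typically needs more fan energy to move the same heat, which is why the choice has to be weighed site by site.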
As mentioned previously, because liquid cooling at the server level is more efficient, increased adoption of AI workloads may allow us to operate at higher temperatures. This could reduce the need for water consumption in evaporative cooling systems and open the door to use dry cooling in more markets. Also, this may provide more opportunities for us to participate in data center heat export projects, where we capture residual heat from our facilities and make it available for heating homes and businesses in the local communities in which we operate.
AI-ready data centers are interconnected data centers
Enterprise leaders increasingly recognize that there’s more to AI than just large core data centers with high GPU capacity. There are different varieties of AI-ready data centers used for different purposes. Distributed AI has become the norm, and enterprises need to capture data from many different sources and support inference in proximity to users at the edge. They’ll also need to connect with an ecosystem of partners to get the data, models and infrastructure they need to drive AI success.
For all these reasons, connectivity is an essential part of what makes data centers AI-ready. High-performance data centers are strategically located near population centers where end users and data sources are likely to be found, thus enabling the low-latency connectivity that inference workloads demand. In addition, these data centers have become digital hubs where ecosystem partners gather and interconnect with one another. This means that enterprises don’t have to choose between having data centers in the right locations to support their different AI workloads and having easy access to their AI ecosystem partners. The right colocation provider can help them meet both these needs.
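A quick way to see why proximity matters: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, so distance alone sets a floor on round-trip latency before any processing happens. The sketch below shows only that physical floor, not end-to-end application latency.

```python
# Physical lower bound on round-trip network latency over optical fiber.
# Ignores routing detours, switching, queuing and server processing time.

SPEED_IN_FIBER_KM_PER_MS = 200.0   # ~2/3 of c, a common rule of thumb

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time for a given one-way fiber distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for km in (50, 500, 5_000):        # metro, regional, intercontinental
    print(f"{km:>6} km one-way -> at least {min_rtt_ms(km):.1f} ms round trip")
```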
Finally, an AI-ready data center needs to provide advanced networking capabilities to keep AI hardware running at its full potential. For instance, GPUs are highly sensitive to latency and are thus designed to be connected point to point. The amount of physical interconnection bandwidth required to make this happen is astronomical. Only facilities that are designed with dedicated overhead or underfloor space will be able to support the sheer volume of fiber that AI clusters demand.
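To put “astronomical” into rough numbers, here’s a hypothetical sizing sketch. The GPU count, NICs per GPU, link speed and fabric tiers are illustrative assumptions for a mid-sized training cluster, not a reference design; actual fiber counts depend heavily on topology and transceiver choices.

```python
# Hypothetical back-of-the-envelope sizing for a GPU cluster's scale-out fabric.
# Every parameter below is an illustrative assumption, not a reference design.

GPUS = 1_024                 # mid-sized training cluster (assumption)
NICS_PER_GPU = 1             # one scale-out NIC per GPU (assumption)
LINK_GBPS = 400              # per-NIC link speed (assumption)
FABRIC_TIERS = 2             # leaf + spine; each tier roughly doubles link count

edge_links = GPUS * NICS_PER_GPU
total_links = edge_links * FABRIC_TIERS          # crude non-blocking approximation
aggregate_tbps = edge_links * LINK_GBPS / 1_000

print(f"Aggregate GPU-facing bandwidth: ~{aggregate_tbps:.0f} Tb/s")
print(f"Optical links to cable and route: ~{total_links:,}")
```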
Equinix IBX® colocation data centers are located in 76 markets worldwide, so you can stand up your AI infrastructure wherever you need it. Also, Equinix is home to 10,000+ customers, including everyone from established cloud providers to emerging AI specialists. When you’re colocated with so many different service providers, it’s easy to find the right partners for your AI strategy and interconnect with them to exchange data quickly and securely.
At Equinix, we believe that GPUs and other advanced hardware are the engines that drive AI forward. High-performance data centers are the engine rooms where hardware can run to its full potential. Without the right hardware and the right data centers to support that hardware, AI stays parked.
Learn how high-performance data centers are driving AI forward: Read the white paper The engine of AI powering innovation at scale.
