Top 3 Myths about Data Center Operating Temperatures

Thanks to innovations in data center and hardware design, data centers are trending toward warmer temperatures and greater energy efficiency

Marcus Hopwood

As the pace of digital innovation accelerates, data centers play an ever more critical role in communication and connectivity, digital experiences and economic growth. Demand for data center capacity is increasing because of data growth, AI and other high-performance computing (HPC) applications, and data center power consumption is rising with it. Since 2010, global electricity demand for data centers has roughly doubled.[1] As a result, energy efficiency has become a central focus area for the industry. To operate responsibly, data center providers are exploring ways to achieve the greatest possible energy efficiency while also supporting optimal equipment performance and reliability.

Because cooling accounts for a substantial portion of data center power usage, fine-tuning facility operating temperatures is an important avenue for increasing data center energy efficiency. Both data center operators and technology providers have come up with innovative approaches to managing data center environments more efficiently—whether that involves design strategies like separating hot and cold air flow on the data center floor, introducing cutting-edge liquid cooling or creating more temperature-tolerant equipment. Even small adjustments can make a big difference when applied at scale.
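To make the scale effect concrete, here is a rough back-of-the-envelope sketch in Python. Every input below (the facility's IT load, the share of power spent on cooling and the per-degree savings factor) is a hypothetical placeholder rather than measured Equinix data; actual savings depend on climate, cooling architecture and load.

```python
# Rough back-of-the-envelope estimate of cooling energy saved by raising the
# supply-air setpoint. All inputs are hypothetical placeholders, not measured data.

IT_LOAD_MW = 10.0          # assumed IT load of a large facility
COOLING_FRACTION = 0.35    # assumed cooling power as a fraction of IT load
SAVINGS_PER_DEG_C = 0.03   # assumed ~3% cooling energy saved per +1 degC of setpoint
SETPOINT_INCREASE_C = 2.0  # a modest adjustment
HOURS_PER_YEAR = 8760

cooling_mw = IT_LOAD_MW * COOLING_FRACTION
saved_mw = cooling_mw * SAVINGS_PER_DEG_C * SETPOINT_INCREASE_C
saved_mwh_per_year = saved_mw * HOURS_PER_YEAR

print(f"Estimated cooling power saved: {saved_mw:.2f} MW")
print(f"Estimated annual savings: {saved_mwh_per_year:,.0f} MWh")
```

Even with these placeholder figures, a two-degree adjustment at a single large facility works out to well over a thousand megawatt-hours per year, which is why small changes matter at scale.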

In truth, data center operating temperatures have been slowly rising for decades as industry standards have evolved and equipment has been designed for greater efficiency. Even so, common misconceptions and fears about adjusting data center operating temperatures persist.

Let’s take a closer look at some of these temperature fallacies.

Myth 1: A data center must be kept extremely cold

One of the most common misunderstandings about data center temperatures is the belief that facilities need to be very cold to keep hardware safe. This idea is a relic of the past: Early data centers were kept quite cold because operators believed that colder rooms were safer for delicate hardware.

It’s true that not long ago, the recommended operating temperatures for some hardware systems were relatively cold. For instance, in 2015, an IBM FlashSystem 900 had a recommended operating temperature of 10°C (50°F).[2] Today, a newer IBM FlashSystem 5200 can operate across a much wider range of 5–35°C (41–95°F).[3]

While earlier servers did need to run in colder rooms, the tendency to over-cool data centers also led to high energy costs and energy waste. Both hardware and data center cooling capabilities have evolved considerably over the past few decades. When the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) first published its temperature guidance for data centers in 2004, the recommended range was 20–25°C (68–77°F), with higher allowable temperatures for short time periods. These guidelines have been continually updated as server technology and energy-efficiency priorities have changed. The most recent guidelines have a wider recommended range of 18–27°C (64–81°F), with much wider allowable temperatures for specific periods and classes of equipment.
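As a simple illustration of how these envelopes are typically applied, the sketch below classifies a measured supply-air temperature against the 18–27°C recommended envelope and the wider allowable envelopes commonly published for ASHRAE classes A1–A4. The class limits used here are the commonly cited dry-bulb values, not a substitute for the current edition of the guidelines or a vendor's own specifications.

```python
# Minimal sketch: classify a supply-air temperature against ASHRAE envelopes.
# The recommended envelope (18-27 degC) and per-class allowable limits below are
# the commonly cited dry-bulb values; confirm against the current guidelines.

RECOMMENDED_C = (18.0, 27.0)
ALLOWABLE_C = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}

def classify(temp_c: float, equipment_class: str = "A1") -> str:
    """Describe how a supply-air temperature relates to the ASHRAE envelopes."""
    rec_low, rec_high = RECOMMENDED_C
    if rec_low <= temp_c <= rec_high:
        return "within the recommended envelope"
    allow_low, allow_high = ALLOWABLE_C[equipment_class]
    if allow_low <= temp_c <= allow_high:
        return f"outside recommended, but within the {equipment_class} allowable envelope"
    return f"outside the {equipment_class} allowable envelope"

print(classify(24.0))          # within the recommended envelope
print(classify(30.0, "A2"))    # warmer than recommended, still allowable for A2
print(classify(38.0, "A1"))    # beyond the A1 allowable envelope
```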

It’s clear that average data center operating temperatures have been safely rising for a long time, and facilities no longer need to be extremely cold. This is due, in part, to innovations in data center design for greater cooling efficiency, such as:

  • Removing raised floors to reduce disruption in the flow of cold air to IT equipment
  • Containing hot and cold aisles to reduce the mixing of air
  • Using blanking panels and sealing cabinets to prevent hot air recirculation
  • Placing in-row cooling directly between server racks for more targeted cooling in the hottest spots
  • Deploying liquid cooling to minimize the energy required to cool ultra-dense deployments
  • Improving data center monitoring technologies to allow for real-time adjustments

Myth 2: New AI hardware requires colder air temperatures in data centers

IT hardware has been advancing rapidly for AI and HPC applications. These systems have significantly higher power densities, and consequently higher heat output, than legacy equipment. Some people believe that adding such equipment to a data center will raise air temperatures or cause major temperature fluctuations that make the environment too hot. As a result, they think data center operating temperatures need to be colder to accommodate AI and HPC equipment.

However, the latest, most powerful air-cooled servers on the market can operate at the higher temperatures that are becoming more standard in modern data centers. For example, NVIDIA DGX H100 and H200 servers have an operating temperature range of 5–30°C (41–86°F).[4] Dell PowerEdge XE9680 servers have an operating temperature range of 10–35°C (50–95°F).[5]

These newer systems rely on higher airflow, not necessarily colder air, so data centers need to ensure they can supply the right volume of air to this equipment. Many newer systems are also starting to use liquid cooling, which reduces the strain on air cooling systems.
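To see why airflow volume matters, here is a minimal sketch of the standard sensible-heat calculation for estimating how much airflow is needed to carry away a given IT heat load at a chosen air temperature rise. The 40 kW rack power and 15°C temperature rise in the example are illustrative assumptions, not figures from any specific product.

```python
# Minimal sketch: estimate the airflow needed to remove a given IT heat load,
# using the sensible heat relation Q = m_dot * c_p * dT. Air properties are
# approximate values for typical data center conditions; inputs are illustrative.

AIR_DENSITY = 1.2          # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0 # J/(kg*K)
M3S_TO_CFM = 2118.88       # cubic meters per second -> cubic feet per minute

def required_airflow(heat_load_w: float, delta_t_c: float) -> tuple[float, float]:
    """Return (m^3/s, CFM) of airflow needed to absorb heat_load_w at a delta_t_c rise."""
    m3_per_s = heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)
    return m3_per_s, m3_per_s * M3S_TO_CFM

# Example: a hypothetical 40 kW AI rack with a 15 degC inlet-to-outlet temperature rise.
flow_m3s, flow_cfm = required_airflow(40_000, 15.0)
print(f"~{flow_m3s:.2f} m^3/s (~{flow_cfm:,.0f} CFM)")
```

The point of the calculation is that the required volume scales with the heat load, not with how cold the supply air is: doubling the rack power doubles the airflow needed at the same temperature rise.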

While the fifth edition of the ASHRAE data center temperature guidelines did introduce a specific equipment class with tighter temperature envelopes for some high-density hardware, most manufacturers continue to specify the A1 and A2 ranges, which enable data centers to protect hardware performance while optimizing energy use.

Myth 3: Data centers always operate at the highest end of the allowable operating range

Because allowing warmer temperatures improves overall energy efficiency, some people assume that data centers will always keep the environment at the very top of the allowable range. The reality is that many factors influence data center temperatures, and data center operators aim to stay within the recommended range to leave a buffer for temperature fluctuations.

Most of the time, data centers operate within the ASHRAE recommended temperature range for enterprise-class equipment. It is a range precisely because it gives operators flexibility as they work to maintain a safe, reliable, high-performance environment. During transient events, such as when power systems switch from utility power to generators or during certain kinds of maintenance, operators can use the allowable range for short periods, confident that IT equipment will not be affected.
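As a sketch of how such excursions might be tracked, the snippet below scans a series of supply-air readings and reports how long temperatures sat above the recommended envelope while remaining inside an allowable limit. The sample readings, the 5-minute interval and the 27°C/32°C thresholds (the commonly cited recommended maximum and A1 allowable maximum) are illustrative assumptions, not a description of Equinix's monitoring systems.

```python
# Sketch: summarize how long supply-air temperature spent above the recommended
# envelope but still within an allowable limit, e.g., during a transient event
# such as a utility-to-generator transfer. All readings are illustrative.

RECOMMENDED_MAX_C = 27.0   # upper end of the ASHRAE recommended envelope
ALLOWABLE_MAX_C = 32.0     # commonly cited A1 allowable upper limit

def summarize_excursions(readings_c: list[float], interval_min: int = 5) -> None:
    """Print time spent above the recommended envelope, assuming evenly spaced readings."""
    allowable_excursion = sum(
        1 for t in readings_c if RECOMMENDED_MAX_C < t <= ALLOWABLE_MAX_C
    )
    breach = sum(1 for t in readings_c if t > ALLOWABLE_MAX_C)
    print(f"Above recommended (still allowable): {allowable_excursion * interval_min} min")
    print(f"Above the allowable limit:           {breach * interval_min} min")

# Hypothetical 5-minute readings captured around a utility-to-generator transfer.
summarize_excursions([24.5, 25.0, 27.5, 28.2, 29.0, 27.8, 26.0, 24.8])
```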

To optimize energy usage, data center providers are working on ways to ensure reliability and optimal equipment performance while operating at higher temperatures. Advancements in data center cooling technologies, as well as in data center infrastructure management (DCIM) solutions, are making it possible to fine-tune operating temperatures for greater efficiency.

Balancing hardware optimization and energy efficiency

In 2022, Equinix became the first colocation data center operator to commit to more efficient temperature and humidity standards. We’re slowly and carefully operating select facilities at warmer temperatures to reduce the energy used on air cooling.

Case study: Equinix IBX® data centers in the Netherlands

We’ve been running our facilities in the Netherlands at warmer operating temperatures to align with a national government mandate to maintain IT space supply air temperatures at a minimum of 27°C. We’ve implemented these changes gradually, and it’s been a valuable exercise in identifying energy-efficiency improvements that comply with local regulations while helping customers realize energy savings and CO2 emissions reductions.

Equinix will continue innovating and following best practices to further our environmental commitments. By making small changes to our operating environment, we can improve our efficiency while maintaining conditions that allow for the safe and reliable operation of our customers’ workloads.

You can learn more about how Equinix is addressing energy efficiency in our Sustainability report.

 

[1] Harshit Agrawal, “Data Center Energy Consumption: How Much Energy Did/Do/Will They Eat?” Yale CampusPress, August 4, 2025.

[2] “Thermals in a FlashSystem,” IBM Support.

[3] “IBM FlashSystem 5200: Hardware Guide,” IBM Documentation.

[4] “Introduction to NVIDIA DGX H100/H200 Systems,” NVIDIA Docs.

[5] “Dell PowerEdge XE9680 Technical Guide,” Dell Technologies.
