Wireless Temperature Sensor News

The more you see in the data center, the better

Rack temperature monitoring: The secret to comfortable data center equipment

Servers certainly have some ventilation and self-cooling capabilities, but we would hardly call them warm-blooded. Every 1 degree Fahrenheit increase in ambient temperature yields a 1 degree F increase for the average CPU. In other words, there’s a clear correlation between data center temperature and rack equipment temperature.

When, exactly, does this become a problem? It varies by equipment, but most CPUs are at risk of a meltdown if a server is allowed to operate at temperatures between 86 and 95 degrees F for more than a few minutes.

The majority of data centers aim for lower ambient temperatures, usually in compliance with ASHRAE’s recommended range of 64.4 to 80.6 degrees F (the variance is influenced by factors such as humidity and dew point). This range is well below the CPU point of no return; however, data center temperature in modern high-density facilities is hardly static from rack to rack. Hot spots born of airflow deficiencies and other disruptive conditions can put isolated pieces of critical equipment at risk of overheating.

“Temperature monitoring isn’t just about what’s happening; it’s also about what could happen.”

Furthermore, data center temperature isn’t just about what’s currently happening; it’s also about what could happen. History is full of horror stories about CRAC failures leading to dangerous temperature spikes. And yes, running your servers at higher temperatures is more efficient; it saves money and reduces environmental impact. However, operating closer to the edge means temperatures will rise to dangerous levels much faster in the event of a CRAC failure.

This isn’t to discourage data center managers from running equipment warm. Rather, it’s to encourage them to make sure they have the temperature visibility needed to react quickly should rack temperatures exceed safe thresholds. Uncomfortable data center equipment won’t complain. It will just shut down, and take your critical operations with it.

Let real-time temperature monitoring do the talking

ASHRAE recommends installing a minimum of six temperature sensors per rack: three in the front (at the top, middle, and bottom) and three in the back, in order to monitor air intake and exhaust temperatures. High-density facilities often use more than six sensors per rack to create more precise temperature and airflow models, a practice that is highly recommended, especially for data centers operating at an ambient 80 degrees F.
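The six-sensor layout above can be sketched as a simple data structure. This is an illustrative model only; the rack and sensor names are hypothetical, not part of any vendor API or the ASHRAE guideline itself.

```python
from dataclasses import dataclass

SIDES = ["front", "back"]  # front = intake, back = exhaust
POSITIONS = ["top", "middle", "bottom"]

@dataclass
class Sensor:
    rack_id: str
    side: str      # "front" (intake) or "back" (exhaust)
    position: str  # "top", "middle", or "bottom"

def rack_sensor_layout(rack_id: str) -> list[Sensor]:
    """Return the minimum six-sensor layout for one rack:
    top/middle/bottom on both the intake and exhaust sides."""
    return [Sensor(rack_id, side, pos) for side in SIDES for pos in POSITIONS]

layout = rack_sensor_layout("rack-A01")
assert len(layout) == 6
```

A high-density facility would simply extend `POSITIONS` (or add mid-depth sensors) to build the denser model the article recommends.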

Why? The simple answer is because you can’t discover a hotspot if you can’t see it. Real-time temperature monitoring that’s connected to your data center’s network will notify designated staff via SNMP, SMS or email the second a safe temperature threshold is exceeded.
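The threshold-alerting logic described above is straightforward in principle. Here is a minimal sketch, assuming sensor readings arrive as a name-to-temperature mapping; the function names and the use of the ASHRAE upper bound as the default threshold are assumptions for illustration, and a real deployment would dispatch the resulting messages via SNMP traps, SMS, or email rather than return them.

```python
SAFE_MAX_F = 80.6  # upper end of the ASHRAE recommended range

def check_readings(readings: dict[str, float],
                   threshold: float = SAFE_MAX_F) -> list[str]:
    """Return an alert message for every sensor reading above the threshold."""
    return [
        f"ALERT: {sensor} at {temp:.1f} F exceeds {threshold:.1f} F"
        for sensor, temp in readings.items()
        if temp > threshold
    ]

# Example: one intake sensor has drifted past the safe range.
alerts = check_readings({
    "rack-A01/front/top": 84.2,
    "rack-A01/front/bottom": 72.5,
})
```

Polling this check on a short interval against networked sensors is what turns passive measurement into the real-time notification the article describes.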

And again, the more sensors, the merrier. It’s great to know that you’ll always have a real-time alerting system on your side. It’s even better to be able to look at a computer-generated model powered by many rack sensors, so you can trace the root of the deviation.


Don’t let your servers catch a cold, either

Far fewer data center managers concern themselves with cooler-than-average temperatures, given the amount of heat servers tend to generate. Nevertheless, letting the temperature drop below 65 degrees F becomes risky for a different reason.

Cooler air can hold less moisture. Consequently, high relative humidity in a low-temperature environment will result in condensation. And as most of us know from fourth-grade science, water and electricity don’t play nice. Moisture can make quick and irreversible work of a server’s CPU and motherboard.
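The condensation risk above comes down to dew point: water condenses on any surface at or below the dew point of the surrounding air. A common way to estimate dew point from temperature and relative humidity is the Magnus approximation, sketched below; this formula is a standard meteorological approximation, not something taken from this article, and the coefficient values used are one common choice.

```python
import math

def dew_point_c(air_temp_c: float, rel_humidity_pct: float) -> float:
    """Estimate dew point (Celsius) from air temperature and relative
    humidity using the Magnus approximation."""
    a, b = 17.62, 243.12  # common Magnus coefficients for temps in Celsius
    gamma = (a * air_temp_c) / (b + air_temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def condensation_risk(surface_temp_c: float, air_temp_c: float,
                      rel_humidity_pct: float) -> bool:
    """Condensation forms when a surface sits at or below the dew point."""
    return surface_temp_c <= dew_point_c(air_temp_c, rel_humidity_pct)
```

For example, air at roughly 65 degrees F (about 18 C) with 80 percent relative humidity has a dew point in the neighborhood of 14-15 C, so any equipment surface cooled below that will start collecting moisture.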

Thus, it’s important to think of data center temperature as a balancing act. Allowing temperatures to drop without consideration for other environmental variables, namely humidity and dew point, will introduce undue risk to your equipment. Furthermore, there’s rarely justification for cooling your facility below 65 degrees F. The last thing your power usage effectiveness (PUE) ratio needs is energy spent cooling your facility below recommended temperatures.

To avoid a situation where your servers “catch a cold,” make sure you supplement your temperature monitors with a network of humidity and dew point sensors. With these sensors working in coordination, facility managers will be notified in real time should relative humidity, or temperature, reach a level that introduces the risk of condensation. Conversely, if humidity levels are too low, the air may become dry enough to induce electrostatic charges that can damage sensitive electronic components.

Yes, your mission-critical data center equipment is high-maintenance. That probably won’t ever change. But with comprehensive data center monitoring, you’ll know exactly what your servers need the moment they need it.
