NextDC chief operating officer (COO), Simon Cooper, outlines the latest methods used in data centre cooling to meet the twin goals of increased reliability and improved energy efficiency.

While a multitude of factors impact data centre design, the thermal guidelines published by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) are an accepted industry standard for data centres.

The underlying relationship between temperature and humidity is a key consideration for data centre operators in terms of infrastructure risk management.

ASHRAE’s latest recommendations now support increasing supply air temperature up to 27 degrees Celsius and managing relative humidity across a much broader band, allowing energy efficiency improvements without reducing infrastructure reliability.

Optimising temperature and humidity control within the applicable ASHRAE ranges is also intended to extend the life of hardware and deliver cost savings.
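The envelope check described above can be sketched in a few lines. This is an illustrative example only: the 18–27 degree band reflects the recommended upper supply temperature cited above, but the relative humidity bounds are placeholder values chosen for the sketch, not ASHRAE figures.

```python
# Illustrative sketch: flag supply-air readings that fall outside a target
# envelope. The 27 degC upper temperature matches the guidance cited above;
# the humidity bounds below are placeholder assumptions, not ASHRAE values.

TEMP_RANGE_C = (18.0, 27.0)    # supply-air temperature band (degC)
RH_RANGE_PCT = (20.0, 80.0)    # illustrative relative-humidity band (%)

def in_envelope(temp_c: float, rh_pct: float) -> bool:
    """Return True if a (temperature, humidity) reading sits inside the envelope."""
    t_ok = TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]
    rh_ok = RH_RANGE_PCT[0] <= rh_pct <= RH_RANGE_PCT[1]
    return t_ok and rh_ok

# Example readings: (temperature degC, relative humidity %)
readings = [(22.5, 45.0), (27.8, 50.0), (24.0, 85.0)]
alerts = [r for r in readings if not in_envelope(*r)]  # readings needing attention
```

Running closer to the top of the band saves energy, but a check like this is what keeps the operator from drifting past it.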

Air-side economy concepts, such as direct air-side cooling, are among the cost-effective solutions for supporting large, constant cooling loads. That said, air-side economisation is commonly associated with concerns about humidity control and particle contamination (smoke, pollution). This means that operators need to apply careful control mechanisms and include suitable design constraints to ensure that the savings are actually achieved.

On top of that, adequate access to outdoor air for economisation can also be a significant architectural design challenge, especially when the site also needs to support instantaneous emergency generator operation.

Complementary airflow-management techniques, such as hot and cold aisle containment, are a common and highly effective way to deliver efficient cooling, as the volume of supply air can be materially reduced while still ensuring the required air flow at the correct temperature.

This requires that the spaces between racks be segregated into ‘cold’ and ‘hot’ aisles. Cold air is then blown into the cold side, with blanking plates used to seal racks top to bottom, preventing mixing and wasted cold air flows.

The hot air from the servers is discharged into the hot aisles and returned to the cooling equipment along a dedicated path, commonly through grilles in the ceiling, or out to the external environment. Corralling the hot air in this way raises the return air temperature, which is itself a driver of efficiency in a closed (air) loop system.

A data centre operating in an open loop cycle (hot air expelled, cool air captured from the environment) eliminates some or all of the refrigeration cycle from the heat exchange process, reducing the amount of energy expended on mechanical cooling.
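The value of an open-loop (free-cooling) cycle can be illustrated by counting the hours in which outdoor air alone can meet the supply setpoint. The setpoint, the approach margin, and the sample temperatures below are all assumptions made for the sketch, not operating data.

```python
# Hypothetical sketch: estimate how many hours outdoor air alone can serve
# the hall, which is where open-loop/economiser savings come from. The
# setpoint and approach margin are illustrative assumptions.

SUPPLY_SETPOINT_C = 27.0   # raised supply setpoint, per the ASHRAE guidance
APPROACH_MARGIN_C = 2.0    # assumed temperature rise across filters and fans

def free_cooling_hours(outdoor_temps_c):
    """Count hours where outdoor air can meet the setpoint without chillers."""
    limit = SUPPLY_SETPOINT_C - APPROACH_MARGIN_C
    return sum(1 for t in outdoor_temps_c if t <= limit)

# Illustrative two-hourly outdoor temperatures for one day (degC)
sample_day = [14, 15, 16, 18, 21, 24, 26, 28, 27, 23, 19, 16]
hours = free_cooling_hours(sample_day)
```

Note how raising the supply setpoint directly widens the window in which mechanical cooling can be switched off; this is the link between the ASHRAE guidance and economiser savings.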

The simple method of hot and cold air separation described above creates a consistent temperature at the supply intake by preventing hot air from passing back and mixing with the cool air.

Another focus for efficient cooling is indirect air-side economisation, where the coolant (typically called “chilled water”) has its temperature reduced by exposure to the outside environment via a heat exchanger before any mechanical cooling is required.

This type of energy saving solution eliminates most of the risks of open loop systems (described above) and its efficiency is further increased by raising the data centre’s target air temperature, as supported by ASHRAE.

A common challenge with aisle containment is maintaining temperature consistency within the cold rows. Today’s colocation operators make use of multiple sensors deployed throughout the data hall to ensure that airflows are performing as expected and hot-spots are quickly identified and mitigated.
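A minimal sketch of the monitoring approach described above might look as follows. The sensor identifiers and the 27-degree alarm threshold are assumptions for the example; real deployments would use the operator's own telemetry platform and thresholds.

```python
# Illustrative hot-spot detection across cold-aisle sensors. Sensor IDs and
# the alarm threshold are hypothetical; the threshold here simply reuses the
# top of the recommended supply band discussed above.

COLD_AISLE_LIMIT_C = 27.0

def find_hot_spots(sensor_readings):
    """Return the IDs of sensors whose cold-aisle reading exceeds the limit."""
    return sorted(sid for sid, temp in sensor_readings.items()
                  if temp > COLD_AISLE_LIMIT_C)

readings = {
    "rack-07-top": 26.1,
    "rack-07-mid": 28.4,   # hot spot: containment leak or failed blanking plate?
    "rack-12-top": 25.0,
    "rack-12-bot": 27.6,   # hot spot
}
hot = find_hot_spots(readings)
```

Flagging individual sensors rather than a hall-wide average is the point: a single leaky containment panel can create a local hot spot that the average would hide.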

The combination of more detailed monitoring, air-flow containment, economisation and, crucially, higher allowable temperatures, is what will achieve the twin goals of increasing service reliability and ever-improving energy efficiency.

Ultimately, data centre efficiency is driven by a combination of factors, but operating within ASHRAE’s recommended envelope minimises the continuous need for mechanical cooling, whether direct-exchange or traditional chiller-based.

In summary, the direct and indirect use of naturally cool ambient air outside the data centre is perhaps the most common of these methodologies: the air is used either to pre-chill water for cooling, or filtered and humidity-adjusted before being injected directly at the desired temperature.

All of these methodologies benefit from the provision of cooling airflows at the top end of the ASHRAE range, thus making the data centre business more efficient and sustainable overall.
