Handling the heat of data centres
Data centres are big business. It is estimated that this essential IT infrastructure accounts for 1.5% of total global power consumption – the equivalent of 50 power stations – and that figure is rising rapidly.
Data from multinational technology developer Intel, however, shows the effects of data centres go far beyond putting a strain on the electricity grid. Presently, these server rooms generate 210 million metric tonnes of carbon dioxide (equivalent to the emissions of 41 million cars), use approximately 300 billion litres of water (enough to fill nearly 250,000 Olympic-sized swimming pools), and cost companies $27 billion per annum.
These troubling numbers are expected to double by 2014.
One solution to this problem being touted by Intel is the high ambient temperature data centre – a server room run at a higher operating temperature to decrease cooling costs and increase power efficiency.
In the past, data centres were cooled to between 18ºC and 21ºC for a variety of reasons: sometimes it was over-engineered hotspot avoidance, other times it was the result of legacy system issues, SLAs and warranties.
These days, however, new IT technologies can handle heat better than ever before. It’s just a matter of achieving a better power usage effectiveness (PUE), which is found by dividing the total power drawn by the data centre facility by the power delivered to the IT equipment.
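As a rough illustration of the metric (the figures below are hypothetical, not drawn from Intel’s or KT’s data), the calculation is a straightforward ratio:

    # Hypothetical sketch of the PUE calculation; an ideal facility scores 1.0.
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: total facility power per unit of IT power."""
        return total_facility_kw / it_equipment_kw

    # Example: a facility drawing 2,500 kW in total to run 2,000 kW of IT load.
    print(pue(2500, 2000))  # 1.25 - every watt of computing carries 0.25 W of cooling and overheads

The closer the result is to 1.0, the less power is being spent on anything other than the servers themselves.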
“Servers are much more resilient than people give them credit for. Further, the industry is starting to build more and more resilient servers that can operate at higher temperatures,” Intel Data Centre and Connected Systems Group APAC and PRC marketing director Nick Knupffer says.
“Most of our existing product already operates at the higher temperatures, so why not use it? If you’re currently running a data centre at 18ºC, there’s no reason you can’t run it at 27ºC or even higher.”
That said, Nick explains that anyone looking to run a system above 35ºC needs to examine the hardware they are buying much more closely.
“If you increase the operating temperature, you’re going to wear down your components more quickly. But the cost savings of the energy use far outweigh any increase in failure rates by a huge factor.”
And the benefits extend beyond lower operational costs and environmental relief. In a 15MW data centre, for example, construction costs for a facility with a PUE of 1.25 are 29% lower than for one with a PUE of 3. Operating at 35ºC also allows roughly 25,600 more servers in the room while still reducing cooling costs by 85%.
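To see why the PUE gap matters, consider a rough sketch (assuming, for illustration only, that the 15MW figure refers to the IT load, which the article does not specify):

    # Rough sketch: total facility power implied by a 15 MW IT load at different PUE values.
    # Assumes 15 MW is the IT load; the article does not state this explicitly.
    IT_LOAD_MW = 15.0

    for pue_value in (3.0, 1.25):
        total_mw = IT_LOAD_MW * pue_value    # total facility power = IT power x PUE
        overhead_mw = total_mw - IT_LOAD_MW  # cooling, UPS losses, lighting, etc.
        print(f"PUE {pue_value}: {total_mw:.2f} MW total, {overhead_mw:.2f} MW overhead")

    # PUE 3.0 implies 45.00 MW in total (30 MW of overhead); PUE 1.25 implies 18.75 MW (3.75 MW of overhead).

Every megawatt of overhead avoided is cooling and power plant that never has to be built, which is one reason a lower PUE also cuts construction costs.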
To put this theory of high ambient temperature data centres to the test, Intel’s power management technologies – Node Manager and Data Centre Manager – were jointly tested over three months by Korean communications provider KT at its existing Mok-dong data centre in Seoul, South Korea. The objective was to maximise the number of servers within the space and lower the power and cooling constraints of the data centre. The results showed that a PUE of 1.39, achievable with a 22ºC chilled water loop, would deliver approximately 27% energy savings. Node Manager and Data Centre Manager made it possible to save 15% power without performance degradation. In the event of a power outage, uninterruptible power supply (UPS) uptime could be extended by up to 15%. And potential additional annual energy cost savings of more than $A1,900 per rack were available by putting an under-utilised rack into a lower power state.
Another concept making these savings possible is ‘free cooling’, which is akin to opening a window instead of using air conditioning. Free cooling bypasses the traditional air conditioner by pulling in cold air from outside, cycling it through the server racks and flushing it back out.
Intel APAC cloud solution architect Leif Nielsen explains there is still room for cablers and data centre designers to make money on a high ambient temperature data centre by moving from a chilled water plant system to an economiser (fan) system.
“You still need to design a data centre to capitalise on free cooling, so there’s still money to be made,” he says.
“You just need a shift in focus to instrumentation and sensors, to ensure the inlet and outlet temperature of the server and the airflow over the server is correct, and to make sure the whole building is all working together.”
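As an illustration only (the setpoints and logic below are hypothetical, not drawn from Intel or any particular building management product), the decision an economiser controller makes from those sensor readings is essentially this:

    # Hypothetical sketch of an air-side economiser decision; thresholds are illustrative only.
    SUPPLY_TARGET_C = 27.0     # desired server inlet temperature (high ambient setpoint)
    FREE_COOLING_MAX_C = 24.0  # outside air must sit comfortably below the target to be useful

    def select_cooling_mode(outside_air_c: float, server_inlet_c: float) -> str:
        """Choose between outside air (free cooling) and the chilled water plant."""
        if outside_air_c <= FREE_COOLING_MAX_C and server_inlet_c <= SUPPLY_TARGET_C:
            return "economiser"  # open the dampers, run the fans, bypass the chiller
        return "chiller"         # fall back to mechanical cooling

    print(select_cooling_mode(outside_air_c=18.0, server_inlet_c=25.0))  # economiser
    print(select_cooling_mode(outside_air_c=30.0, server_inlet_c=28.0))  # chiller

In practice the same inlet, outlet and airflow readings would also drive fan speeds and damper positions, which is where the instrumentation Nielsen describes earns its keep.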
An air economiser model could potentially provide nearly all of a data centre’s cooling requirements, substantially reducing power consumption. For a 10MW server room, a switch to an air economiser could result in savings of $A2.74 million, without a dramatic rise in server failure rates. In fact, in a trial of an air economiser data centre conducted by Intel, despite the dust and the variation in humidity and temperature, there was only a minimal difference between the 4.46% failure rate in the economiser compartment and the 3.83% failure rate in another data centre over the same period.
So, what would happen if the world increased the ambient temperature in data centres by 5ºC? We would be better off by $2.16 billion in immediate annual power savings.
There would be an 8% decrease in global data centre power consumption – a saving of 24.3 billion kWh, more than a month of total Australian energy consumption – along with 49 billion litres of water and 1.7 million metric tonnes of carbon dioxide emissions avoided.