Data center operators will go to almost any lengths to avoid an overheated server. Case in point: a financial institution in London suffered a power shortage during an extremely hot day one recent summer, and was left with no ability to cool its servers and storage.
"They went to the extent of having the fire department come and hose down the outside of the building [to cool it down]," said IBM's Rich Lechner, who is leading a data center efficiency project called Big Green.
That's certainly an unusual step, and probably something to be avoided, but data center managers throughout the world are missing opportunities to use less power, Lechner and other speakers said Wednesday during a panel discussion on data center cooling hosted in Waltham, Mass., by the Mass Technology Leadership Council (MTLC).
Data center energy consumption as a percentage of total United States electricity use has doubled since 2000, and data centers and servers will double their energy consumption again -- to 100 billion kilowatt-hours by 2012 at an annual cost of at least US$7.4 billion, according to Environmental Protection Agency statistics cited by MTLC.
With environmental damage an increasing concern worldwide, corporations are being motivated to reduce energy usage both by commitments to social responsibility and by sky-high electricity bills.
"It is our No. 1 expense. I pay more for electricity than I do for rent," said Wayne Sawchuk, CEO and co-founder of ColoSpace, which provides co-location services in six data centers composed of more than 4,000 servers across 35,000 square feet in Massachusetts and New Hampshire. "I have a tremendous desire to reduce our electric bill every month."
IBM is redesigning its own data centers as it attempts to double its computing capacity within three years without increasing energy consumption. Big Blue is also focusing on customers, and next week plans to announce x86-based systems that "will essentially require zero air conditioning," Lechner said. IBM is using liquid cooling and putting thermal sensors inside servers, allowing the fans inside each server to spin at different speeds depending on need, he said.
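The sensor-driven fan control Lechner describes can be sketched in a few lines. This is an illustrative model only, not IBM's actual firmware: the thresholds and the linear ramp are assumptions, and a real controller would sample an on-board thermal sensor rather than take a temperature as an argument.

```python
def fan_duty_cycle(temp_c, idle_temp=35.0, max_temp=75.0,
                   min_duty=0.2, max_duty=1.0):
    """Map a thermal-sensor reading (Celsius) to a fan duty cycle.

    Hypothetical thresholds: below idle_temp the fans loaf along at
    min_duty to save power; at or above max_temp they run flat out;
    in between, speed ramps linearly with temperature.
    """
    if temp_c <= idle_temp:
        return min_duty          # cool enough: slow fans, less power drawn
    if temp_c >= max_temp:
        return max_duty          # at the thermal limit: full speed
    # Linear ramp between the two thresholds
    span = (temp_c - idle_temp) / (max_temp - idle_temp)
    return min_duty + span * (max_duty - min_duty)
```

The point of the design is that fan power is spent only where the sensors say heat is actually building up, instead of running every fan at full speed all the time.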
Panelists discussed many data center efficiency dos and don'ts during the two-hour session. As most people know, virtualization of storage and servers can dramatically increase utilization rates, maximizing the capacities of each piece of hardware.
"One of the worst things we see in IT is [people saying] 'we have a new application, let's go buy a new server,'" said Thomas Humphrey, senior business development manager for APC-MGE, a power and cooling services vendor.
A smart user of virtualization technology constantly monitors server utilization rates, sometimes moving workloads off a little-utilized server so it can be shut down for the rest of the day, Lechner said.
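The monitoring loop Lechner describes boils down to a simple decision rule. The sketch below is hypothetical (the 15% threshold and server names are invented for illustration; a real deployment would pull utilization data from a hypervisor or monitoring API), but it captures the idea: flag servers whose average utilization stays low so their workloads can be migrated and the hardware powered down.

```python
LOW_UTIL_THRESHOLD = 0.15  # assumed cutoff: below 15% average CPU utilization


def consolidation_candidates(samples_by_server, threshold=LOW_UTIL_THRESHOLD):
    """Return servers whose average utilization over the sampling window
    falls below the threshold -- candidates for workload migration and
    shutdown for the rest of the day."""
    candidates = []
    for server, samples in samples_by_server.items():
        avg = sum(samples) / len(samples)
        if avg < threshold:
            candidates.append(server)
    return candidates


# Example: hourly CPU-utilization samples for three hypothetical servers
usage = {
    "web-01": [0.62, 0.55, 0.71],
    "batch-02": [0.05, 0.08, 0.04],  # nearly idle: migrate and shut down
    "db-03": [0.44, 0.39, 0.51],
}
```

Here only `batch-02` would be flagged; its workloads could be moved to the busier machines, which virtualization makes practical without downtime.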
The biggest challenge in cooling data centers today is the proliferation of blade servers, which pack far more kilowatts into each rack than traditional servers, Humphrey said.
Tactics to improve efficiency include hot- and cold-aisle design, and putting the heaviest servers at the top of the cabinet (since heat rises and heavy servers are usually the hottest). Even if you're not building a new data center or undergoing a major redesign, there are numerous simple ways to improve efficiency. Make sure air conditioners are running efficiently, don't keep the temperature at 55 degrees when 68 is cool enough, and look beneath the raised floor -- you might have a "rat's nest of cabling that's impeding air flow," Lechner said.