There’s a quiet revolution in powering data centers.
A video service needs two things. Quality content is one. Of course, what constitutes “quality” is so subjective. Just guessing here, but we can’t imagine much overlap in the viewership of “Toddlers & Tiaras” and “Game of Thrones,” for example.
Which brings up the second requirement: volume of content. Different viewers are going to watch different things, and then they're all just going to want more stuff like what they just watched, as long as they haven't seen it before. Ultimately, that leads to ever-growing libraries.
Then there are companies like Cablevision, Amazon and Apple that either are offering, or are planning to offer, cloud-based storage services for their customers. Business customers add data farms on top of that.
All of that is going to require more and bigger facilities for distributing, streaming, routing and switching.
That’s that much more power consumed, that much more radiated heat to dissipate or cool. It gets overwhelming. That’s why the entire electronics industry is rethinking almost everything about communications facilities of all types.
“If we were not to focus on energy efficiency, it will impede the growth of networking,” said Suresh Goyal, head of green research at Bell Labs.
Companies with big data centers are among those leading the way. Google was one of the first high-profile examples of the trend of companies with enormous communications needs building facilities in areas where energy management and cooling are easier.
In 2006, Google opened a data center in The Dalles, Ore., a small town on the Columbia River about 75 miles east of Portland. The company was drawn to The Dalles for a number of reasons, including relatively inexpensive real estate (including room to grow), a source of water for cooling (the river), a stable supply of electricity (courtesy of the Bonneville Power Administration, which operates the nearby Bonneville Dam) and fiber infrastructure.
Google and a group of server and communications equipment vendors founded the Climate Savers Computing Initiative (CSCI), whose goal is to promote energy-efficient IT infrastructure. The other board member companies are Cisco Systems, Juniper Networks, F5 Networks, Intel, Emerson Network Power, Dell, Microsoft and the World Wildlife Fund. Scores of other companies are involved.
PCs waste as much as 50 percent of the power they draw, and servers from 30 to 40 percent, according to the CSCI, which asserts that most of that loss can be eliminated with technology that exists today. That’s a foundational tenet of the group – that success is likelier with clearly achievable goals.
The group starts by encouraging compliance with Energy Star specifications and further recommends additional power-saving features. Including a sleep mode is the easiest, most obvious step – one that's admittedly not always practical with network equipment, though sometimes it is.
The effort is leading to positive results. As of the end of last year, the CSCI estimated it was 60 to 70 percent of the way to reaching its goal of reducing annual CO2 emissions by 54 million metric tons by July.
Facebook is on a parallel path, with some interesting differences. Facebook recently decided to build its first data center in Prineville, a small town in central Oregon about 125 miles southeast of Portland. Again, steady energy supply and cheap land to spread out in were key to the decision about where to build.
Half of the projected 300,000-square-foot facility is complete, with the second half scheduled to be finished at the end of 2011. The data center includes servers built to new energy-efficient designs and software that optimizes server capacity.
The facility includes rainwater reclamation, a solar installation that provides electricity to the office areas and reuse of heat created by the servers to warm office space. In addition, Facebook pushed the design of a low-energy evaporative cooling system, which takes advantage of the low-humidity climate of central Oregon's high desert to eliminate traditional air conditioners, the company said.
The data center uses 38 percent less energy to do the same work as Facebook’s existing facilities, the company said. It expects to get LEED Gold certification for the facility.
As to the more efficient servers Facebook mentioned, it has founded its own research initiative for communications infrastructure efficiency. The Open Compute Project is promoting specs for a simplified server chassis, motherboards (one sporting Intel processors and the other AMD chips) and a 450 W power supply. (These specs and others, including specs for the Facebook data center design, are available on the Open Compute website.)
These measures are not the province of enormous conglomerates, either. BendBroadband, a small operator in Bend, Ore. (about 30 miles south of Prineville), just built a data center it calls the Vault that also pushes several green buttons that might reasonably have been expected to be pioneered by a larger MSO.
The facility employs KyotoCooling systems, which use outside air to cool the building 75 percent of the time; whenever the outside temperature is 75 degrees Fahrenheit or cooler, the system provides free cooling. Solar panels generate 152 kW of power, and hot-air containment minimizes the areas where the data center must apply cooling.
And yet another green initiative is focused on IT and communications infrastructure: GreenTouch, whose goal is to increase energy efficiency in what the group is calling information and communications technology (ICT) networks “by a factor of 1,000 by designing fundamentally new network architectures and creating the enabling technologies on which they are based.”
Admittedly bold, but the organization believes that given information theory (the group quotes pioneering information theorist Claude Shannon at length), IT infrastructure is probably 10,000 times less efficient than it could be.
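For context, the kind of bound such a claim rests on can be sketched from Shannon's channel capacity formula. What follows is a generic back-of-the-envelope illustration, not GreenTouch's own analysis, and the figure for today's equipment is an assumed order of magnitude rather than a measured value:

\[
C = B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
\qquad\Rightarrow\qquad
E_b = \frac{P}{C} \;\ge\; N_0 \ln 2 \quad (B \to \infty),
\]

so with thermal noise \(N_0 = kT\) at room temperature (about 290 K), the theoretical floor is roughly \(2.8 \times 10^{-21}\) joules per bit. Practical network equipment today dissipates on the order of nanojoules per bit or more, a gap far larger than the factor of 10,000 GreenTouch cites, which makes the group's target conservative by comparison.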
GreenTouch’s membership includes Bell Labs, Broadcom, Fujitsu, Draka Communications, NTT, several European service providers and a large contingent of universities from all over the world (there is minimal overlap with the membership of CSCI).
“Research tells us that we can produce and provide information in energy-efficient ways,” Goyal said.
Network architecture itself may have to be re-thought, he said. “We’re looking at watts per bit, and networks consume two to three orders of magnitude more energy at the edge than at the core. Since we’re looking at end-to-end transport, one of the things we’re looking at is how to make the last mile more efficient.”
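A rough watts-per-bit comparison shows why. The numbers below are hypothetical, chosen only to illustrate the scale of the edge-versus-core gap Goyal describes, not Bell Labs figures:

\[
\text{energy per bit} = \frac{\text{power}}{\text{throughput}}:\qquad
\frac{10\ \mathrm{kW}}{1\ \mathrm{Tb/s}} = 10\ \mathrm{nJ/bit}\ \text{(core router)},\qquad
\frac{5\ \mathrm{W}}{10\ \mathrm{Mb/s}} = 500\ \mathrm{nJ/bit}\ \text{(access line)}.
\]

And because a last-mile link draws much the same power whether or not it is carrying traffic, an access line averaging only 1 Mb/s of actual use works out to 5 µJ/bit, roughly 500 times the core figure – the two-to-three-orders-of-magnitude gap Goyal describes.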
Wireless technology needs to be 1,000 times more efficient than it is today, Goyal said. Every antenna, every base station blasts in all directions, he observed. They all could become more efficient if they directed their emissions.
Broadband and routing, meanwhile, could be 100 times more efficient. Even optical networking efficiency could be improved, perhaps by a factor of 10, he said.
Meanwhile, there are some simpler ways to handle energy efficiency and cooling that do not require technological breakthroughs, re-architecting entire networks or building new facilities 100 miles from a major airport. It is possible to increase facility efficiency, reduce power consumption and save money by retrofitting an existing facility.
In 2010, Verizon adopted a system designed to separate, contain and channel hot and cold air in 12 of its data centers so that it doesn't have to cool the entire facilities. The PolarPlex system from Polargy redirects hot exhaust air away from cold aisles, uses solid or plastic panels to contain temperatures, and closes off empty cabinet and shelf positions with filler panels.
Verizon said results of the data center containment include a 7.7 percent improvement in overall energy efficiency across the 12 data centers and 18.8 million kilowatt hours (kWh) in annualized savings.
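For scale, and assuming the 7.7 percent refers to total electricity use (Verizon did not say exactly how the figure was calculated), the savings imply a combined baseline of roughly \(18.8 \times 10^{6}\ \mathrm{kWh} / 0.077 \approx 2.4 \times 10^{8}\ \mathrm{kWh}\) per year across the 12 data centers.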
Alcatel-Lucent last year introduced a modular cooling system that uses refrigerating panels attached directly to cabinets, frames or racks in data centers.
A-L said its Modular Cooling system is anywhere from 11 to 40 times more efficient in transporting heat from the heat source to the building heat sink – typically a chilled water system.