
New server technology reduces the power required to save energy

13/04/2015

A number of efforts have been made to reduce the impact of data centers, both by powering them with renewable energy and by improving their efficiency.

Knowing more about a situation can allow you to address it more efficiently. That’s one of the promises of the Internet of Things. From the precision application of water and fertilizer in agriculture to a smartphone app that lets you adjust your thermostat at home if your plans change, knowledge becomes power saved, so long as we have the means to act on that knowledge.

But all that energy-saving knowledge can take a lot of energy to store, distribute and access. In fact, data centers, which you can think of as the men behind the curtain of what we now know as “the cloud,” suck up a great deal of energy. In 2013, they consumed a staggering 91 billion kWh in the US alone. That’s enough to power every household in New York City. That amount is expected to grow by half again by 2020.

A number of efforts have been made to reduce the impact of these data centers, both by powering them with renewable energy and by improving their efficiency. Earlier this year Apple announced plans to spend $2 billion on a solar-powered “command center” in Arizona. Nearby solar plants with 70 MW of generation capacity are expected to meet 100% of the energy required to run the center.

Opportunities to improve the efficiency of data centers have also received a lot of attention and innovation. The servers tend to run hot, which is why more than half of the power they use is devoted to cooling. System architecture studies reveal more opportunities to save energy everywhere from the silicon, to the OS, to the applications, through the infrastructure and all the way to the building itself. These studies call for the consolidation of metrics, a streamlined decision-making process that aligns incentives with investors, and disclosure of performance data.

The metric that is currently most popular is called Power Usage Effectiveness (PUE). It is the ratio of the total power a facility consumes, including cooling and power distribution, to the IT power needed strictly for data processing. The best case would be a PUE of 1.0. Google, for example, has a PUE of 1.12 across the enterprise, which essentially means a 12% power overhead.
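The arithmetic behind that 12% figure is simple. Here is a minimal sketch in Python with made-up numbers; the 1,000 kW IT load and 120 kW overhead are purely illustrative, not Google’s actual measurements.

```python
# Illustrative PUE arithmetic with hypothetical numbers (not real facility data).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

it_load = 1000.0            # kW drawn by servers, storage, and network gear
overhead = 120.0            # kW for cooling, power distribution, lighting, etc.
total = it_load + overhead  # kW drawn by the whole facility

print(f"PUE      = {pue(total, it_load):.2f}")       # 1.12
print(f"Overhead = {pue(total, it_load) - 1:.0%}")   # 12%
```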

Technologies continue to improve as well. Just recently, a device called the Ciena 8700 Packetwave Platform, a type of switching device for data centers, won an Environmental Leader product award for its massive energy savings. For those who are interested, it is a multi-terabit programmable Ethernet-over-DWDM packet switch. It enables efficient interconnection between data centers. It uses half the energy of competing solutions, while providing twice the bandwidth at half the size. This will allow data providers (which today means almost every business) to save millions of dollars while reducing their carbon footprint.

Efficient hardware like the Ciena 8700 reduces the amount of waste heat that must be eliminated (which shows up in the PUE) as well as the amount of computational power required (which doesn’t).
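A quick worked example makes the distinction clearer. The numbers below are hypothetical, chosen only to show which kind of saving PUE does and does not capture.

```python
# Hypothetical numbers showing which savings PUE captures.

def pue(total_kw: float, it_kw: float) -> float:
    return total_kw / it_kw

# Baseline: 1000 kW of IT load, 200 kW of cooling/distribution overhead.
print(pue(1200.0, 1000.0))  # 1.20

# Case 1: hardware emits less waste heat, so cooling overhead drops to 100 kW.
# Overhead shrinks relative to the IT load, and PUE improves.
print(pue(1100.0, 1000.0))  # 1.10

# Case 2: hardware does the same work on only 500 kW of IT power; if the
# overhead scales down proportionally, PUE stays at 1.20 even though the
# facility now draws far less energy overall.
print(pue(600.0, 500.0))    # 1.20
```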

As the cloud, with all of its opportunities and distractions, continues to grow, providers will continue to develop new ways to reduce the impact of individual servers, even as the number of servers continues to grow. While some fret over the amount of energy being devoted to this undertaking, I believe that the net energy benefit provided by “letting one’s fingers do the walking” will be considerable.

Anh Tuan