Storage Virtualization for Reducing Power and Cooling
by Hu Yoshida on Jan 15, 2007
Moore’s law is based on the observation that the density of chip components, and consequently compute power, has been doubling every 18 months. The benefit of this has been faster, lower-cost compute power. One negative consequence of this increase in components and compute power is the growing requirement for electrical power and cooling. Many data centers that have focused on increasing compute power, without considering electrical power consumption and cooling, are awakening to the reality that the availability of electrical power has a hard limit.
Storage has followed a similar growth in technology. In this case the technology has resulted in a doubling of capacity every 1 1/2 to 2 years. The result has been a 30% to 35% per-year decline in storage costs. Hitachi Global Storage Technologies recently announced that they will break the 1 TB disk barrier this year. While there are dark sides to this cost/capacity trend, as David Merrill points out in his recent post on storage performance, storage technology increases have the opposite effect on power and cooling: a 3 1/2 inch disk drive with 1 TB will consume about as much power as a 3 1/2 inch drive that contained 9 GB six years ago.
Six or seven years ago, the majority of the floor space in a data center was taken up by storage frames that contained less than 2 TB. Today, storage frames contain 10 to 30 TB or more and consume a fraction of the space, power, and cooling. Even with this efficiency, storage virtualization can be used to gain additional savings in power and cooling.
The most obvious benefit of virtualization is the ability to increase utilization of existing storage. Virtualization can be used to pool storage, recover unused or stranded capacity, and move less active data to larger-capacity disks. Increasing storage utilization from 30% to 60% yields a corresponding saving in electrical power and cooling. A less obvious way to reduce power and cooling with virtualization is the ability to dynamically migrate data across storage frames.
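To see why doubling utilization roughly halves spindle power, here is a minimal back-of-the-envelope sketch. The per-drive capacity and wattage figures below are illustrative assumptions, not measured values from any particular array:

```python
import math

DRIVE_CAPACITY_GB = 300      # assumed per-drive capacity
DRIVE_WATTS = 15             # assumed power draw per spindle
STORED_DATA_GB = 90_000      # live data the pool must hold

def drives_needed(stored_gb, utilization, capacity_gb=DRIVE_CAPACITY_GB):
    """Spindles required when only `utilization` of each drive holds live data."""
    return math.ceil(stored_gb / (capacity_gb * utilization))

for util in (0.30, 0.60):
    n = drives_needed(STORED_DATA_GB, util)
    print(f"{util:.0%} utilization: {n} drives, ~{n * DRIVE_WATTS} W")
```

With these assumed numbers, moving from 30% to 60% utilization drops the pool from 1000 spindles (~15,000 W) to 500 spindles (~7,500 W) for the same stored data.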
Dynamic migration of data enables IT to replace older-technology storage disks with higher-capacity disks without disruption to the application. Consolidating to fewer disks increases power efficiency. Applications are often stuck on lower-capacity disk media because the downtime required for migration is too disruptive to the business process.
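The consolidation saving follows from power scaling with spindle count rather than capacity. A rough sketch, assuming old and new drives draw roughly the same watts per spindle (all figures here are illustrative assumptions):

```python
import math

OLD_DRIVE_GB = 73       # assumed capacity of the older drives
NEW_DRIVE_GB = 1000     # assumed capacity of the replacement drives
WATTS_PER_DRIVE = 15    # assumed, roughly constant per spindle across generations

def consolidation_power(data_gb):
    """Estimated spindle power before and after consolidating to bigger drives."""
    old_watts = math.ceil(data_gb / OLD_DRIVE_GB) * WATTS_PER_DRIVE
    new_watts = math.ceil(data_gb / NEW_DRIVE_GB) * WATTS_PER_DRIVE
    return old_watts, new_watts

old_w, new_w = consolidation_power(10_000)
print(f"before: {old_w} W, after: {new_w} W")
```

Under these assumptions, 10 TB spread across 73 GB drives needs 137 spindles (~2055 W), while the same data on 1 TB drives needs only 10 (~150 W), which is why nondisruptive migration to bigger media pays off in power.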
Since IT managers have been focused on application efficiency, they often lay out their floor space for that purpose without considering the impact on power and cooling. It is not uncommon to see a data center where racks of modular storage are planted one behind the other, all facing one direction so that the control-panel lights can be seen from one spot. Unfortunately, each rack in back is breathing the exhaust heat from the rack in front of it. The best configuration for cooling is to arrange the racks back to back or front to front, creating hot rows and cool rows, instead of trying to keep the entire room at one low temperature.
Even if the facilities people point this out, once the storage racks are “planted” or loaded with application data, it becomes too disruptive to stop the application and reposition the rack. That’s where virtualization can help, by migrating application data to temporary spare racks while the other racks are reconfigured to optimize power and cooling.
If you can’t get any more power into your building, virtualization can also help you migrate your application data to another data center, another power grid, or even another geography where the energy source may be more reliable or lower cost.
If you are concerned about power and cooling, consider using storage virtualization to help address it.