2012: A Focus on Increasing Storage Utilization
by Hu Yoshida on Nov 14, 2011
This is the first of a series of posts on what I expect to see in 2012 and how we can respond to these trends.
While data growth continues to explode, budgets for additional storage capacity will be constrained by the increasing uncertainties of the world's economies. Added to this are the expected supply shortages due to the floods in Thailand. To overcome these difficulties, companies will be looking for ways to increase the utilization of their existing storage assets in the coming year.
The good news is that there is a lot of unused and over-allocated capacity to be reclaimed: utilization of storage assets can be raised from historic levels of 20-30% to 50-60% with newer technologies such as thin provisioning, dynamic tiering, deduplication, and active archives.
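To put that utilization gain in concrete terms, here is a back-of-the-envelope sketch; the 100TB figure is illustrative, not from the post, and the rates are the midpoints of the ranges above:

```python
def usable_at(raw_tb: float, utilization: float) -> float:
    """Effective data capacity actually used at a given utilization rate."""
    return raw_tb * utilization

raw = 100.0                      # TB of installed capacity (illustrative)
before = usable_at(raw, 0.25)    # historic ~20-30% utilization
after = usable_at(raw, 0.55)     # ~50-60% with thin provisioning, tiering, etc.
reclaimed = after - before       # capacity freed up on hardware you already own

print(f"Usable before: {before:.0f} TB, after: {after:.0f} TB, "
      f"new purchases deferred by: {reclaimed:.0f} TB")
```

In other words, moving a 100TB estate from 25% to 55% utilization yields roughly 30TB of headroom without buying a single new drive.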
If you have already invested in these technologies and are using them in production, you are ahead of the game.
If you did not invest in these technologies when you bought your storage two years ago, you don't need to wait three more years while you finish depreciating those assets. You can attach them behind a VSP, which, through storage virtualization, enables all of these new capabilities on your existing assets today. VSP will also let you extend the life of storage that has already been written off by using it as tier three in a dynamic pool of tiered storage. Since this is tier three storage, you can maintain it on a time-and-materials basis rather than through an expensive maintenance contract.
Deduplication is another great solution for inactive data, such as backups. It is less suited to active data, which has to be "re-duped" (rehydrated) every time it is read. Thin provisioning is a better solution for active data.
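To illustrate why dedupe fits backup-style data, here is a toy content-addressed chunk store; the class name, fixed chunk size, and tiny sample "backups" are all hypothetical, and real products chunk and index at far larger scale:

```python
import hashlib

class DedupeStore:
    """Toy content-addressed store: identical chunks are stored only once."""

    def __init__(self, chunk_size: int = 8):
        self.chunk_size = chunk_size
        self.chunks = {}  # sha256 digest -> chunk bytes

    def write(self, data: bytes) -> list:
        """Store data, returning a 'recipe' of chunk digests."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks cost nothing
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """'Re-dupe': reassemble the original data from its chunks."""
        return b"".join(self.chunks[d] for d in recipe)

store = DedupeStore()
backup1 = b"ABCDEFGH" * 4                 # e.g. Monday's full backup
backup2 = b"ABCDEFGH" * 3 + b"12345678"   # Tuesday's, mostly unchanged
r1 = store.write(backup1)
r2 = store.write(backup2)

logical = len(backup1) + len(backup2)
physical = sum(len(c) for c in store.chunks.values())
print(f"logical {logical} B, physical {physical} B")  # large saving on repeats
assert store.read(r1) == backup1  # but every read pays the reassembly cost
```

The `read` path is the point of the caveat above: backups are written often and read rarely, so the reassembly cost is acceptable there, while active data would pay it on every access.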
Since the greatest explosion of data is in unstructured data, which is not updated, an active archive is a good way to eliminate snapshots, copies, and backups for this type of data. An active archive only needs to be replicated once to ensure recoverability. Because the data is stored in Hitachi Content Platform (HCP) with metadata that describes it and policies that govern it, other applications can work directly with the data in HCP without copying or recreating it, which saves capacity.
There are many ways to ingest data into HCP:
- Hitachi Data Protection Suite provides backup and dedupe, as well as ingestion of data into HCP over HTTP.
- Hitachi Data Ingestor is a clustered NAS/CIFS appliance that replicates files into HCP from a remote site over a REST interface. Active files are kept locally, and inactive files are stubbed out to HCP based on local capacity thresholds.
- HNAS can tier files and stub them out to HCP.
- A growing list of applications, such as SharePoint and SAP NetWeaver, can ingest directly into HCP 5. Other third-party software, such as Enterprise Vault, can ingest application data like email into HCP using standard protocols such as HTTP and WebDAV.
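As a rough illustration of what HTTP-based ingest looks like, the sketch below builds a REST-style PUT request in Python. The endpoint URL, path layout, and basic-auth credentials are assumptions for illustration only; the real HCP namespace URL format and authentication scheme come from Hitachi's documentation, and the request is deliberately not sent:

```python
import base64
import urllib.request

# Hypothetical endpoint and credentials -- consult your HCP documentation
# for the actual namespace/tenant URL layout and auth scheme.
endpoint = "https://namespace.tenant.hcp.example.com/rest/archive/report.pdf"

req = urllib.request.Request(endpoint, data=b"...file contents...", method="PUT")
req.add_header("Content-Type", "application/pdf")
# Plain HTTP basic auth, shown for illustration only.
token = base64.b64encode(b"user:password").decode()
req.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(req) would perform the actual ingest; it is left
# out so the sketch runs without a live HCP system.
print(req.get_method(), req.full_url)
```

The point of the list above is that all of these tools ultimately speak standard protocols like this, so ingestion does not require proprietary client software.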
Technologies are available today that help companies increase utilization to meet their 2012 data requirements without buying a lot of new capacity or ripping and replacing their legacy storage. Many customers have seen a 20% to 40% reduction in storage costs by increasing the utilization of what they already have. Contact your Hitachi representative or Hitachi reseller to see how it can be done today.
For Hu’s other 2012 trends, visit this bit.ly bundle: http://bitly.com/vXGP2T