Hu Yoshida's Blog - Vice President | Chief Technology Officer

HDP Is More Than Thin Provisioning

by Hu Yoshida on Jul 6, 2009

This year I have been seeing more and more of our USP V customers turning on their Hitachi Dynamic Provisioning license and enjoying the benefits of thin provisioning. One B2B web company was able to halve their existing storage footprint down to four frames while increasing their usable capacity from 133 TB to 236 TB. While the amount of capacity that most users can recover depends on the OS and the file system, in the majority of cases we are hearing about 30% to 40% savings or reclamation of capacity.

But even where the OS or file system is not thin-provisioning friendly, customers are using HDP for its many other benefits.

Recently I visited a large financial customer in Switzerland, where the storage administrators have been critical of the usability of storage management software in the past. The biggest advantage they saw in HDP was the ease of provisioning storage. Instead of taking hours to carve LUNs out of RAID groups, they could provision storage in a matter of minutes out of a virtual pool of preformatted pages. They made the surprising statement that HDP made provisioning so easy that it nearly eliminated their jobs!
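
To picture why this is a minutes-not-hours operation, here is a minimal sketch of pool-based provisioning. It is illustrative only, not Hitachi's implementation; the class and volume names are assumptions. The point is that creating a volume is just a metadata operation, because the pool's pages are already formatted and physical pages are bound only on first write.

```python
# Minimal sketch of pool-based provisioning (illustrative only, not
# Hitachi's implementation; names and sizes are assumptions).
PAGE_MB = 42  # HDP allocates capacity in 42 MB pages

class HDPPool:
    def __init__(self, capacity_mb):
        self.free_pages = capacity_mb // PAGE_MB  # pages preformatted up front
        self.volumes = {}

    def create_volume(self, name, size_mb):
        # Metadata-only operation: no formatting, no page binding yet,
        # which is why provisioning takes minutes instead of hours.
        self.volumes[name] = {"size_mb": size_mb, "pages": {}}

    def write(self, name, offset_mb):
        # A physical page is bound only on first write to that region.
        vol = self.volumes[name]
        page = offset_mb // PAGE_MB
        if page not in vol["pages"]:
            assert self.free_pages > 0, "pool exhausted"
            self.free_pages -= 1
            vol["pages"][page] = True

pool = HDPPool(capacity_mb=10_000_000)             # ~10 TB pool
pool.create_volume("oradata01", size_mb=2_000_000) # effectively instant
pool.write("oradata01", offset_mb=0)               # first page bound here
```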

Claus Mikkelsen visited a financial customer in New York where they were excited about the performance improvements that come from the wide striping of pages across the width of the HDP pool. They saw a 10–15 minute batch job run in less than a minute. Claus points out that a major task of provisioning, especially for databases, is to provision for performance. Since HDP stripes across all the disks in the HDP pool, and now with v05 automatically rebalances the stripe when new capacity is added to the pool, performance tuning by manually balancing spindle usage is a thing of the past.
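
A rough sketch of the idea follows. Round-robin placement here is an assumption standing in for HDP's actual page-placement algorithm, which is not public; what matters is that every volume's pages land on every spindle, and that adding capacity triggers a re-spread.

```python
# Illustrative sketch of wide striping and rebalancing; round-robin
# placement is an assumption, not HDP's documented algorithm.
from collections import Counter

class StripedPool:
    def __init__(self, n_disks):
        self.disks = list(range(n_disks))
        self.page_map = []  # page index -> disk holding that page

    def allocate_page(self):
        # Spread each newly allocated page across all spindles in turn.
        self.page_map.append(self.disks[len(self.page_map) % len(self.disks)])

    def add_disks(self, n_new):
        self.disks.extend(range(len(self.disks), len(self.disks) + n_new))
        self.rebalance()

    def rebalance(self):
        # Re-spread existing pages evenly across the enlarged pool,
        # as v05 does automatically when capacity is added.
        self.page_map = [i % len(self.disks) for i in range(len(self.page_map))]

pool = StripedPool(n_disks=32)
for _ in range(1000):
    pool.allocate_page()
pool.add_disks(16)
print(Counter(pool.page_map))  # pages now spread across all 48 spindles
```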

Another customer I talked to was a telco that provided storage as a service to their internal customers. Since their customers paid for the storage, IT did not overcommit the storage pool. If a user paid for 10 TB, they set aside 10 TB in the HDP pool. However, when they provisioned storage for the user, they used HDP to provision only the pages that were actually used. This not only gave them the performance benefits of wide striping, but also gave them the ability to copy, move, replicate, and tier only the pages that were actually used, and not the whole over-allocated volumes. This resulted in lower operating costs. If a customer happened to underestimate the amount of storage they needed, IT could find enough virtual capacity to meet the new requirements in a matter of minutes instead of the days it would normally take. They could also charge for that service, which would be outside the normal SLA.
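
The operational saving can be made concrete with some back-of-envelope arithmetic (the numbers here are hypothetical; the page-level copy is the part that matters):

```python
# Sketch: copying/replicating only the pages a volume actually uses,
# versus the whole over-allocated volume. Numbers are hypothetical.
PAGE_MB = 42

provisioned_tb = 10      # what the internal customer paid for and reserved
used_pages = 30_000      # 42 MB pages actually written so far

used_tb = used_pages * PAGE_MB / 1_048_576  # MB -> TB
print(f"Full-volume copy moves {provisioned_tb} TB")
print(f"Page-level copy moves  {used_tb:.1f} TB")  # ~1.2 TB here
```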

So while many customers think of HDP as thin provisioning, which only applies to certain OSes and file systems, many users are experiencing the dynamic benefits of HDP, which are independent of OS and file system.


Comments (5)

Michael Hay on 06 Jul 2009 at 11:36 pm

Great post, Hu, on the many goodnesses of HDP. You bring out a lot of good points about tangential benefits of HDP such as performance, provisioning usability, and generally using only what you need. All of these save our customers real money by buying less disk, spending less time, or getting more performance. Thanks mate!

Ripunjaya on 16 Sep 2011 at 3:12 am

Hi Hu,
I have a small doubt about thin provisioning.
With thin provisioning, data would be written randomly, since pages are requested from the pool as and when required. In the case of linear (sequential) writes, would this impact performance?

I have a couple of scenarios where VMware asked for thick disks only.
Thanks
Ripun
CSC

Hu Yoshida on 21 Sep 2011 at 8:03 am

Ripun,

When a VMware admin provisions a new VMDK from within vCenter, they have three formats to choose from: thin (blocks are allocated only as the VM uses them), zeroedthick (blocks are allocated up front but zeroed on first write), or eagerzeroedthick (all blocks are zeroed when the disk is created). Both the thin and zeroedthick formats keep the VMDK thin from the storage side, since unwritten blocks are never sent to the array; eagerzeroedthick, however, touches all the space it is assigned. So what we recommend, to take advantage of HDP and leverage hardware thin provisioning, is the eagerzeroedthick format with VAAI enabled, so that the zeroing is offloaded and HDP allocates pages only for the blocks actually used. ESX then sees a VMDK with a full allocation of blocks, while it is actually thin in HDP.
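
One way to picture why eagerzeroedthick plus VAAI stays thin at the array is the toy model below. It is illustrative only, not HDP firmware behavior: the offloaded zeroing arrives as recognizable all-zero writes, so the pool never has to bind a physical page for them.

```python
# Toy model of array-side zero detection (illustrative; not HDP firmware).
class ZeroDetectingPool:
    def __init__(self):
        self.bound_pages = set()

    def write(self, page_idx, data):
        if data.count(0) == len(data):
            return  # all-zero write (e.g. offloaded zeroing): nothing to store
        self.bound_pages.add(page_idx)  # real data binds a pool page

pool = ZeroDetectingPool()
# eagerzeroedthick: ESX zeroes every block of the VMDK up front...
for page in range(100):
    pool.write(page, bytes(42))  # 42 zero bytes stand in for a 42 MB page
# ...but the guest has only written real data to a few regions.
pool.write(3, b"guest data")
pool.write(7, b"more guest data")
print(len(pool.bound_pages), "of 100 pages consume pool capacity")  # 2
```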

Our solutions team has written a white paper that explains this in detail for the AMS2000 family, http://bit.ly/hwaDIp, which also applies to the VSP.

Ripun on 24 Sep 2011 at 11:32 am

Hi Hu,

Thanks for your answer.
However, I am asking about the generic case of any OS (VMware was just an example).

On plain Windows and Unix, if I use a thin disk from a virtual pool, my linear writes would be laid out randomly, since pages are requested from the virtual pool as and when writes occur. As the virtual pool is shared between multiple file systems, the writes would be random, which may affect performance at read time.

Thanks
Ripun
CSC

Hu Yoshida on 10 Oct 2011 at 11:56 am

Ripun,

The writes will be processed in cache and therefore see no impact from any greater or lesser randomization due to pool technologies. Sequential writes can happen under many scenarios but would not necessarily imply sequential reads. Reads are improved across a pool due to the higher level of sustainable activity achieved by having more disk spindles deployed, and by reducing the constraints caused by hot spots. In the case of large sequential reads, HDP would be able to service them in a sequential manner within the 42 MB page granularity, which should not disrupt service time.
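
To make the last point concrete with assumed numbers: a sequential read only crosses a page boundary every 42 MB, so the per-page indirection adds very few map lookups relative to the data moved, and within each page the read stays sequential.

```python
# Back-of-envelope: how many 42 MB pages a sequential read touches.
PAGE_MB = 42

def pages_touched(start_mb, length_mb):
    first = start_mb // PAGE_MB
    last = (start_mb + length_mb - 1) // PAGE_MB
    return last - first + 1

# A 1 GB sequential read starting at an arbitrary offset:
print(pages_touched(start_mb=100, length_mb=1024))  # 25 pages
```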
