How do you thin provision and who needs to know?
by Hu Yoshida on Jun 23, 2009
USP V users are finding it easy to free up 30% to 40% of existing storage capacity with Hitachi Dynamic Provisioning. The best part is that they can convert their current “fat,” over-allocated volumes to thin-provisioned volumes with no downtime for migration, no rezoning of the SAN, and little or no impact to applications. While the applications are running, we can create a Hitachi Dynamic Provisioning (HDP) pool of storage by using spare capacity or by adding internal or external storage to the USP V/VM. Once the HDP pool is created, we can use the Hitachi Tiered Storage Manager software in the USP V/VM to move application volumes into this pool while the application is running. This movement is done in the background and can be throttled to maintain application performance during the move.
As the volume is moved into the pool, it is moved in 42 MB pages, and the pages are striped across all the disks in the pool. At the end of the move, the USP V/VM switches the internal paths to the new volume in the HDP pool. The pages are then checked to see if they are all zeros; these zero pages are reclaimed and returned to the HDP pool for use by other applications. The old (fat) volume can be shredded (overwritten) to erase the old data and made available as free capacity. Depending on the configuration and location of the old volume, it may be added to the HDP pool to expand its capacity and wide-striping performance.
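The zero-page reclaim step described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Hitachi's actual microcode; the function name and the page/pool data structures are invented for the example:

```python
# Hypothetical sketch of zero-page reclaim (not Hitachi's microcode):
# after migration, any page that contains only zeros is unmapped
# from the volume and returned to the pool's free list.

PAGE_SIZE = 42 * 1024 * 1024  # HDP allocates capacity in 42 MB pages


def reclaim_zero_pages(volume_pages, pool_free_pages):
    """Return the pages still mapped to the volume; all-zero
    pages are moved onto the pool's free list instead."""
    still_mapped = []
    for page in volume_pages:
        if all(b == 0 for b in page):      # page holds no data
            pool_free_pages.append(page)   # reclaim into the pool
        else:
            still_mapped.append(page)     # page has real data, keep it
    return still_mapped
```

The key point the sketch captures is that reclaim is a scan-and-unmap pass over already-migrated pages, which is why it can run behind the scenes without the application noticing.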
For example, let’s say that we have a 400 GB volume which we migrate into a new 4000 GB HDP pool. After we move the volume into the pool, the microcode examines the pages that were assigned to the volume and finds that half of them contain zeros. These zero pages are reclaimed and returned to the HDP pool, so the volume has now been thinned to 200 GB and the remaining free capacity of the HDP pool is 3800 GB. The old 400 GB volume can be shredded and added to the HDP pool, increasing its total capacity to 4400 GB and its available free capacity to 4200 GB. When this new capacity is added to the pool, the thin 200 GB volume is re-striped across it, extending the wide-striping performance benefit across the additional disks. In this way we can continue to free up more capacity, and even increase performance, until we run out of fat volumes.
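The capacity arithmetic in this example can be checked with a short calculation (a Python sketch; all sizes in GB):

```python
# Worked arithmetic for the example above (all sizes in GB).
pool_capacity = 4000
volume_size = 400

# Step 1: migrate the 400 GB fat volume into the pool.
pool_free = pool_capacity - volume_size    # 3600 GB free

# Step 2: half the pages are all zeros and are reclaimed.
thin_volume = volume_size // 2             # volume thins to 200 GB
pool_free += volume_size - thin_volume     # 3800 GB free

# Step 3: shred the old 400 GB volume and add it to the pool.
pool_capacity += volume_size               # 4400 GB total
pool_free += volume_size                   # 4200 GB free

print(pool_capacity, pool_free, thin_volume)  # 4400 4200 200
```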
Since this is all done within or behind the USP V/VM, the applications, the servers, and the SAN are not aware of the change from fat to thin. No changing of World Wide Names, no rezoning of the SAN, no exposure to configuration errors. The only difference that the applications and servers will see is an improvement in I/O response time and throughput since the thin volume is now wide striped across all the disks in the pool.
Most users reclaim 30% to 40% of their storage capacity and see throughput improve by several hundred percent due to the automated wide striping of pages across all the disks in the pool. An HDP license is required, but it is free for the first 10 TB. The payback is almost immediate, since you free up capacity in a matter of hours. However, some file systems are not thin-provisioning friendly, since they write metadata intermittently across the file space. What if you move a volume into an HDP pool and find that you have not reclaimed any capacity? You will still experience the performance benefits of wide striping and automated load balancing. But if you are still not happy, you can back it out by simply migrating the thin volume back to its old fat condition, without disruption, and hope that your application users do not notice the change in performance.
There are other storage systems that do thin provisioning, but unless they also provide virtualization, they will not be able to thin your existing fat volumes without disruption to your applications, servers, and SAN. What happens in the USP V/VM stays in the USP V/VM, and they don’t need to know.
Comments (10)
Our company has realized some amazing returns by using HDP. In addition to reclaiming unused storage with the thin provisioning features, the performance of HDP has also been significant.
In our storage environment we have virtualized our EMC DMX arrays behind our USP-V and have placed the virtualized volumes into an HDP pool which has increased utilization and performance of an externally virtualized array.
Until EMC and IBM-XIV can externally virtualize and pool storage from my other suppliers I see the benefits of the HDP technology as something only Hitachi can do.
I believe at the end of paragraph 1 there is a mistake
You said –
“..we can use the Universal Volume Manager software in the USP V/VM to move application volumes into this pool while the application is running.”
I think you intended to say –
“..we can use the Hitachi Tiered Storage Manager software in the USP V/VM to move application volumes into this pool while the application is running.”
Thanks to customerstorageexpert for his endorsement. It actually works as advertised.
Nigel thanks for setting the record straight on HDP in your blog post http://blogs.rupturedmonkey.com/?p=461
Also, thanks for correcting my error in this post. The Universal Volume Manager enables attachment of external storage, and the Hitachi Tiered Storage Manager is the data mover that moves application volumes across tiers of storage based on policy.
I have edited it to correct it as you pointed out.
[...] next WRT is sequential or random. I won’t rehash all the arguments here; you can read them all in Hu’s blog, Marc Foley’s, Nigel Poulton’s (replete with video; nice touch, Nigel!), not to mention Tony [...]
What about the limitation on shared memory for activating Dynamic Provisioning?
I heard that so far the leveling function for new volumes added to an existing HDP pool is not working with the current microcode of the USP V, but is supposed to work in a release to come. That means a newly created HDP pool is optimized, but storage added to the pool afterward will not be. Is that true or wrong?
We created external storage HDP pools. Will the LUNs placed on these external pools benefit from the same optimization that applies to internal volumes?
Vladimir, there is an impact on shared memory which is configuration dependent. Your HDS representative can give you recommendations that are specific to your implementation. I will address this in a future blog, since it raises some points that highlight advantages of HDP in our USP V architecture.
George, the automatic leveling function which I refer to is available for HDP pools created with the new HDP v05 release. We are looking at providing this for pools that were created before v05, but that is not available today. Rebalancing for non-v05 HDP pools is done by moving the HDP volumes into a larger pool.
With v05, external storage HDP pools will also be automatically rebalanced.
[...] This post was initiated by a comment from Vladimir Lavrentyev to my recent post on HDP. [...]