
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Hitachi Dynamic Provisioning without Thin Provisioning

by Hu Yoshida on Nov 19, 2008

Hitachi Dynamic Provisioning, HDP, is a feature of the USP V/VM that virtualizes storage capacity across a pool consisting of multiple RAID array groups. This pool of physical RAID array groups is divided into 42 MB pages that are used on demand by any of the virtual host volumes drawing on the pool. This enables HDP to provide a number of services.

The first benefit of HDP that everyone thinks of is thin provisioning. Thin provisioning enables the USP V/VM to fill a request for a LUN allocation, for example 100 GB, with virtual capacity, and to thinly provision the LUN with 42 MB pages of real storage capacity as the application starts to write to it. The unused capacity behind the 100 GB LUN allocation is then available to thinly provision other LUN allocation requests from the same HDP pool. This helps increase storage utilization by eliminating most of the allocated but unused portion of a LUN.
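
To make the page-on-demand idea concrete, here is a minimal sketch in Python of a thinly provisioned pool. It is only an illustration of the concept described above: the class and method names are invented for this example, and the only number taken from the post is the 42 MB page size.

PAGE_MB = 42  # HDP allocation unit described in the post

class ThinPool:
    """Toy model of a pool of 42 MB pages shared by several virtual LUNs."""

    def __init__(self, physical_mb):
        self.free_pages = physical_mb // PAGE_MB   # real capacity, in pages
        self.luns = {}                             # LUN name -> virtual size and page map

    def create_lun(self, name, virtual_mb):
        # A 100 GB request consumes no pool pages until the host writes to it.
        self.luns[name] = {"virtual_pages": virtual_mb // PAGE_MB, "pages": {}}

    def write(self, name, offset_mb):
        lun = self.luns[name]
        page = offset_mb // PAGE_MB
        if page >= lun["virtual_pages"]:
            raise ValueError("write beyond the virtual size of the LUN")
        if page not in lun["pages"]:               # first write to this 42 MB region
            if self.free_pages == 0:
                raise RuntimeError("pool exhausted: over-subscription caught up with us")
            self.free_pages -= 1
            lun["pages"][page] = bytearray()       # stand-in for a real 42 MB pool page

pool = ThinPool(physical_mb=200_000)               # roughly 200 GB of real capacity
pool.create_lun("vol1", virtual_mb=102_400)        # the host sees a 100 GB LUN
pool.write("vol1", offset_mb=0)                    # only now is one 42 MB page consumed
print(len(pool.luns["vol1"]["pages"]), "page(s) of real storage in use")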

However, the real reason Hitachi calls this feature Dynamic Provisioning is that it does just that. It enables IT to dynamically provision LUNs from this HDP pool by assigning virtual LUN capacity as quickly as drag and drop. Instead of taking hours to provision a new server with LUNs, HDP can dynamically provision a new server in a matter of minutes.

Another service is thin moves and copies. Since HDP knows how many pages are assigned to a LUN allocation when it does a move or copy of that LUN, it moves or copies only the pages actually in use and not the whole LUN. This reduces operational expense by eliminating the moving or copying of unused capacity.
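
As a rough illustration of why thin copies are cheaper, the sketch below (same caveat: invented names, Python used only to show the idea) copies just the pages present in a LUN's page map rather than the full allocated size.

PAGE_MB = 42

def thin_copy(source_pages, free_pages):
    """Copy only the allocated pages of a virtual LUN.

    source_pages maps virtual page index -> page contents; pages the host
    never wrote simply do not appear in the map, so they are never copied.
    Returns the new page map and the remaining free-page count.
    """
    target_pages = {}
    for index, contents in source_pages.items():
        if free_pages == 0:
            raise RuntimeError("pool exhausted during copy")
        target_pages[index] = bytes(contents)      # one 42 MB page copied
        free_pages -= 1
    return target_pages, free_pages

# A "100 GB" LUN with a single written page costs one page to copy, not ~2,400.
source = {0: b"data in the first 42 MB page"}
copy, remaining = thin_copy(source, free_pages=1_000)
print(f"copied {len(copy)} page(s) ({len(copy) * PAGE_MB} MB); {remaining} free pages left")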

Since we wide stripe the LUN by using multiple 42 MB pages, which in turn draw on disk spindles across the whole HDP pool, we can increase throughput and decrease response time by bringing more disk spindles to bear on the I/O requests to all the virtual volumes. So wide-stripe performance is another benefit.
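
A small sketch of the wide-striping idea, again purely illustrative: round-robin placement across four hypothetical array groups is an assumption made for this example, not a description of the actual USP V/VM page-placement algorithm.

PAGE_MB = 42
ARRAY_GROUPS = ["AG-1", "AG-2", "AG-3", "AG-4"]    # hypothetical members of one HDP pool

def place_page(page_index):
    """Pick the array group backing a given virtual page (simple round-robin)."""
    return ARRAY_GROUPS[page_index % len(ARRAY_GROUPS)]

# The first few hundred MB of a single LUN already touch every array group,
# so I/O to that one LUN is serviced by all the spindles in the pool.
for page in range(8):
    start, end = page * PAGE_MB, (page + 1) * PAGE_MB
    print(f"page {page}: {start}-{end} MB -> {place_page(page)}")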

One large storage user has decided to use HDP but not use thin provisioning. He does not want to run the risk of over-subscribing his storage under peak conditions, so he never allocates more capacity than he physically has in the HDP pool. So what is the benefit of this if he is not using thin provisioning?

1. He still gets the benefit of dynamic provisioning, the ability to provision new servers in a matter of minutes.
2. He also has the operational benefit of thin moves and copies. For instance, it takes less time to replicate 42 MB of used capacity than 100 GB of allocated capacity.
3. And he still has the performance benefits of wide striping. This equates to eliminating storage performance hot-spots which in the past kept administrators needlessly busy moving application data around the array.

His view is that capacity is cheap and getting cheaper, but his operational expenses are going to continue to increase. So the benefit he sees in HDP is not in thin provisioning but in dynamic provisioning, thin copies and moves, and wide stripe performance.


Comments (9)

Sudhakar Mungamoori on 21 Nov 2008 at 11:51 am

We like the idea behind HDP and we’re using it in a small deployment in our multi-petabyte environment.

However, on the implementation side for enterprise arrays with an OLTP profile, have there been any discussions related to changes to queue depth settings for HDP LUNs?

Secondly, what happens if we hit cache-pinned data? We cannot do anything to the underlying RAID groups (reformat, etc.) without destroying the entire HDP pool on top of it.

Barry Whyte on 21 Nov 2008 at 4:27 pm

So, you are calling disk striping and fully allocated TP disks HDP. Fair enough.

Striping has been around for a long time, and yes we do said striping in SVC. So another benefit we share.

You could do the same with SVC, that is, make sure you have as much physically allocated space as you virtually allocate.

Oh and you can thin provision it too, rather than chubby provision. (32K vs 42MB)

So how do you see the USP-VM ??? Is it a SAN or Storage Virtualizer?

Customer Storage Expert on 22 Nov 2008 at 10:06 pm

The wide striping concept is getting close to the concept of a tier 0.

When will the HDS tier 0 allow for SSD NAND technology that will allow for dynamic provisioning and/or thin provisioning with Virtualized Tier-2 or Tier-3 SATA drives?

ScottF on 25 Nov 2008 at 5:29 pm

Wide striping is nothing new, I’m surprised you even mention it. The SVC does wide striping, the Symmetrix can do wide striping (i.e. striped meta volumes) in the same manner. I think HDS has been behind other technologies in this respect… remember the inability to do striped LUSE volumes (only concatenated)? Same thing, really, just not virtualized to the extent we see nowadays.

Hu Yoshida on 26 Nov 2008 at 1:45 pm

Hello Sudhakar, I am forwarding your question to your local HDS support rep.
I do not know whether queue depth is an issue for internal pools. Your rep can explore that further with you.
You can rewrite a track in any array group now to correct a pinned track without reformatting the array group.
The engineers tell me that pinned track handling has been enhanced to cover this problem in HDP. The HDS engineer can use the SVP to determine which DPVOL is using a specific (pinned) track. Typically the recovery method is to restore the impacted LDEV, so in this case the impacted DPVOL could be rewritten (recovered) to correct the pinned track. As you noted, recovering a pool volume would mean a loss of all the DPVOLs pointed at the pool. That outcome does not apply here, because with HDP the individual DPVOL (and not a pool LDEV) is rewritten.
For more detail contact your local rep.

Hu Yoshida on 26 Nov 2008 at 1:57 pm

Hello Barry,
Yes wide striping is not new.
The USP VM is a storage virtualizer, since it is a USP V that is packaged and priced for the midrange market. It is packaged in a 19 in rack and contains only 32 processors instead of the 128 processors that are possible in a fully populated USP V. It is a target for application servers attached via FC, DAS, SAN, ESCON, or FICON. It is not dependent on the SAN for connectivity and does not have to act as a proxy to determine what the server wants to do with storage.

Hu Yoshida on 26 Nov 2008 at 2:07 pm

Hello Customer Storage Expert, I guess I have a different view of what tier 0 is. To me, tier 0 is no storage at all, meaning data that is stored in system memory and not on external storage.
As far as SSD NAND is concerned, I cannot comment on future product capabilities.
The USP V architecture is capable of creating Dynamic Provisioning pools from the different disk types that are supported in the USP V. This is not recommended, however, since different disk types have different performance and different recovery characteristics.

Mark on 06 Dec 2008 at 7:09 am

Hu,
I love the USP, I love the quality, I respect Hitachi Limited.
I just struggle with the me-too aspect of the solution from time to time. It seems that feature sets need to be retrofitted from time to time, a little bit of me-too at work. I have been involved with servicing, supporting and selling the HDS products since 1998, and have had much success. I feel that HDP is another me-too retrofit. I struggle with the feature, and am not a big fan, and neither are my customers. Hitachi Ltd. should have left this feature set to those who are block-level aware and do a good job with it.

[...] on the HDS USP-V, ThP on the HP XP24000) is based around a basic allocation unit of 42MB. HDS refer to this as a Page. Essentially any time a host writes to a previously unallocated area of a LUN, the array [...]
