
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Dynamic or Thin Provisioning

by Hu Yoshida on May 20, 2007

Since Hitachi announced the new USP V with Dynamic Provisioning last Monday, there has been a lot of buzz about the dynamic or “thin provisioning” capabilities of this platform. Popular blogs like RupturedMonkey have had some interesting discussions about this, and other vendors have been rushing to publicize their thin provisioning capabilities or announce their intent to provide it.

All the major analysts, including Gartner, ESG, OVUM, IT Centrix, Evaluator Group, IDC, and Illuminata, have published reports about thin provisioning or have been quoted as identifying it as a required tool for managing storage capacity. Most would credit 3ParData with coining the term “thin provisioning.” Other vendors that provide thin provisioning include NetApp, DataCore, and Compellent. Many vendors offer thin provisioning in their NAS systems, including Hitachi Data Systems with HNAS and EMC with Celerra.

What differentiates Hitachi USP V Dynamic Provisioning from thin provisioning is that it is the first implementation to combine capacity virtualization with volume virtualization on an enterprise-class platform.

Unlike 3ParData and other implementations, which are based on clusters of active/passive dual-controller nodes with separate caches, limited port connectivity, and static backplane architectures, the Hitachi USP V is an enterprise-class, high-availability platform with dynamic global cache, a switched architecture for cache and disk access, and hundreds of thousands of virtual ports. The USP V provides thin provisioning with enterprise-class availability, performance, and scalability.

USP V volume virtualization virtualizes volumes across heterogeneous storage systems, enabling them to dynamically change configurations, move volumes across heterogeneous tiers of storage, migrate volumes for technology refreshes or lease expirations, and replicate volumes for business continuance. Volume virtualization provides thin provisioning with data mobility across heterogeneous systems.

We have announced Dynamic Provisioning initially for internal storage on the USP V. As we gain experience with this technology, we hope to provide it as a service to other storage systems that are externally attached.

Dynamic Provisioning is not a panacea for all our storage woes. Some applications do a hard format or write across the entire volume when they allocate it, which negates the value of thin provisioning. Even then, Dynamic Provisioning can still be useful from a performance perspective, because it provides wide striping. Wide striping comes from allocating chunks across all the drives in a storage pool, which may contain a hundred drives or more. Spreading I/O across that many physical drives greatly improves drive performance and eliminates the administrative work of tuning the placement of volumes across spindles. Dynamic Provisioning can be viewed as two products, one for thin provisioning and another for wide striping of I/O, and each has merit in its own right.
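To make the two ideas concrete, here is a minimal sketch of allocate-on-write thin provisioning with round-robin chunk placement. Everything in it (the chunk size, the class and method names, the round-robin layout) is an illustrative assumption, not a description of the USP V implementation:

```python
# Hypothetical sketch: a thin pool allocates a physical chunk only on the
# first write to a virtual chunk, and places consecutive chunks on
# consecutive drives -- which is where wide striping comes from.

CHUNK_SIZE = 42 * 1024 * 1024  # bytes per pool chunk (size is an assumption)

class ThinPool:
    def __init__(self, num_drives):
        self.num_drives = num_drives
        self.next_chunk = 0   # next free physical chunk in the pool
        self.mapping = {}     # (volume_id, virtual_chunk) -> physical chunk

    def write(self, volume_id, offset):
        """Map a virtual offset to a physical chunk, allocating on first write."""
        vchunk = offset // CHUNK_SIZE
        key = (volume_id, vchunk)
        if key not in self.mapping:      # first write: consume pool capacity
            self.mapping[key] = self.next_chunk
            self.next_chunk += 1
        pchunk = self.mapping[key]
        # Round-robin placement spreads consecutive chunks across all drives.
        drive = pchunk % self.num_drives
        return drive, pchunk

# Two sparse volumes share one pool; capacity is consumed only on write.
pool = ThinPool(num_drives=100)
pool.write(volume_id=1, offset=0)
pool.write(volume_id=2, offset=10 * CHUNK_SIZE)
```

Note how the same mapping table delivers both benefits at once: unwritten virtual chunks consume no pool capacity (thin provisioning), while written chunks land on different spindles (wide striping).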

Over time, I expect that most storage vendors will be able to provide thin provisioning in their storage controllers. The question is how network-based virtualization systems like SVC and InVista will provide thin provisioning. How will they be able to combine volume virtualization with capacity virtualization?


Comments (2)

Barry Whyte on 21 Jul 2007 at 3:09 pm

Hi Hu,

Maybe I’m too close to the source but it would seem like a rhetorical question you ask…

SVC, and any other in-band virtualization appliance, is in the PERFECT place to provide thin provisioning. All data flows through the device, so it’s purely a matter of updating the virtualization mapping information as data is written to a thin-provisioned device. In fact, it’s probably the simplest place to implement thin provisioning.

Han Solo on 18 Aug 2007 at 6:33 pm

>has to be done in the controller

Well, frankly this is really the heart of the issue in storage virtualization.

I started out being one of those hard core “do it in the network” people, who thought everything should be moved to the switches.

On a white board or at the 10,000 ft typical industry analyst level that sounds great and makes perfect sense.

However, the reality is that you can only do VERY basic things in the SAN switch layer, such as simple synchronous copies/mirrors.

Actually trying to do something interesting, such as thin provisioning, TRUE copy-on-write point-in-time snapshots (not just full 1-1 copies), async mirroring, actual RAID levels not just mirroring, etc., requires keeping track of the DATA STATE OVER TIME.

That is something the switch layer can’t do, and why in the end it turns out that SVC and FalconStor etc. have the right idea, and the whole split-path out-of-band stuff is limited and not that great.

The limitations of “in the network” virtualization are so great that the microseconds of latency added by the appliances used by IBM’s SVC and FalconStor are minimal compared to the ENORMOUS amount of power that the appliance approach gives you in terms of actual useful, creative and innovative uses for storage virtualization.
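The “data state over time” point in the comment above can be illustrated with a minimal copy-on-write snapshot sketch. All names and structure here are illustrative assumptions, not any vendor’s implementation:

```python
# Hypothetical sketch: a copy-on-write snapshot preserves a block's old data
# the first time that block is overwritten after the snapshot was taken.
# This per-block history is exactly the "state over time" a stateless
# switch-layer data path cannot track.

class CowVolume:
    def __init__(self):
        self.blocks = {}     # block number -> current data
        self.snapshots = []  # each snapshot: {block -> data at snapshot time}

    def snapshot(self):
        """Start a snapshot; it copies nothing until a block is overwritten."""
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, block, data):
        # Before overwriting, preserve the old data in every snapshot that
        # has not yet saved this block.
        old = self.blocks.get(block)
        for snap in self.snapshots:
            if block not in snap:
                snap[block] = old
        self.blocks[block] = data

    def read_snapshot(self, snap_id, block):
        snap = self.snapshots[snap_id]
        if block in snap:
            return snap[block]         # block changed since the snapshot
        return self.blocks.get(block)  # unchanged: read the current copy

vol = CowVolume()
vol.write(0, "v1")
sid = vol.snapshot()
vol.write(0, "v2")  # the old "v1" is preserved for the snapshot on this write
```

Unlike a full 1-1 copy, only blocks that actually change after the snapshot consume extra space, which is what makes copy-on-write snapshots attractive, and what forces the virtualization layer to hold persistent state.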
