What is the difference between thin provisioning and dynamic provisioning?
by Hu Yoshida on Feb 24, 2009
When a new idea or technology is introduced, we look for ways to help people understand what it does by relating it to an existing capability. The danger in doing this is that the understanding of the new technology becomes limited to perceptions of the older, existing technology.
This seems to be happening with Hitachi Dynamic Provisioning (HDP), the ability to virtualize a pool of storage capacity. To help people understand it, we often refer to it as thin provisioning: the ability to satisfy an allocation request with virtual capacity and provision real capacity only as it is used. This addresses the waste of allocated but unused space. Thin provisioning has been available for some time in stand-alone storage systems.
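The allocate-on-first-write idea behind thin provisioning can be sketched in a few lines. This is a minimal, hypothetical model (the class and method names are illustrative, not HDP's actual implementation): a volume advertises its full virtual capacity but maps virtual pages to physical pages in a shared pool only when a page is first written.

```python
# Illustrative sketch of thin provisioning: virtual pages are mapped
# to physical pool pages only on first write, so a volume consumes
# real capacity proportional to the data actually written.

class Pool:
    """A shared pool of physical pages."""
    def __init__(self, physical_pages):
        self.free = list(range(physical_pages))
        self.data = {}

    def allocate(self):
        if not self.free:
            raise RuntimeError("pool exhausted")
        return self.free.pop()

    def store(self, ppage, payload):
        self.data[ppage] = payload


class ThinVolume:
    """Advertises virtual_pages of capacity; allocates lazily."""
    def __init__(self, pool, virtual_pages):
        self.pool = pool
        self.virtual_pages = virtual_pages
        self.page_map = {}   # virtual page -> physical page

    def write(self, vpage, payload):
        if vpage >= self.virtual_pages:
            raise IndexError("write beyond advertised capacity")
        if vpage not in self.page_map:   # allocate on first write
            self.page_map[vpage] = self.pool.allocate()
        self.pool.store(self.page_map[vpage], payload)

    def used_pages(self):
        return len(self.page_map)        # real capacity consumed
```

A volume advertising 1,000 pages consumes no pool capacity until something is written to it, and rewrites to the same page consume nothing extra.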
While thin provisioning is a benefit that Dynamic Provisioning can provide, it is only one of many. Hitachi named this feature Hitachi Dynamic Provisioning to focus on the capability to dynamically provision servers with preformatted capacity in minutes rather than hours. Dynamic Provisioning enables thin copies, thin moves, thin tiering, thin replication, and thin migration of volumes, since we know how many pages are actually in use in a volume. This reduces the operational cost of copying and moving data. We can dynamically reduce a fat volume simply by moving it into an HDP pool. Dynamic Provisioning also increases performance by striping across many spindle arms. With storage virtualization, HDP can be extended to externally attached storage, so existing storage systems do not require a rip-and-replace to enjoy the benefits of dynamic provisioning.
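The striping benefit mentioned above comes from distributing a pool's pages across all of its spindles rather than dedicating a few disks to each volume. A simple round-robin placement (an illustrative assumption, not HDP's actual placement algorithm) shows the effect:

```python
# Hypothetical sketch of wide striping: pool pages are placed
# round-robin across all spindles, so any volume's I/O is spread
# over every disk arm in the pool.

def spindle_for_page(page_number, num_spindles):
    """Map a pool page to a spindle index, round-robin."""
    return page_number % num_spindles
```

With a 4-spindle pool, sequential pages 0 through 7 land on spindles 0, 1, 2, 3, 0, 1, 2, 3, so even a single volume's sequential I/O keeps every arm busy.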
One of the inhibitors to the acceptance of thin provisioning is the danger of users hitting peak demands where the need for storage exceeds the physical capacity of the pool. Although there are soft and hard high-water marks that alert administrators when additional capacity needs to be added, a sudden surge in demand could occur before that capacity can be added. While this is a real concern for thin provisioning, it should not inhibit users from enjoying the other benefits of dynamic provisioning.
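The soft/hard watermark scheme can be sketched as a simple utilization check. The threshold values below are assumptions for illustration, not HDP's actual defaults:

```python
# Illustrative soft/hard high-water-mark check on a pool.
# Thresholds (70% soft, 90% hard) are assumed values, not HDP defaults.

def pool_alert(used_pages, total_pages, soft=0.70, hard=0.90):
    """Return an alert string when pool utilization crosses a
    watermark, or None while utilization is below the soft mark."""
    utilization = used_pages / total_pages
    if utilization >= hard:
        return "HARD: add capacity now; new allocations may soon fail"
    if utilization >= soft:
        return "SOFT: plan to add capacity"
    return None
```

The gap between the soft and hard marks is the administrator's window to add disks; the risk described above is a demand surge that crosses both marks faster than capacity can be installed.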
One of our customers is planning to use dynamic provisioning in their service provider business without implementing thin provisioning. Since they charge users for capacity, they physically allocate the total capacity that was requested into the dynamic provisioning pool. By using thin-provisioned volumes, they can do thin moves and thin copies to reduce their operational cost. They can dynamically provision new servers out of this pool in minutes and enjoy the performance increase that comes from striping I/Os across all the spindles in the pool. If a user does exceed their allocated capacity, they can instantly provide more storage temporarily until they can add more storage to the pool. Of course, they will charge extra for that service.
So thin provisioning is only one of the capabilities available with dynamic provisioning. Since disk capacity is relatively inexpensive compared to the operational costs of provisioning, copying, moving, migrating, and expanding LUNs, it may be the least important feature of dynamic provisioning, although it is the one most people talk about.
Comments (5)
I appreciate that HDS provides thin migration – this is a nice element to point out!
One question I have is how, exactly, does the array handle a “fat” to thin move? You say “We can dynamically reduce a fat volume simply by moving it into an HDP pool.” How does the array know if a chunk or page or block or whatever is unused and does not need to be allocated? I can understand that an already-thin volume would “know” which were requested and which were not, but how would it do this with an existing “fat” volume. Is it looking for zeros?
And which HDS arrays is this feature available on? How many read/write thin snapshots can you have of a LUN?
Enjoy reading your blog! As an HDS partner, I thought you may be interested in taking a look at mine too.
Mr. Foskett raises some very important issues in his recent post. Migrating from “thick” to “thin” is not always as clean as IT managers imagine when they begin the project.
Most data migration tools that are not “thin-aware” will do an incomplete job of the migration. Such tools will move the data from a thick volume to a thin volume, but will move all blocks contained in the original volume. Thus, when the data movement is complete, the supposedly thin volume is as thick as the original. To create the thin volume, storage administrators must reclaim the unused space. This may be a manual process and may require the application to be halted during volume resizing.
Some data migration tools are smart enough to recognize contiguous spans of all-zero data as large as a data chunk and avoid migrating them to the thin volume. This works well, but only if the data chunk contains nothing but zeros. Any non-zero data, such as that left over after a logical deletion without physical erasure, will cause the entire chunk to be migrated. Thus, such solutions are an improvement, but only a partial one.
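The zero-detection approach described above can be sketched as a chunk-by-chunk copy loop. The chunk size and function names are illustrative assumptions; the point is that a single stale non-zero byte forces the whole chunk to be copied, which is exactly the limitation noted:

```python
# Sketch of chunk-level zero detection during a thick-to-thin copy.
# CHUNK_SIZE and the read/write callables are hypothetical; only
# chunks containing at least one non-zero byte are written to the
# thin target, so deleted-but-not-erased data still gets copied.

CHUNK_SIZE = 4096  # assumed chunk size for illustration

def migrate_thick_to_thin(read_chunk, write_chunk, num_chunks):
    """Copy only non-zero chunks; return the count actually written."""
    copied = 0
    for i in range(num_chunks):
        chunk = read_chunk(i)
        if any(chunk):            # any non-zero byte => must copy
            write_chunk(i, chunk)
            copied += 1
    return copied
```

A filesystem-aware tool like the SmartMove utility described below can do better than this, because it knows which blocks are allocated rather than merely which happen to be zero.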
Symantec has worked with Hitachi to help automate the thick-to-thin migration. Symantec’s SmartMove, a utility within Storage Foundation, will migrate only data that is in use from a thick volume to a thin volume. This capability comes from Storage Foundation’s ability to see the entire server-to-block data stack and determine which blocks are actually allocated to an application. Using SmartMove, the thick-to-thin process is entirely automatic, with no administrator intervention to reclaim space and no downtime. An HDS/Symantec white paper on the topic may be found at http://eval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_sf_and_hdp_06-2008.en-us.pdf.
Fat-to-thin moves can be done online, without downtime, if you have some space in each DKU to add disks.
You can then use the V1 volume migration feature or Tiered Storage Manager to migrate data.
Please note that V1 migration has never become popular among HDS storage administrators. I have used the feature to migrate petabytes of data online with no downtime. HDS needs to publish some tips to make such excellent features visible to users.