Hu's Blog

Overheads for Thin Provisioning

by Hu Yoshida on Jul 7, 2009

All thin provisioning implementations have overhead associated with tagging and mapping the chunks or pages that are used to provision a virtual volume.  Some also have additional overhead to handle the RAID protection that supports these pages or chunks. In this post I will explain what Hitachi does to address these overheads.

This post was initiated by a comment from Vladimir Lavrentyev to my recent post on HDP.

First, Hitachi Dynamic Provisioning creates an HDP pool that maps across multiple RAID groups. There may be any number of RAID groups in an HDP pool. Since HDP volumes are striped across the width of the HDP pool, they are striped across multiple RAID groups and have greater protection from multiple disk failures. With an HDP volume you could have multiple disk failures and not lose data, as long as no more than one disk fails in any single RAID group. With an HDP pool built from RAID 6 groups, you could tolerate two disk failures per RAID group. The RAID generation for a RAID group is performed in the back-end RAID director hardware. Unlike other thin provisioning systems that have no externalized storage virtualization, the USP V can leverage either internal or external RAID director hardware. If the storage is external, the overhead for RAID is off-loaded to the external storage system. Thin provisioning systems that use small chunks have additional overhead in managing the placement of chunks across RAID groups.
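To make the failure-tolerance point concrete, here is a minimal sketch (illustrative only, not Hitachi code) of the rule described above: volumes striped across an HDP pool keep their data as long as no single RAID group in the pool loses more disks than its RAID level can tolerate, which I assume here to be one disk for RAID 5 and two for RAID 6.

```python
# Illustrative sketch: data in an HDP pool survives as long as no single
# RAID group exceeds its parity tolerance (assumed 1 for RAID 5, 2 for RAID 6).

RAID_TOLERANCE = {"RAID5": 1, "RAID6": 2}

def pool_survives(raid_groups, failed_disks_per_group):
    """raid_groups: RAID level of each group in the HDP pool.
    failed_disks_per_group: failed-disk count for each group, same order."""
    return all(
        failures <= RAID_TOLERANCE[level]
        for level, failures in zip(raid_groups, failed_disks_per_group)
    )

# Four RAID 6 groups with five total disk failures, but no group has lost
# more than two disks, so volumes striped across the pool keep their data.
print(pool_survives(["RAID6"] * 4, [2, 1, 2, 0]))             # True
# A RAID 5 group that loses two disks takes the striped volumes with it.
print(pool_survives(["RAID5", "RAID6", "RAID6"], [2, 0, 1]))  # False
```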

In terms of the tagging and mapping of pages or chunks, most thin provisioning systems use the same processors, cache, and buses that are used for data flow. This causes contention, which impacts performance. In the USP V/VM, these processes are separated to avoid contention. The mapping information is maintained in a separate control store from the data cache. This control store is a “global” control store in that it has direct access from all the processors in the USP V/VM, so an update made to the map by one processor can be seen by all the other processors. This direct access is separate from the access path to the data cache. All HDP mapping tables are present in the control store at all times, so the USP V/VM does not have to wait to retrieve less frequently used mapping segments from storage. HDP also uses a 42 MB page size, which reduces the need to access these tables as frequently as other systems that use smaller chunk sizes. In these ways the USP V/VM eliminates overhead and reduces contention with the use of HDP.
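A quick back-of-the-envelope comparison shows why the 42 MB page matters for mapping overhead: the number of entries the mapping layer must track is roughly the provisioned capacity divided by the page size. The smaller chunk sizes below are hypothetical values chosen only for illustration, not figures for any particular competing product.

```python
# Back-of-the-envelope sketch: mapping-table entries needed per page size.
MB = 1024 ** 2
TB = 1024 ** 4

def mapping_entries(provisioned_bytes, page_bytes):
    # One mapping entry per allocated page or chunk.
    return provisioned_bytes // page_bytes

provisioned = 100 * TB
for label, page in [("HDP 42 MB page", 42 * MB),
                    ("hypothetical 768 KB chunk", 768 * 1024),
                    ("hypothetical 16 KB chunk", 16 * 1024)]:
    print(f"{label}: {mapping_entries(provisioned, page):,} entries")
```

For the same 100 TB of provisioned capacity, the larger page yields orders of magnitude fewer mapping entries, which is why the tables can stay resident in the control store and are consulted less often.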

So while there is overhead in HDP, as there is in all thin provisioning systems, this overhead is minimized and architected to avoid contention with data access.


Comments (2)

[...] there’s Hu Yoshida’s post referring to the Overheads of Thin Provisioning.  In it, Hu makes a very interesting claim that [...]
