
Hu Yoshida's Blog - Vice President | Chief Technology Officer


A Top Priority for 2010

by Hu Yoshida on Jan 4, 2010

Happy New Year and welcome to 2010! I wish you all a healthy and productive new year.

While the economy seems to be getting better, budget planners are still very cautious, so we will continue to see a drive to consolidate, reduce cost, and trim the fat to become more agile. Data center virtualization will therefore be a top priority for 2010. We have already seen the adoption of server virtualization platforms become mainstream, with more competitive offerings and faster, more powerful processors and networks. The next major step in data center consolidation will be the consolidation of storage through thin provisioning.

We have seen capacity virtualization, or thin provisioning, introduced by most storage vendors in 2009. While there was initial concern about over-provisioning, and confusion about the benefits and drawbacks of different chunk or page sizes, users who have tried thin provisioning have seen benefits in increased utilization, ease of provisioning, and wide-stripe performance. However, a major inhibitor has been the need to rip and replace existing storage, since thin provisioning requires architectural changes to manage the metadata associated with virtual pools of capacity.
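To make the mechanism concrete, here is a minimal sketch of thin provisioning in Python: physical pages are allocated from a shared pool only when a virtual page is first written, so a volume can present more capacity than is physically backed. The page size, class, and method names are illustrative assumptions, not any vendor's API.

    PAGE_SIZE = 42 * 1024 * 1024  # assumed page size; real chunk/page sizes vary by vendor

    class ThinPool:
        def __init__(self, physical_pages):
            self.free_pages = list(range(physical_pages))
            self.page_map = {}  # (volume_id, virtual_page) -> physical page

        def write(self, volume_id, offset, data):
            vpage = offset // PAGE_SIZE
            key = (volume_id, vpage)
            if key not in self.page_map:
                if not self.free_pages:
                    raise RuntimeError("pool exhausted: over-provisioned")
                self.page_map[key] = self.free_pages.pop()  # allocate on first write
            # ... write data to the mapped physical page ...

        def utilization(self):
            used = len(self.page_map)
            return used / (used + len(self.free_pages))

Because pages are only backed on first write, physical utilization stays close to the data actually stored, which is where the increased-utilization benefit comes from.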

Vendors like Hitachi Data Systems, however, who can combine thin provisioning with storage virtualization, can provide thin provisioning for existing storage without the need to rip and replace. Hitachi Data Systems can reclaim 40% or more of existing storage capacity on external storage systems just by attaching them to the USP V, discovering their LUNs, and moving them into a Dynamic Provisioning pool, which can reside internal to the USP V or on external storage systems.
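A rough sketch of why reclamation is possible: when a discovered LUN's contents are migrated into a thin pool, pages that were allocated but never written (all zeros) need no physical backing. This reuses the hypothetical ThinPool above; it is illustrative, not Hitachi's actual migration logic.

    def migrate_lun_to_pool(lun_pages, pool, volume_id):
        """lun_pages: iterable of (virtual_page_index, page_bytes) from a discovered LUN."""
        reclaimed = 0
        for vpage, data in lun_pages:
            if not any(data):  # all-zero page: skip it, reclaiming the space
                reclaimed += 1
            else:
                pool.write(volume_id, vpage * PAGE_SIZE, data)
        return reclaimed  # pages that no longer consume physical capacity

If 40% of a LUN's pages were never written, roughly 40% of its capacity returns to the pool.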

While thin provisioning has many benefits, the management of smaller chunks or pages of data places a higher demand on the cache and processing power of the storage system, which may impact performance, especially when combined with other storage functions like copies, moves, and replication.

On top of the increased workload from thin provisioning, the virtualization of servers will increase the I/O processing load dramatically. Instead of one operating system per processor, we may now have 5 to 10 operating systems per virtual server platform. Since each operating system fires off I/Os to what are essentially file shares in a virtual machine file system, the I/O from this VM file system tends to be very random, which places an even higher workload on the cache and disks of a storage system. While the wide striping of thin provisioning helps to distribute the I/O across more disk arms, there is more metadata processing to support this.
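A toy model shows the wide-striping effect: if consecutive pool pages are placed round-robin across all disks, random VM I/O spreads nearly evenly over every spindle. This sketches the general technique, not the USP V's actual layout.

    import random
    from collections import Counter

    NUM_DISKS = 64

    def page_to_disk(physical_page):
        return physical_page % NUM_DISKS  # round-robin page placement

    # 10,000 random page reads land almost evenly across all 64 disks:
    hits = Counter(page_to_disk(random.randrange(1_000_000)) for _ in range(10_000))
    print(min(hits.values()), max(hits.values()))  # counts cluster around 10000/64, about 156

The cost is the page map itself: every I/O now needs a metadata lookup, which is the extra processing the next paragraph addresses.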

The USP V eliminates most of the overhead of metadata management by using pages instead of chunks and chunklets, and by managing metadata in a separate cache accessed over separate buses from the data cache.

Unlike other thin provisioning storage products, the USP V is an enterprise storage system that can scale up as well as scale out. The USP V can scale up to support the increasing load from a VM file system by adding processors through alternate paths, all working on the same image of the file system in the USP V's large global cache. It can also dynamically add to the global cache and grow the Dynamic Provisioning pool if more spindles are required. The USP V can start with 32 processors and scale out to 128 processors. It can also loosely couple to another USP V, or to the next-generation USP V, for non-disruptive tech refresh. Capacity can be scaled out to petabytes by adding external storage systems virtualized behind the USP V.

The USP V has the ability to scale up and out to support server virtualization and thin provisioning. With storage virtualization, it can enhance existing storage assets with these new services for data center consolidation. It is also designed to couple to the next-generation USP V without application downtime. This type of flexibility makes the USP V ideal for supporting data center virtualization now and in the future.


Comments (2)

Vinod Subramaniam on 18 Jan 2010 at 8:21 am

Hu

2009 was the year when server and storage virtualization adoption increased worldwide.

However, IT management is now looking closely at the economic effectiveness of virtualization, and there seem to be no tools that give you metrics on that effectiveness.

I think server and storage vendors do need to provide tools that measure the effectiveness of virtualization in an easy-to-understand fashion.

One way of doing this would be to use headroom charts as outlined by Neil Gunther at http://www.perfdynamics.com

For an HDS storage array, take a very simplistic example.
At 10% write pending (WP), the I/O response for writes is probably < 2 ms. As the WP increases to 60%, this is going to bump up to around 10-15 ms. So available headroom when WP is 20% is 33%.
This is a simple M/M/1 queue.
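A minimal sketch of that calculation, using the textbook M/M/1 response-time formula R = S / (1 - rho) with write pending standing in for utilization rho; the service time and the reading of the 33% figure are assumptions for illustration, not measured HDS numbers.

    def mm1_response(service_ms, rho):
        # M/M/1 response time: R = S / (1 - rho), rising steeply near saturation
        assert 0 <= rho < 1
        return service_ms / (1.0 - rho)

    S = 1.8  # ms, an assumed raw write service time
    for wp in (0.10, 0.20, 0.60):
        print(f"WP {wp:.0%}: response ~{mm1_response(S, wp):.1f} ms")

    # One reading of the 33% figure: at WP = 20%, a third of the 60%
    # write-pending ceiling is already consumed (0.20 / 0.60 = 33%).
    print(f"consumed: {0.20 / 0.60:.0%} of the WP ceiling")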

Neil Gunther's models use queueing theory to calculate available headroom in a multi-queue scenario.

It would be nice to see an available-headroom trend that can be plotted on a single graph for all HDS assets owned by a company.

Hope this makes sense
Vinod

Sim on 08 Feb 2010 at 2:58 am

Does the loose coupling capability to the next-gen USP V also apply to the USP VM?
