
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Where will storage virtualization take us?

by Hu Yoshida on Jan 9, 2009

Stephen Foskett, aka Pack Rat, posed this question in his blog last month: “Where will virtualization of the data center infrastructure take us?” http://blog.fosketts.net/2008/12/14/virtualization-data-center-infrastructure/

He points out that the “Implementation of virtualization technology to date has merely delivered condensation of physical resources: 250 physical servers are condensed onto 20 physical servers, but 250 virtual server images remain.” While this has led to “moderate” cost savings due to reduced rack space, power, and cooling, this is not an example of real consolidation or business transformation.

When it comes to storage virtualization, he acknowledges there is some cost avoidance that comes from more efficient use of storage capacity, quicker provisioning, enhanced migration, and heterogeneous replication. However, he does not see real cost savings in the face of constant data growth.

I agree with Stephen if the virtualization he is referencing is limited to consolidation and enhanced utilization. Analyst firms such as IDC and Gartner have called this virtualization 1.0, as opposed to virtualization 2.0.

The next level of virtualization, virtualization 2.0, goes beyond consolidation and is able to transform the data center. It fulfills the SNIA definition of storage virtualization: “The application of virtualization to storage services or devices for the purposes of aggregating, hiding complexity, or adding new capabilities to lower level storage resources.”

Control unit based virtualization provides virtualization 2.0 because it can aggregate storage services within an enterprise control unit, combine them with SAN or network attached storage services such as VTL and virtual server file systems, and apply them to the storage that it virtualizes. It hides the complexity of external storage and can add new capabilities to lower level storage systems. This type of storage virtualization is not limited to SAN attached servers; it can also support mainframes, and it is adaptable to future connectivity protocols by simply changing the front end ports. Unlike SAN based storage virtualization, which runs in appliances, control unit based storage virtualization has the performance and functionality of a reliable, scalable, enterprise-class, multiprocessor control unit.
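
To make the aggregation idea concrete, here is a minimal, hypothetical Python sketch of a control unit that presents virtual volumes to hosts, maps them onto LUNs in heterogeneous external arrays, and layers a service (a toy replication call) on storage that may not offer it natively. All class, method, and array names here are illustrative assumptions, not HDS product interfaces.

# Hypothetical sketch only: a virtualization control unit that aggregates
# external arrays, hides which array backs a volume, and adds a service
# (replication) on top of lower-level storage.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class ExternalArray:
    """A lower-level storage system virtualized behind the control unit."""
    name: str
    tier: str                                # e.g. "enterprise" or "commodity"
    luns: Dict[str, int] = field(default_factory=dict)   # LUN id -> size in GB

    def provision_lun(self, lun_id: str, size_gb: int) -> str:
        self.luns[lun_id] = size_gb
        return lun_id


class VirtualizationControlUnit:
    """Aggregates external arrays; hosts see only virtual volumes."""

    def __init__(self) -> None:
        self.arrays: List[ExternalArray] = []
        # volume name -> list of (array, lun_id) pairs backing it
        self.volumes: Dict[str, List[Tuple[ExternalArray, str]]] = {}

    def attach_array(self, array: ExternalArray) -> None:
        self.arrays.append(array)

    def create_volume(self, name: str, size_gb: int, tier: str) -> None:
        # Hide backend complexity: pick any attached array of the desired tier.
        array = next(a for a in self.arrays if a.tier == tier)
        lun_id = array.provision_lun(f"{name}-lun", size_gb)
        self.volumes[name] = [(array, lun_id)]

    def replicate(self, name: str, target_tier: str) -> None:
        # Add an "enterprise" capability to lower-level storage: copy the
        # volume's backing LUNs onto another attached array.
        target = next(a for a in self.arrays if a.tier == target_tier)
        for array, lun_id in list(self.volumes[name]):
            copy_id = target.provision_lun(f"{lun_id}-copy", array.luns[lun_id])
            self.volumes[name].append((target, copy_id))


if __name__ == "__main__":
    vcu = VirtualizationControlUnit()
    vcu.attach_array(ExternalArray("EnterpriseBox", "enterprise"))
    vcu.attach_array(ExternalArray("CommodityBox", "commodity"))
    # Data lands on commodity storage yet gains a service supplied by the
    # virtualization layer in front of it.
    vcu.create_volume("payroll", 500, tier="commodity")
    vcu.replicate("payroll", target_tier="enterprise")
    print([(a.name, lun) for a, lun in vcu.volumes["payroll"]])

In this toy model the host-facing volume never changes when its data is copied to, or later moved between, backend arrays, which is the property that makes the commodity tier usable for enterprise workloads.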

So what is transformational about this approach? It is able to provide one storage platform for all data and all servers over standard protocols. It can eliminate most of the need for expensive, enterprise-class storage capacity without sacrificing enterprise functionality and performance. It can also enhance midrange storage with enterprise capability without ripping and replacing all the midrange storage. This approach to virtualization commoditizes enterprise and midrange storage without sacrificing enterprise functionality, availability, and performance. This is a transformational change on the order of “minicomputers and the spread of open systems,” an example that Stephen gives in his post.

So, where will storage virtualization 2.0 take us? Future data centers will have 80% or more of their data on commodity modular storage systems behind an enterprise-class virtualization storage controller. Virtualization 2.0 will provide common enterprise services for all data: to manage, protect, condense, archive, and retrieve it, and to eliminate it when it is no longer needed.


Comments (3)

Stephen Foskett on 09 Jan 2009 at 2:03 pm

Thank you for the thoughtful reply, Hu!

Although I was referring to virtualization of all of IT infrastructure, I certainly believe it will impact storage in a major way. Many have argued that VMware, for example, makes the brand value of Dell, HP, and IBM redundant, since it hides the uniqueness and value add in a universal platform.

The same could be said of pervasive storage virtualization. If the only value added at the true “storing bits” layer is performance and stability, the balance of power shifts dramatically from the array to the virtualizer. And if that virtualization solution is really cross-platform (including mainframe) and adds solid advanced features (host integration, replication, etc) then the owner of that platform has a real asset on their hands. But more importantly, they cut the legs off of anyone offering non-virtualized arrays…

Sounds like a business plan!

Stephen

Hu Yoshida on 12 Jan 2009 at 2:20 pm

Hello Steve, thanks for the comment and for your interesting blog at http://blog.fosketts.net/

[...] Storagebod is an Enterprise end user – I have discussions with him via the blogosphere and twitter.  He is in the trenches and has hands-on knowledge of the demands of high end storage.  He recently responded in his blog – Virtualisation 2.Oh Give It A Rest - to Hu Yoshida’s blog – “Where will storage virtualization take us?”  Since I find it hard to resist inserting myself into this kind of dialog – here is my take. [...]
