Hu Yoshida's Blog - Vice President | Chief Technology Officer


Is Virtualization Overrated?

by Hu Yoshida on Apr 29, 2007

Last week Beth Pariseau, a news writer for SearchStorage.com, published an interview with Jay Kidd of NetApp under the title: “NetApp VP says storage virtualization over rated.”

 

I read the article, and while Jay did not come out and say those words, he did say that file virtualization was a niche market for lease-end migration and that most of the vendors in that space were not making money. While I agree with Jay that many of the vendors in this space have not been successful, I believe that is because of limitations in their virtualization solutions. If virtualization is implemented properly, it should be able to provide non-disruptive copies for concurrent backup, replication of consistency groups for business continuance, consolidation with safe multi-tenancy, partitioning for QoS, audit logging for compliance, data mobility without reboot, and tiered storage for lifecycle management, in addition to device migration. Virtualization can provide increased scalability, availability, and performance while simplifying management and reducing configuration complexity.
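To make one item on that list concrete, tiered storage for lifecycle management comes down to a policy that decides which back-end tier should hold a virtual volume, with the virtualization layer doing the actual move behind the same volume identity. The sketch below is purely illustrative; the thresholds, tier names, and functions are hypothetical and are not an HDS feature or API.

```python
# Minimal sketch of a lifecycle/tiering policy on top of virtualized storage.
# Hypothetical illustration only -- thresholds and names are made up.
from datetime import date, timedelta

TIER_POLICY = [
    (timedelta(days=30),  "tier1-fc15k"),    # hot data stays on fast FC disk
    (timedelta(days=180), "tier2-sata"),     # cooling data moves to SATA
    (timedelta.max,       "tier3-archive"),  # cold data goes to an archive tier
]

def target_tier(last_accessed: date, today: date) -> str:
    """Return the back-end tier a volume should live on, based on data age."""
    age = today - last_accessed
    for limit, tier in TIER_POLICY:
        if age <= limit:
            return tier
    return TIER_POLICY[-1][1]

# Because the volumes are virtualized, moving between tiers is just a
# non-disruptive migration behind the same virtual volume identity.
volumes = {"vvol_logs": date(2007, 1, 10), "vvol_oracle": date(2007, 4, 20)}
for vvol, last_access in volumes.items():
    print(vvol, "->", target_tier(last_access, date(2007, 4, 29)))
```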

 

While Jay says that the NetApp V-Series is a direct competitor to the Hitachi TagmaStore for data center virtualization, he does say: “Where we caution people is, if you move toward the level of having a single device, let’s say it’s virtualizing six arrays behind it, that device needs to have the performance and availability of the aggregate — it has to be six times better than each array behind it.”

 

I agree. The control unit that virtualizes other storage behind it must be able to scale in performance, connectivity, availability, and bandwidth. The Hitachi Universal Storage Platform is made up of 128 processors with a large global cache that is accessed over four high-speed crossbar switches. For connectivity, it has 192 Fibre Channel ports, each of which can be virtualized into 1024 virtual ports with separate address spaces for safe multi-tenancy.
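As a rough illustration of what those virtual ports provide (a simplified model with made-up names, not the actual USP implementation): each physical port can present many virtual ports, and each virtual port carries its own private LUN numbering, so two tenants can both see a “LUN 0” without ever seeing each other’s volumes.

```python
# Simplified illustration of per-virtual-port LUN address spaces for multi-tenancy.
# Hypothetical model with made-up names -- not the actual USP implementation.

PHYSICAL_PORTS = 192
VIRTUAL_PORTS_PER_PHYSICAL = 1024
print("possible virtual ports:", PHYSICAL_PORTS * VIRTUAL_PORTS_PER_PHYSICAL)  # 196,608

class VirtualPort:
    """One tenant-facing port with its own private LUN numbering."""
    def __init__(self, name):
        self.name = name
        self.lun_map = {}      # tenant-visible LUN number -> internal volume

    def map_lun(self, lun_number, internal_volume):
        self.lun_map[lun_number] = internal_volume

# Two tenants on the same physical port, each with its own "LUN 0".
tenant_a = VirtualPort("CL1-A-v001")
tenant_b = VirtualPort("CL1-A-v002")
tenant_a.map_lun(0, "vol_payroll_01")
tenant_b.map_lun(0, "vol_webfarm_01")

# The address spaces are isolated: the same LUN number resolves to different volumes.
assert tenant_a.lun_map[0] != tenant_b.lun_map[0]
```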

 

The NetApp V-Series is a cluster of two processors with separate caches and 16 FC ports, which are not virtualized. It is designed for volume pooling, which does not solve the problem of networking storage to storage, as I described in my previous blog post. If you apply the recommendation of “n” times the performance and availability of the aggregate storage arrays behind it, that does not leave much room for virtualization.
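To put rough, illustrative numbers on Jay’s aggregate argument (these are my own back-of-the-envelope assumptions about port counts and circa-2007 4 Gb/s Fibre Channel speeds, not vendor benchmarks):

```python
# Back-of-the-envelope comparison only. All figures are assumptions (roughly
# 400 MB/s usable per 4 Gb/s FC port), and in reality the USP's ports are shared
# between host-facing and array-facing roles, so this is deliberately crude.
PORT_MBPS = 400              # assumed usable throughput per 4 Gb/s FC port

arrays_behind = 6            # Jay's example: six arrays virtualized behind one device
ports_per_array = 8          # assumed front-end ports on each mid-range array
aggregate_demand = arrays_behind * ports_per_array * PORT_MBPS

usp_ports = 192              # Universal Storage Platform FC ports (from this post)
v_series_ports = 16          # NetApp V-Series cluster FC ports (from this post)

print(f"aggregate demand of six arrays: {aggregate_demand:,} MB/s")
print(f"USP raw port bandwidth        : {usp_ports * PORT_MBPS:,} MB/s")
print(f"V-Series raw port bandwidth   : {v_series_ports * PORT_MBPS:,} MB/s")
```

The figures are deliberately crude; the point is only that the virtualizing device’s front-end and back-end bandwidth has to be sized against the aggregate demand of everything sitting behind it.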

 

The primary benefit of storage virtualization is the same as it is for server virtualization: consolidation. Storage consolidation cannot be achieved without the ability to move data between storage arrays non-disruptively. While SANs consolidate server access to storage, they do not consolidate the storage itself, and adding virtualization in the SAN will not achieve storage consolidation. Virtualization must be done in the control unit, where the cache images of storage can be accessed and processed concurrently. But if you do virtualization in a control unit, it must have orders of magnitude greater performance, connectivity, availability, and bandwidth than the storage behind it.
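Here is a simplified, in-memory model of why the control unit is the natural place to do this: during a migration it keeps serving host I/O against the virtual volume, mirrors new writes to both the old and new back-end LUNs while a background copy runs, and then atomically re-points the mapping. This is an illustrative sketch with hypothetical names, not the actual USP microcode.

```python
# Simplified, in-memory model of non-disruptive volume migration inside the control unit.
# Illustrative sketch only, with hypothetical names -- not the actual USP microcode.

class ControlUnit:
    def __init__(self):
        self.backend = {}      # back-end LUN name -> bytearray standing in for its blocks
        self.vvol_map = {}     # virtual volume -> LUN name that currently backs it
        self.mirroring = {}    # virtual volume -> (source LUN, target LUN) during migration

    def host_write(self, vvol, offset, data):
        # Hosts always address the virtual volume. During a migration, writes are
        # mirrored to both the old and new back-end LUNs so the copy can converge
        # without ever pausing the application.
        luns = self.mirroring.get(vvol, (self.vvol_map[vvol],))
        for lun in luns:
            self.backend[lun][offset:offset + len(data)] = data

    def migrate(self, vvol, target_lun, chunk=4096):
        source_lun = self.vvol_map[vvol]
        self.backend[target_lun] = bytearray(len(self.backend[source_lun]))
        self.mirroring[vvol] = (source_lun, target_lun)    # start mirroring new writes
        src = self.backend[source_lun]
        for off in range(0, len(src), chunk):              # background copy
            self.backend[target_lun][off:off + chunk] = src[off:off + chunk]
        self.vvol_map[vvol] = target_lun                   # atomically re-point the mapping
        del self.mirroring[vvol]                           # migration complete, no host outage

cu = ControlUnit()
cu.backend["arrayA_lun12"] = bytearray(b"production data".ljust(32))
cu.vvol_map["vvol_01"] = "arrayA_lun12"
cu.migrate("vvol_01", "arrayB_lun07")    # e.g. a lease-end move to a new array
print(bytes(cu.backend["arrayB_lun07"][:15]))
```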

 

If you have this, then storage virtualization is not overrated. The best measure of the relevance of virtualization is the market numbers: we have installed more than 4,500 units of the Hitachi USP and its midrange version, the NSC.


Comments (6)

[...] There are plenty of technical and operational benefits to storage virtualization; some of these links can take you to those sites. [...]

RPell on 23 Jul 2007 at 7:56 pm

It baffles me that one might need a virtualization engine to be more powerful than the aggregate of the engines it is virtualizing. In the phone company, for instance, one core PBX is not more capable than the aggregate of the edge switches. The same idea holds true for all “systems” (whether it is banking, plumbing, telephone switching, highways, etc.) that must adhere to some degree of over-subscription. NetApp’s V-Series, somewhat similar to the IBM SVC and EMC Invista, does not have as much capability as the aggregate of the sibling connected devices, because the idea of virtualization is to take advantage of a larger pool of resources that are generally under-utilized. In ANY environment where this architecture is deployed, resources (such as storage) are not being leveraged to the best of their ability.

The argument is clear when you take a look at any generally deployed storage array, such as your TagmaStore USP with 128 processors. How many of those processors are being utilized over 50%? How many are over 50% for 10% of the day? 20% of the day? 80% of the day? The answer is: not many. Customers are not using the bandwidth of the SAN and the monolithic storage arrays that they have today, which is why virtualization engines are being chosen by cost-conscious datacenter managers for allowing greater ACCESS to MORE of “the Company’s” data.

The Web has demanded greater access to that data: not necessarily heavy, streaming, screaming applications, but lightweight access patterns with lightweight protocols (HTTP, FTP, DAFS, SSH, etc.). The most unfortunate part of HDS’s story is that they have not gotten that part straight yet. HDS continues the love affair with FCP and FICON media. Data is captive in these ginormous (love that new word :-) arrays, just dying to get out and be loved by others. Therein lies the desire for freedom…a tunnel to get the data out to others…a virtualization engine that provides more than just Fibre Channel or mainframe connections. And you know what, HDS DID figure this out already, hence the OEM partnership with BlueArc (in fact, exactly the same concept as the NetApp V-Series).

So let’s not spoil it for NetApp; they’ve got to be doing something right with 35% year-on-year growth and NO monolithic storage offering today or anytime in the future.

With greatest respect,
Rpell

Alanat Coop News » Thoughts on File Virtualization on 19 Nov 2007 at 5:22 am

[...] Thoughts on File Virtualization, May 11th, 2007. File virtualization is getting a little attention in the blogosphere. Yesterday http://blogs.hds.com/hu/2007/04/is_virtualization_over_rated.html, repeating an observation that Jay Kidd from Network Appliance made in http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1252978,00.html, where he said that the performance and scalability characteristics of the file virtualizing system(s) need to be equal to the aggregate performance and scalability of the NAS/server systems that are “behind” it. I don’t agree completely with Hu and Jay, because the performance of the virtualizing system(s) only has to be equivalent to the aggregate throughput and I/O demands of the NAS/server systems behind it. One of the hypothetical benefits of virtualization is the ability to load balance resources that, by themselves, are unevenly utilized. The performance requirements for file virtualization are driven by the clients, not the servers participating in the FAN (file area network). Chuck Hollis from EMC http://chucksblog.typepad.com/chucks_blog/2007/04/more_virtual_th.html, where he quoted this Mark Twain one-liner: “But don’t sacred cows make the best hamburgers?”, which preceded his argument that virtualization belongs in the network. It wasn’t completely clear to me which sacred cows or hamburgers Chuck was referring to, but I’m wondering if there is a bloggers’ bible getting passed around that says quoting famous writers and thinkers gives your argument more credibility. [...]


nas storage servers review on 04 Aug 2010 at 10:46 pm

Certainly not overrated; it’s 2010 now and virtualization is getting hotter.
