Thin versus Thick Virtualization

by Hu Yoshida on Aug 18, 2006

Definitions for storage virtualization have been confusing at best. Remember Gartner's symmetrical and asymmetrical virtualization, then in-band and out-of-band virtualization? I believe virtualization should be categorized by where it resides in the storage stack. Hitachi storage virtualization resides in the TagmaStore storage controller. Unfortunately, we have added to the confusion by naming the modular version of the TagmaStore storage controller the NSC, or Network Storage Controller. In reality its virtualization has nothing to do with the network. It does support FC and IP network attachment, but it also supports direct attach and ESCON/FICON mainframe attach.

I do like Dave Hitz's introduction of the concepts of thin and thick virtualization in his post entitled "Avoiding Vendor Lockin with Storage Virtualization," since it adds the dimension of storage capability or functionality.

He defines thin virtualization as products that provide niche capabilities like migration or a global namespace but have limited virtualization capabilities that do not support extended storage functions like snapshots, cloning, or replication. Thin virtualization may add a few attractive features, but it won't make storage from different vendors look the same for commonly required storage functions.

Thick virtualization provides the full set of storage functions. Under this definition he includes the Hitachi TagmaStore USP/NSC and the NetApp V-Series using ONTAP. Dave points out that while thick virtualization makes all the storage look the same, it does cause lock-in to the thick virtualization vendor, since all the storage functions would be implemented through that vendor's management interfaces. I would argue that thin virtualization vendors create even more vendor lock-in, since the mapping between physical and virtual extents is locked up in a mapping table controlled by the thin virtualization appliance.

If the business value of virtualization is to provide a common way to manage storage functions across heterogeneous storage systems, then there will be lock-in to that common way, whatever it is. The Hitachi approach provides the next best thing to avoiding vendor lock-in: because we do not require reformatting of externally attached storage, that storage can go back to native mode. Storage can be disconnected (unlocked) and reused in native mode if you don't like the storage functions that our controller provides.
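To make the mapping-table argument concrete, here is a minimal sketch in Python. It is purely illustrative; names such as ExtentMap and ExternalLun are invented and do not describe any vendor's actual implementation. The first structure shows why data scattered by an appliance-held map is unreadable without that map; the second shows a pass-through external LUN whose native layout is preserved, so it can be disconnected and read directly.

    from dataclasses import dataclass

    @dataclass
    class ExtentMap:
        """Appliance-held table mapping virtual extents to (array, physical extent)."""
        table: dict

        def resolve(self, virtual_extent):
            # Without this table, data spread across arrays cannot be reassembled.
            return self.table[virtual_extent]

    @dataclass
    class ExternalLun:
        """External LUN virtualized without reformatting: the mapping is the identity."""
        array: str

        def resolve(self, virtual_extent):
            # 1:1 pass-through, so the LUN can be detached and read natively.
            return (self.array, virtual_extent)

    # A thin-virtualization appliance might scatter one volume across two arrays:
    appliance_map = ExtentMap({0: ("array-A", 17), 1: ("array-B", 3), 2: ("array-A", 18)})
    print(appliance_map.resolve(1))   # ('array-B', 3) -- unknowable without the map

    # A pass-through external LUN keeps its native layout:
    external = ExternalLun("array-B")
    print(external.resolve(1))        # ('array-B', 1) -- same block with or without the controller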

The storage functionality that we provide is unique to our storage controller architecture. No other product has functions like virtual ports and host storage domains, dynamic global cache configuration, logical partitioning, and Universal Replication, so there are no standards for the management and interoperability of these functions.

I agree with Dave that standards and interoperability are hard to attain, since customers also demand innovation to meet their changing storage requirements. That doesn't mean we give up on standards, but standards will always lag behind. All the arguments about what customers need, interoperability versus vendor lock-in, thin versus thick virtualization, will ultimately be settled in the marketplace. If a product doesn't meet real customer needs, customers won't buy it.


Comments (2)

Sara on 22 Aug 2006 at 6:33 pm

Yes, I agree with you very much. Only the things that are needed find a good market. Thanks for your professional article.

Mikko Flemmings on 31 Aug 2006 at 3:56 am

Thanks for your excellent article, another good professional insight into the storage industry.

I think that when it comes to virtualization, either thin or thick, most customers would like to have a unified management interface to do provisioning. Right now you need to use each vendor's tools to do the basic provisioning before you can present the storage to your virtualization layer.

When customers hear that they need to keep using their old tools to do provisioning, they just see it as more complexity and management overhead, and not the way they dreamt virtualization should be done.

This is how NetApp's Dave describes it in the article from his blog that you were referring to:
“I don’t like vendor lock-in, but I also don’t like managing different storage systems. I want ‘storage virtualization’ to make everyone’s storage look the same. Then I can buy storage from multiple vendors, but still manage it easily.”

I think Dave is right about the need for standards when it comes to provisioning your hardware, which is exactly what the SMI-S standard is good for.

So what customers really need is "whole virtualization" (another techno term), which means that on top of thick virtualization, as the TagmaStore USP provides, you get everything managed from one management application: provisioned from the same application on the underlying storage arrays and virtualized using the thick virtualization the product provides.

This should be the next step beyond thick virtualization. It would provide truly virtualized storage to the end user and would make the underlying arrays a true JBOD in management terms, since all the management would be done elsewhere.
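To illustrate the idea, here is a minimal sketch of what such a single management application might look like, assuming a vendor-neutral provisioning interface with one adapter per array (roughly the role an SMI-S provider would play). All class and method names here are hypothetical.

    from abc import ABC, abstractmethod

    class ArrayProvisioner(ABC):
        """Vendor-neutral provisioning interface used by the management application."""
        @abstractmethod
        def create_lun(self, size_gb):
            """Create a LUN on the array and return its identifier."""

    class VendorAProvisioner(ArrayProvisioner):
        def create_lun(self, size_gb):
            # In reality this would call vendor A's own tools or SMI-S provider.
            return f"vendorA-lun-{size_gb}gb"

    class VendorBProvisioner(ArrayProvisioner):
        def create_lun(self, size_gb):
            # In reality this would call vendor B's management API.
            return f"vendorB-lun-{size_gb}gb"

    def provision_for_virtualization(arrays, size_gb):
        # One call provisions every back-end array from one place,
        # ready to be presented to the virtualization layer.
        return [array.create_lun(size_gb) for array in arrays]

    print(provision_for_virtualization([VendorAProvisioner(), VendorBProvisioner()], 100))
    # ['vendorA-lun-100gb', 'vendorB-lun-100gb']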

I think this is the market perception of true virtualization, and when it is implemented you will see how virtualization really takes off.

Just my 0.02

Great blog, keep sharing your thoughts with us.
