What Storage Virtualization Cannot Sacrifice
by Hu Yoshida on Jun 26, 2010
There is increasing interest in storage virtualization, as seen in the growing number of articles and blog posts on the topic. In the last few days, Rick Vanover posted a very balanced overview of storage virtualization for Datamation, where he reviewed some of the many options. Carol Sliwa posted a Storage Pro Guide to block-based storage virtualization for SearchStorage, which cited some use cases. One of the use cases was the City of Coquitlan (Canada), a 2010 Computerworld Honors Laureate in IDG’s Computerworld Honors Program and a Hitachi customer.
While there is a lot of information on what storage virtualization can do and the many ways to accomplish the same result, there are some things that virtualization should not do to achieve those results. I think it is time to mention a few of these.
First, storage virtualization should not add complexity. It should not require you to tear apart the SAN to insert another layer of management complexity. For that reason, Hitachi’s storage virtualization solution resides in the storage controller and not in the SAN.
Second, it should not violate security practices. The secure connection between the initiator (server) and the target (storage) should be preserved. Virtualization approaches that sit outside the storage controller must either crack open Fibre Channel data packets or proxy the I/O request to determine whether it is a read or a write. A storage controller is the target to the initiator and can support FC-SP, which provides CHAP authentication between the initiator and the target.
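To make the authentication point concrete: FC-SP’s DH-CHAP builds on the classic CHAP challenge-response scheme (RFC 1994), in which the target issues a random challenge and the initiator proves it knows a shared secret without ever sending that secret over the wire. The sketch below shows only the core digest computation in Python; it is illustrative, not the FC-SP wire protocol, and the function names are my own.

```python
import hashlib
import os

def chap_challenge() -> tuple[int, bytes]:
    """Target side: issue a one-byte identifier and a random challenge."""
    return os.urandom(1)[0], os.urandom(16)

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Initiator side: response = MD5(identifier || secret || challenge),
    per RFC 1994. The secret itself never crosses the link."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes, challenge: bytes,
                response: bytes) -> bool:
    """Target side: recompute the digest with its copy of the secret
    and compare against the initiator's response."""
    return chap_response(identifier, secret, challenge) == response

# Example handshake with a hypothetical shared secret
ident, chal = chap_challenge()
resp = chap_response(ident, b"shared-secret", chal)
assert chap_verify(ident, b"shared-secret", chal, resp)
assert not chap_verify(ident, b"wrong-secret", chal, resp)
```

Because the controller is itself the target, it can complete this exchange natively; an external appliance sitting mid-fabric has to break or proxy that initiator-to-target relationship.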
Third, it should not degrade performance for your critical data. The virtualization process should improve, not degrade, the performance of your data. That can only be done if the virtualization engine is more powerful than the storage systems it virtualizes. It should also have its own internal high-performance cache and disks so that critical data can be tiered up from lower-performance storage systems.
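The tiering idea can be sketched as a simple frequency-based policy: track how often each block is accessed and promote the hottest blocks into a fixed-size fast tier (the virtualization engine’s own cache and disks). This toy Python model is my own illustration of the principle, not Hitachi’s actual tiering algorithm.

```python
from collections import Counter

class TieringEngine:
    """Toy model: count accesses per block and keep the hottest blocks
    on a capacity-limited high-performance tier."""

    def __init__(self, fast_tier_capacity: int):
        self.capacity = fast_tier_capacity
        self.access_counts = Counter()
        self.fast_tier = set()

    def record_access(self, block: int) -> None:
        self.access_counts[block] += 1

    def rebalance(self) -> None:
        """Promote the most frequently accessed blocks; everything else
        stays on (or is demoted to) the lower-performance external tier."""
        hottest = [b for b, _ in self.access_counts.most_common(self.capacity)]
        self.fast_tier = set(hottest)

# Simulate a skewed workload: blocks 7 and 3 are hot, the rest are cold
engine = TieringEngine(fast_tier_capacity=2)
for block in [7, 7, 7, 3, 3, 1, 2, 7]:
    engine.record_access(block)
engine.rebalance()
# → fast tier now holds blocks 7 and 3
```

A real array would rebalance on a schedule and move data at a coarser granularity (pages or extents), but the principle is the same: the engine’s internal resources absorb the hot working set so external arrays serve only cold data.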
Fourth, it should not have less functionality than the storage it virtualizes. If your external storage has advanced functions like distance replication or thin provisioning, the virtualization engine must provide the same functionality without additional complexity or performance overhead.
Fifth, it should not expose the privacy of your data or impact your quality of service. The value of storage virtualization is the ability to consolidate many storage users onto a common set of storage resources, but at the same time the virtualization engine must be able to protect each user from the bad or excessive behavior of other users who share the same resources. Our USP V/VM can partition users’ data as soon as it enters the storage port. The USP V/VM assigns each host to a separate host storage domain. Even though different hosts may be accessing the same storage port, they can be assigned their own LUN address space with its own priority setting, which can be changed dynamically. We can also partition the cache dynamically to ensure that one user does not dominate the cache at the expense of other users. These partitions ensure that there is no data leakage or escalation of management privileges.
So while different storage virtualization systems are available and may produce the same results, you also need to look at how they achieve those results. It may be at the sacrifice of simplicity, security, performance, functionality, or privacy.