A Loaded Question
by Hu Yoshida on Nov 1, 2008
Recently I was asked: “Why did Hitachi choose an in-band approach to SAN virtualization?”
This is a bit of a loaded question, like: “When did you stop beating your wife?” That question implies that you are beating your wife. How do you answer without appearing to agree with the premise or appearing evasive? It is easy to answer if you aren’t married: you can simply end the discussion by saying that you don’t have a wife.
In terms of the question I was asked, I don’t have to be dragged into the debates between in-band and out-of-band SAN virtualization, since Hitachi does not do SAN virtualization.
Hitachi does storage virtualization. SAN virtualization is dependent on the SAN for connectivity and information for mapping physical extents from different storage systems to virtual extents. Hitachi storage virtualization is not dependent on the SAN.
Storage virtualization is enabled by software in the Universal Storage Platform, which presents LUNs from externally attached storage systems through its enterprise-class storage control unit to servers that can be attached through SAN, DAS, NAS, ESCON, or FICON. There is no need to remap the LUNs and no need for a mapping table.
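The pass-through model described above can be sketched roughly as follows. This is a hypothetical illustration only; the class and attribute names are mine, not Hitachi's actual implementation or API. The point is that each external LUN is presented one-to-one, so there is no extent mapping table to consult on the I/O path.

```python
# Hypothetical sketch of 1:1 storage virtualization (names are
# illustrative, not a real HDS interface). The controller presents
# each external LUN directly, so no mapping table is needed.

class ExternalLUN:
    def __init__(self, array, lun_id, size_gb):
        self.array, self.lun_id, self.size_gb = array, lun_id, size_gb

class VirtualizingController:
    """Front-end control unit that passes external LUNs through 1:1."""
    def __init__(self):
        self.devices = {}  # internal device number -> external LUN

    def attach(self, ldev, ext_lun):
        # 1:1 pass-through: block N on the virtual device is block N
        # on the external LUN -- nothing to remap, nothing to look up.
        self.devices[ldev] = ext_lun

    def read(self, ldev, block):
        ext = self.devices[ldev]
        return (ext.array, ext.lun_id, block)  # same block address

ctl = VirtualizingController()
ctl.attach(0x00, ExternalLUN("external-array-1", 12, 500))
print(ctl.read(0x00, 4096))  # -> ('external-array-1', 12, 4096)
```

Because the address translation is the identity, the front end can speak any protocol the control unit supports without changing how the external storage is addressed.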
If FCoE or other protocols should happen to replace the SAN in the future, it would have no impact on storage virtualization, whereas SAN virtualization would have to make major changes to switch from FC to Ethernet. For Hitachi it would only mean replacing the front-end ports with FCoE ports or Converged Network Adapters.
Unfortunately SAN virtualization is often confused with storage virtualization, and the limitations of SAN virtualization like complexity, performance degradation, interoperability, and scale are applied to Hitachi’s storage virtualization.
With the USP there is no complexity of remapping LUNs. Instead of degrading performance, the USP provides a fast front end with a large global cache, which improves performance for lower-tier storage. Interoperability problems are not an issue, since the virtualization is not done in the SAN. And with 128 processors, 256 GB of global cache, and tens of thousands of virtual host connections, scalability is not a problem either.
So don’t confuse storage virtualization with SAN virtualization, and don’t be tricked by loaded questions that try to make them look the same.
Comments (7)
Hu, I think you are misusing a term. SAN virtualization is creating V-SANs, at a logical port level. Nothing to do with “storage virtualization” as such.
Barry, please read my post again. What the SVC and other SAN-based virtualization appliances are doing is not storage virtualization. You do not virtualize existing storage LUNs. You require the storage to be reformatted into fixed extents or managed disks that are remapped in the SAN to virtual disks that are presented to the servers. You have virtualized the SAN to look like storage. If the SAN goes away, would you still have storage virtualization?
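The extent remapping being described here can be sketched in rough terms as follows. This is a hypothetical illustration; the names and the extent size are mine, not the SVC's actual API or internals. The contrast with the pass-through model is that every I/O must go through a mapping table that translates virtual-disk extents to managed-disk extents.

```python
# Hypothetical sketch of extent-based remapping as done by SAN
# virtualization appliances (names illustrative, not the SVC's API):
# backend LUNs are carved into fixed-size extents, and a mapping
# table translates each virtual-disk extent to a managed-disk extent.

EXTENT_MB = 16  # assumed fixed extent size for the sketch

class VirtualDisk:
    def __init__(self):
        # vdisk extent index -> (managed disk, extent index on it)
        self.extent_map = []

    def add_extent(self, mdisk, mdisk_extent):
        self.extent_map.append((mdisk, mdisk_extent))

    def translate(self, offset_mb):
        # every I/O consults the mapping table
        idx, within = divmod(offset_mb, EXTENT_MB)
        mdisk, mext = self.extent_map[idx]
        return (mdisk, mext * EXTENT_MB + within)

vdisk = VirtualDisk()
vdisk.add_extent("mdisk3", 7)  # vdisk extent 0 lives on mdisk3
vdisk.add_extent("mdisk1", 2)  # vdisk extent 1 lives on mdisk1
print(vdisk.translate(20))     # -> ('mdisk1', 36)
```

The mapping table is exactly what allows extents from different backend arrays to be stitched into one virtual disk, and it is also what has to be preserved and served on every I/O.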
[...] While I certainly agree that storage virtualization can increase utilization of storage, it is important to differentiate storage virtualization from SAN virtualization, where the SAN is virtualized to look like storage. With SAN-based virtualization, you may not see a significant increase in utilization and you may see an increase in operational expenditures. [...]
I don’t work for IBM/EMC/NetApp, I’m an end-user, and I’m afraid I have to agree that Barry is right about what the common parlance for Storage Virtualisation is. I’ve pulled Chuck Hollis up in the past for trying to redefine what Storage Virtualisation is.
Firstly, in SVC’s case revirtualising to use FCoE should be pretty simple for them: just pull the FC cards out and replace them with FCoE cards. Actually, I suspect it would be technically possible for SVC to virtualise FC-presented LUNs or extents and represent them as iSCSI LUNs.
And as for this continued comment from HDS: you can reformat into fixed extents if you want, but you don’t actually have to. IBM make a fair amount of services revenue using the SVC as a pure migration device to go from one array to another. They spent a lot of time ensuring that you can go into SVC for migration and come back out again. You can maintain your pdisk-to-vdisk relationship if you wish. I’m sure Barry will correct me if my understanding is wrong.
Hello Martin. Good to hear from an end user. Unfortunately the “common parlance” for storage virtualization is misleading, and I am trying to clarify the definition. There is a big difference between virtualization of existing storage, which can be accessed by any storage protocol, and virtualization of storage that is limited to the FC SAN. I am open to any suggestions.
When you are sitting in the middle of a network, changing to FCoE may not be as simple as swapping out the cards. The SVC has a limited number of ports and requires a front-end network to fan in the connections from the servers and a back-end network to fan out to the storage systems. The front end acts as targets and the back end acts as initiators, and both networks have to handle congestion, QoS, and state change notifications.
I would like to know more about the migration. Can SVC migrate existing LUNs without disruption to the application and no impact to performance?
A quick look at the SVC documentation or a conversation with any SVC user will show you that SVC migrates LUNs without disruption to the application and with no impact on performance.
The data from a migrating Virtual Disk is moved from one Managed Disk Group to another. The front end appears no different to the user. They read and write from the same LUN presented from the same WWPNs. All the changes happen on the backend.
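In rough terms, the migration described here can be sketched as follows. This is a hypothetical illustration of the general technique, with names of my own invention, not the SVC's actual code: the virtual disk keeps its host-visible identity while each extent's map entry is repointed to the new managed disk group after its data is copied.

```python
# Hypothetical sketch of extent migration behind a stable front end
# (illustrative only, not the SVC's internals): the vdisk keeps its
# identity while each extent is copied to the new managed disk group
# and its map entry is repointed.

class VDisk:
    def __init__(self, identity, extents):
        self.identity = identity          # host-visible; never changes
        self.extent_map = list(extents)   # idx -> (mdisk group, location)

    def migrate(self, new_group):
        for i, (old_group, loc) in enumerate(self.extent_map):
            # copy the extent's data old_group -> new_group (elided),
            # then repoint the map entry; hosts keep doing I/O against
            # the same identity throughout.
            self.extent_map[i] = (new_group, loc)

vd = VDisk("vdisk-wwpn-1", [("groupA", 0), ("groupA", 1)])
vd.migrate("groupB")
print(vd.identity)    # -> vdisk-wwpn-1  (front end unchanged)
print(vd.extent_map)  # -> [('groupB', 0), ('groupB', 1)]
```

Because hosts address the unchanged front-end identity, the mapping-table update is invisible to them; the real engineering is in copying extents while serving concurrent I/O, which the sketch elides.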
This is one of the fundamental value propositions of the SVC and has been from the start. I’m not sure why there is an implication that it is not since this is a demonstrable fact and is being used by many, many SVC customers.
Umm, I don’t think any clarification is really needed. In fact, I think people trying to clarify is just plain obfuscation. SVC, USP, StorageAge, vSeries etc. are actually very similar products; arguably Invista and Incipient are quite different.
NetApp vSeries allows me to access my storage via FC block or iSCSI, and can do NAS as well; it is disruptive to do so, granted. It is fairly hard to find an array which doesn’t support iSCSI and FC transparently. I suspect SVC probably could, but there’s not been the demand to turn it into an iSCSI/FC gateway.
SVC can migrate existing LUNs with minimal disruption; you do have to take a small outage. I would suggest that you read Barry Whyte’s blog, as obviously he is the expert. But I would suggest that perhaps you tell your sales-guys to stop propagating the myth that to use SVC, you must re-layout your storage etc. That has never been the case, and it plain destroys credibility when sitting in front of an end-user who might actually have bothered to do their research.
And the window to natively support such things as FCoE is probably quite long; we are going to be using top-of-rack solutions for some time and not going end-to-end FCoE. It is a cool thing to have your storage controllers supporting FCoE, but not especially useful at the moment. But I don’t see any huge problems for IBM in SVC supporting FCoE, and it does appear that they can keep scaling wide in the cluster if required.
So instead of attempting to redefine what Storage Virtualisation is, perhaps just concentrate on making your products better and better. Let the end-users define and decide how we categorise them?