
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Storage Virtualization or SAN Volume Controller

by Hu Yoshida on Nov 10, 2008

Recently there has been quite a bit of back and forth between Barry Whyte and me. In an effort to clear up any misconceptions and state our differing positions more clearly, I’ve outlined a few points that should help our understanding.

As a recap, storage virtualization with the USP V can virtualize storage for direct attached servers and mainframes, as well as SAN attached servers. Based on Barry’s comments, you would think there is no difference from what IBM does with its SAN Volume Controller. I believe there is a big difference between the USP V and the SVC and the other virtualization examples he cites, since they can only provide virtualization for SAN attached storage and servers.

Barry was good enough to give me some use cases in a comment to my last post. This helps to clarify our differences.

Here are my comments on Barry’s use cases:

1. The first use case was how we virtualize existing LUNs. Barry claims there is no difference. I claim there is a big difference. While there is a disruption to the application when we disconnect a storage system and reconnect it through the USP, we discover the LUNs and present them back to the application from the USP cache. There is no need to create an “image mode virtual disk” and do a 1-2-1 mapping in a mapping table which becomes a single point of failure. If you lose the mapping table, wouldn’t it be difficult to find your LUN unless you knew which LUNs were mapped 1-2-1 and which weren’t?
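To make the mapping table concern concrete, here is a minimal sketch (Python, with hypothetical names; not either vendor’s actual code) of the virtual-to-physical map an in-band appliance keeps. An “image mode” entry is just a single extent covering the whole backend LUN, and the table itself is the only record of which host owned it:

```python
# Hypothetical sketch of an appliance's virtual-to-physical LUN map.
# Illustrative only; not actual SVC or USP data structures.
from dataclasses import dataclass, field

@dataclass
class Extent:
    backend_lun: str   # e.g. "arrayA:lun17"
    offset: int        # starting block on the backend LUN
    length: int        # number of blocks

@dataclass
class VirtualDisk:
    name: str
    extents: list = field(default_factory=list)

    @classmethod
    def image_mode(cls, name, backend_lun, size):
        """1-2-1 ('image mode') mapping: one extent spanning the whole LUN."""
        return cls(name, [Extent(backend_lun, 0, size)])

    def resolve(self, block):
        """Translate a virtual block address to (backend_lun, block)."""
        for ext in self.extents:
            if block < ext.length:
                return ext.backend_lun, ext.offset + block
            block -= ext.length
        raise ValueError("block beyond end of virtual disk")

# The data on "arrayA:lun17" survives a table loss, but nothing else
# records that host X's volume was a 1-2-1 pass-through to it.
vdisk = VirtualDisk.image_mode("hostX_vol1", "arrayA:lun17", size=2_000_000)
print(vdisk.resolve(500))   # -> ('arrayA:lun17', 500)
```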

2. The second use case was around how we move existing LUNs. Barry states that with the SVC you “simply move the data” from one storage unit to another. My question is how an SVC does this, since it is sitting outside of the storage units. I think you need to do the equivalent of reads and writes, check return status, and, if you don’t stop the application during the move, either block access to the data blocks that you are transferring or maintain a log for changes that occur during the data movement. And since you do not have a global cache, you have to do this all in the same SVC node memory to have consistency between the memory images of the “from” and “to” LUNs in the SVC. Someone mentioned that you do this on the “back end”. Correct me if I am wrong, but I think the “back end” is driven by the same memory and processor that drives the front end. I cannot see how this is done without some impact to the application, since this is all being done in the data path. Since the SVC is doing this in a SAN, there probably needs to be some SAN zoning involved as well. I believe there is a SAN in front of the SVC for fan in and another SAN in back of the SVC for fan out to the storage units.
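As a rough illustration of the bookkeeping this implies, here is a minimal sketch (hypothetical Python, not SVC’s implementation) of copying a LUN region by region while logging concurrent host writes so that already-copied regions stay consistent:

```python
# Hypothetical sketch of in-band migration with a change log. Real systems
# interleave run() with host I/O under locks; this sketch is serialized.
REGION = 1024  # blocks per migration region (arbitrary for the sketch)

class Migration:
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
        self.copied = set()                 # region numbers already moved

    def host_write(self, block, data):
        """Called from the I/O path while the migration is running."""
        self.src[block] = data              # source stays authoritative
        if block // REGION in self.copied:
            self.dst[block] = data          # patch regions already copied

    def run(self):
        for start in range(0, len(self.src), REGION):
            end = min(start + REGION, len(self.src))
            self.dst[start:end] = self.src[start:end]   # the read + write
            self.copied.add(start // REGION)

m = Migration(src=[1] * 4096, dst=[0] * 4096)
m.run()
m.host_write(10, 42)                        # a late write hits both copies
assert m.src[10] == m.dst[10] == 42
```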

In the case of the USP, the cache images of the “from” LUN and the “to” LUN are in the same data cache, and just by changing pointers in the control store (which is separate from the data cache) the USP effects the movement of data from one LUN to the other. This is done in the background with separate processors from the ones that are supporting the application I/O. With 128 processors, 512 GB of global cache (all processors can see the same consistent image of data in cache), and 24 GB of shared control store connected across 8 crossbar switches, there is plenty of processing power and cache bandwidth to move data without impact to the performance of the application.
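By contrast, a sketch of the pointer-based commit described here (again hypothetical; the real control store and cache directories are far more involved) shows why the cutover itself is cheap when both cache images share one global cache:

```python
# Hypothetical sketch: the "move" commits by repointing an entry in a
# control table kept separate from the data cache. Illustrative only.
control_store = {"hostX_vol1": "arrayA:lun5"}    # virtual LUN -> backing LUN
data_cache = {}                                  # one shared (global) cache

def read(vlun, block):
    return data_cache.get((control_store[vlun], block))

def complete_move(vlun, new_backing):
    # Background processors have already staged the data; committing the
    # move is a single pointer update, not a copy in the I/O path.
    control_store[vlun] = new_backing

data_cache[("arrayA:lun5", 0)] = b"payload"
data_cache[("arrayB:lun9", 0)] = b"payload"      # staged image of the copy
complete_move("hostX_vol1", "arrayB:lun9")
assert read("hostX_vol1", 0) == b"payload"
```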

3. Upgrades to the USP V can be done without disruption if it is a matter of updating microcode or other components of the USP V. Currently, upgrading one generation of USP to another will require the new generation to virtualize the old generation during the migration.

I am not redefining storage virtualization, as Barry claims. I am merely pointing out that what he calls “storage virtualization” does not fit the SNIA definition, which is the industry standard. As you’ll note below, this definition has two parts:

The first part defines virtualization as “the act of abstracting, hiding, or isolating the internal function of a storage system or service from applications, compute servers or general network resources for the purpose of enabling application and network independent management of storage or data.”

The SVC, Invista, DataCore, StorAge, etc., do not fit this part of the definition, since they do not provide “network independent management of storage or data”. From the looks of it, you are dependent on the FC SAN network.

The second part then adds:
“The application of virtualization to storage services or devices for the purposes of aggregating functions or devices, hiding complexity, or adding new capabilities to lower level storage resources.”

The USP can aggregate storage services for mainframes, direct attach servers, and with gateways, for VTL, archive, and NAS, to add new capabilities to lower level storage resources. With its large global cache and high end multi-processors, it can enhance the performance of storage that it virtualizes. With 224 external ports, each of which can be virtualized into 1024 virtual ports, it can increase connectivity and multi-pathing. The USP can provide a host of services like snap copies, copy on write, synch and asynch replication, device migration, tiered storage, dynamic provisioning, thin provisioning, wide striping, encryption, shredding, logical partitioning for QoS, etc. With gateways it can add new capabilities like virtual tape, file virtualization, deduplication, indexing and search, and long term compliant archive to lower level storage systems.

In the case of the SVC, the external storage systems often have higher levels of performance and storage services than the SVC itself.

The IBM developers aptly named the SVC a SAN Volume Controller, not a storage virtualization controller.


Comments (12)

Barry Whyte on 11 Nov 2008 at 4:52 am

Hi Hu,

Thanks for the update. A few points, as always:

1. I think you are getting stuck on details here. So are you saying that when you discover the LUN on the backend, you magically know that it’s a LUN that used to be visible to host X? No, you must have some management interaction to say this LUN needs to be presented to host X. That is all our “image mode” management piece does too. The mapping table is clustered, so it’s the same mapping table across all nodes in the cluster, and therefore no single point of failure. Again, I don’t see any difference from a conceptual user point of view, simply technical implementation differences. The user task of importing existing LUNs can be done on both systems.
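(A toy illustration of the clustering point, with invented names rather than SVC internals: replicate the table to every node, and any surviving node can still resolve a vdisk.)

```python
# Hypothetical sketch of a cluster-replicated mapping table; not SVC code.
class Node:
    def __init__(self, name):
        self.name, self.table = name, {}

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def map_vdisk(self, vdisk, backend_lun):
        for node in self.nodes:            # every update goes to all nodes
            node.table[vdisk] = backend_lun

    def resolve(self, vdisk):
        for node in self.nodes:            # any surviving node can answer
            if vdisk in node.table:
                return node.table[vdisk]

cluster = Cluster([Node(f"n{i}") for i in range(4)])
cluster.map_vdisk("hostX_vol1", "arrayA:lun17")
cluster.nodes.pop(0)                       # lose a node
print(cluster.resolve("hostX_vol1"))       # -> arrayA:lun17
```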

2. So yes, we need to read and write the data from old to new storage devices. This uses our internal CPU units, and yes, we have fewer than the USP. However, at the end of the day any migration of data requires a read and write of the data. Only once we have verified the data is in the new place do we update the mapping table to point to the new location. So again, technical nitty-gritty differences, but to the end user the feature is the same. We allow users to specify how much or how fast to migrate data, since ultimately it’s not the CPU or internal bandwidth that is the issue; it’s the reads and writes from and to disk that will limit performance. The disks are the eventual resting place of the data, not the cache.
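(Sketching that ordering, with made-up names: copy at a user-set rate, verify, and only then repoint the mapping table.)

```python
# Hypothetical sketch of throttled copy with verify-before-commit.
import time

def migrate(src, dst, mapping, vdisk, blocks_per_sec, chunk=256):
    for start in range(0, len(src), chunk):
        dst[start:start + chunk] = src[start:start + chunk]
        time.sleep(chunk / blocks_per_sec)   # user-specified migration rate
    if dst[:len(src)] != src:                # verify the new copy first
        raise IOError("verify failed; mapping still points at the source")
    mapping[vdisk] = "arrayB:lun9"           # commit only after verify

mapping = {"vdisk0": "arrayA:lun3"}
src, dst = [1] * 2048, [0] * 2048
migrate(src, dst, mapping, "vdisk0", blocks_per_sec=1_000_000)
print(mapping["vdisk0"])                     # -> arrayB:lun9
```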

3. So here you are agreeing with me that, technically, the USP system needs an outage / remapping with each upgrade, where the SVC does not.

As for the quote from SNIA, there are the two definitions as you have quoted them. I don’t know of a dictionary definition where you have to comply with all the defined meanings of a word or term; the second one, I’d say, describes SVC, DataCore, etc., perfectly.

While I don’t disagree with you, SVC requires a SAN to work, we don’t support direct attach nor mainframe (we have DS8000 for that), but other than Mainframe systems, do you have many direct atatch open system sales? Maybe because USP has a bundled internal switch you see this not a SAN…

You say the external storage systems offer higher levels of performance? The benchmark results at the SPC would generally disagree there.

As for the naming: as we were one of the first major vendors out with a storage virtualization product, the marketing team at the time was not sure how well the virtualization term would take in the storage industry, hence the conservative name. Hindsight would suggest using the V word would have been fine.

PS. Thanks for joining back in; it’s much more useful and interesting with a 2-way dialog :)

Chris M Evans on 12 Nov 2008 at 5:55 am

Hu/Barry

The SAN thing is confusing me. AFAIK, HDS recommends connecting the virtualised storage to the USP through a SAN rather than point-to-point. It allows diagnostic information to be collected (port errors, port utilisation, etc.) and maps many-to-one external array ports to USP ports.

So what’s the deal with saying SVC needs a SAN?

David Vellante on 12 Nov 2008 at 8:39 am

Hu…had to weigh in on this excellent discussion. I think the similarities are greater than the differences and both IBM and Hitachi can do a better job of breaking the decade long slog of LUN management, the never-ending search for contiguous free space and painful array migrations. Users should no longer put up with such productivity killing activities. Hitachi, IBM and others are proving this in the field and need to do a better job marketing these excellent capabilities.

Here’s my take:

http://wikibon.org/?c=wiki&m=v&title=IBM%27s_SVC_and_Hitachi%27s_USPV:_The_similarities_are_greater_than_the_differences

Thanks for reading – dave from wikibon.org

Hu Yoshida on 12 Nov 2008 at 8:06 pm

Thanks for all the comments, Barry, Chris, and Dave.

I did talk to a customer today who uses both the USP V and the SVC and he confirms that they both can do LUN management. The difference is in scale, performance, availability, and some enterprise functionality that is only available in the USP V, like Mainframe support and logical partitioning. Logical partitioning manages the QoS for applications which share the same storage resources like cache and spindle arms. The SAN Volume Controller is used for tier 2 or 3 storage in support of open systems applications, while the USP V is used for tier 1, 2, and 3 on open systems and mainframes. That is the main difference. The USP V can provide tier 1, and map tier 1 functionality across the external storage that attaches to it.

The SAN Volume Controller is basically the consolidation of volume managers that used to sit in the host servers. This is now a volume manager that sits in the SAN and manages vdisks which reside in mapping tables in the SAN Volume Controller. As Barry describes in his Nov 11 post, you need to create three SAN zones to implement the SVC, one for the SVC ports, another for the SVC and external disk controller, and a third for the host and SVC ports.
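Zoning syntax varies by switch vendor, so here is a neutral sketch of those three zones with invented port names (real zoning lives in the fabric switches, not in code):

```python
# Hypothetical members for the three SVC zones described above.
zones = {
    # 1. inter-node zone: SVC node ports see each other for clustering
    "svc_cluster": ["svc_n1_p1", "svc_n1_p2", "svc_n2_p1", "svc_n2_p2"],
    # 2. storage zone: SVC node ports plus external disk controller ports
    "svc_storage": ["svc_n1_p1", "svc_n2_p1", "ctlA_p1", "ctlB_p1"],
    # 3. host zone: host HBA ports plus the SVC ports they log in to
    "host_svc":    ["hostX_hba1", "hostX_hba2", "svc_n1_p2", "svc_n2_p2"],
}
for name, members in zones.items():
    print(f"{name}: {', '.join(members)}")
```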

The USP does not need a SAN to connect external storage. There is a maximum of 224 ports on the USP. Each FC port can be virtualized into 1024 virtual ports, so customers may opt to buy fewer physical ports and use a SAN to fan in multiple external storage ports to virtual ports on a pair of physical ports. We have a customer who replaced SVCs with a USP just to simplify his SAN.

Storage virtualization is much more than volume management or LUN migration. It enables the ability to aggregate services from gateways like VTL, archive, NAS, VMware, and other role-based servers and apply them to the same pool of storage in such a way that they can leverage the enterprise block services in a storage virtualization controller across lower level, lower cost, heterogeneous storage systems. For example, you can create an archive storage pool, thin provision it, tier it to an external storage device, and vault a copy of it halfway around the world using high performance, block level replication. This approach not only aggregates external services but truly commoditizes the external storage, since it inherits the enterprise functionality of the storage virtualization controller.

There is no denying that there is a market for SAN Virtualization controllers. IBM has the numbers to prove it. HP just joined the SAN Virtualization ranks with a product called the HP StorageWorks SAN Virtualization Services Platform, SVSP, which they are OEM’ing from LSI.

I am only asking that you differentiate storage virtualization, which the USP does, from what the SAN Volume Controller and the SAN Virtualization Services Platform do. Reread the SNIA definition for storage virtualization, which talks about network independence and the ability to aggregate services to enhance what you virtualize.

Tony Asaro on 12 Nov 2008 at 11:29 pm

There is a BIG difference between the USP-V and the SVC. The USP-V is a storage system that provides storage virtualization as a capability and the SVC is an appliance. The USP-V provides value as a standalone storage system and many end users are using it just for that. The USP-V can provide storage virtualization to manage 3rd-party storage as an additional function. That doesn’t invalidate the SVC but architecturally and fundamentally these two solutions are very different.

Barry Whyte on 13 Nov 2008 at 12:10 pm

I guess we are never going to agree here, but “architecturally” I agree they are different. I’d disagree that SVC doesn’t do tier 1 – we have many, many customers using DS8000, USP (and its clones) and DMX behind SVC.

Again, though, everything you quote in your comments above regarding the functionality you can do with SVC – it’s much more than the volume manager you suggest. We have dynamic cache partitioning to ensure QoS, we tier storage, and we allow you to locally mirror, FlashCopy, migrate, thin provision, etc., and provide enterprise level replication to the building next door, or halfway round the world too.

What I’m saying is that “functionally” both approaches to Storage Virtualization can achieve the same thing, can provide the same rich feature set across multiple storage devices.

Yes, the USP can stand alone as a disk controller, but many people find that buying the top end of our midrange storage products with SVC can provide performance and QoS as good as tier 1, if not better, for a fraction of the cost of a traditional monolithic box.

Open Systems Storage Guy on 14 Nov 2008 at 9:00 am

Well, this tier 1 disagreement is probably because you both define the tiers differently. There is no absolute line where you can say “this is always tier x for everyone”. If a company considers their mainframe storage tier 1 and their open systems storage tier 2, then by that reasoning SVC would not provide them tier 1 storage, as it does not support ESCON or FICON.

Hu Yoshida on 14 Nov 2008 at 10:26 am

Not to beat a dead horse, but I don’t agree with everything you say.

Why put a tier 1 storage system behind an SVC node that has no more than 8GB of cache and two front end ports? The SVC node becomes the lowest common denominator. I do not see how a small processor like that can add performance and equivalent functionality to a tier 1 control unit. Although the nodes are clustered for failover, only one node does the processing at a time. The only time you might put it in front of a tier 1 storage system is when you are migrating it, not for production use.

It is hard to see how you can do cache partitioning when you are sitting outside the external storage cache of a tier 1 storage controller. The only way that you can do cache partitioning is when you are accessing a modular storage system where the controller caches are separate anyway, and I would not call that dynamic.

They are not equivalent in functionality. Even when you ignore the ability to support direct attach and mainframe servers (non-SAN servers), the USP V/VM can support cache partitioning, virtual ports, host storage domains for safe multitenancy, journal based replication for three data center replication, encryption of data at rest, etc. These are things that only a storage control unit can do. Even if they could functionally do the same thing, there is a big difference in performance and scalability between a tier 1 storage controller with 512GB of global cache and 128 high performance processors and a one-processor node with 8GB of cache. That is where the architecture really makes a difference.

Let’s compare the cost of virtualization. As you point out, when you buy a USP V or VM you also get a tier 1 enterprise storage system. An SVC has to be clustered. Usually you sell 8-node clusters that are packaged in one frame, with additional switches for the front end to connect to servers and the back end to connect to storage. You don’t get any storage with an SVC. If you don’t want to buy additional storage, you can buy a diskless USP VM for about the same cost as your cluster of SVC nodes and FC switches and have more processing power, more usable cache, and more port connectivity.

I suppose we’ll continue to agree to disagree.

Customer Storage Expert on 16 Nov 2008 at 10:32 pm

I think IBM has to take the opportunity here to swallow hard and admit that their virtualization play at best stands on the long coattails that Hitachi provides.

As a customer, I’ve used both technologies in my enterprise storage environment. IBM gave me the SVC within the margin of a large mainframe server deal. With hours of IBM professional services to get the SVC configured correctly, we were finally able to migrate data.

When the USP-V was released in 2007, I convinced our management to acquire the technology for data migrations. The USP-V was as close to plug-and-play technology as I have ever worked with. After configuring the USP-V and migrating our data warehouse in a weekend, I removed the SVC clusters and haven’t looked back.

Hu Yoshida on 18 Nov 2008 at 7:47 am

Thanks to Customer Storage Expert, I am ending this thread.

ScottF on 25 Nov 2008 at 5:23 pm

Hu,

It’s fairly obvious that you don’t really understand how the SVC works. I’ve used USP-Vs and SVC clusters, and while I’m sure the internal workings are different, they do very much the same form of virtualization and accomplish the same goals. The main difference is that one is an appliance and the other is an array plus virtualization capability.

The main differentiator for me is that I can upgrade an SVC cluster to the newest generation of hardware without an outage, while I cannot upgrade the USP-V to the latest hardware without an outage. We’re back to a frame-level migration, just like the old days (i.e. the host must be re-zoned to the new frame… that’s an outage).

