Hu Yoshida's Blog - Vice President | Chief Technology Officer

SAN Volume Controller Revisited

by Hu Yoshida on Sep 17, 2009

Some of you may recall that late last year Barry Whyte of IBM and I had a discussion about storage virtualization as we do it with the USP V, versus SAN virtualization as IBM does it with the SAN Volume Controller (SVC), a cluster of appliances that sits in the middle of the SAN. The SVC virtualizes the SAN so that it looks like disks.

I recently had the opportunity to visit an SVC customer and found out that they have been reading our blogs and were very familiar with the discussion between Barry and myself. While they were very happy with the SVC’s ability to migrate data off of an EMC frame onto an IBM frame, they ran into difficulty when they also tried to migrate the SAN from Brocade switches to Cisco switches. That’s when it became apparent to them that IBM was really virtualizing the SAN and not the storage.

As you may know, the SVC sits in the middle of two SANs. One SAN presents virtual disks to the applications on the front end, and another SAN presents managed disks to the storage arrays on the back end. The SANs have to be carefully zoned so that they do not overlap. My memory may not be correct, but I think Barry also mentioned the need for another SAN just for the ports on the SVC clusters. This zoning of two or three SANs and the scrambling of LUNs between managed disks and virtual disks is what was causing the difficulty in migrating the SAN switches. So whatever benefit they gained from migrating from EMC to IBM storage, they paid for in the SAN migrations. Since the USP V does storage virtualization, it uses the LUNs in the external storage as though they were USP V LUNs, providing all the services of the USP V to storage systems that may not have any of this capability natively. It does not need to add additional SANs or scramble the LUNs for remapping. SAN switch migration for the USP V would not be a problem.
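To make the difference concrete, here is a deliberately simplified Python sketch of the two mapping styles described above. It is a toy model only: the class names, fields and example LUN are invented for illustration and are not either product's actual object model.

from dataclasses import dataclass, field

@dataclass
class ExternalLun:
    array: str          # e.g. an external frame being virtualized
    lun_id: int
    size_gb: int

@dataclass
class ApplianceStyleVirtualizer:
    """In-band appliance style: external LUNs are imported as managed disks,
    then separately remapped virtual disks are presented to hosts."""
    mdisks: list = field(default_factory=list)
    vdisks: dict = field(default_factory=dict)   # vdisk name -> backing mdisk index

    def import_lun(self, lun):
        self.mdisks.append(lun)
        return len(self.mdisks) - 1              # mdisk index

    def create_vdisk(self, name, mdisk_index):
        self.vdisks[name] = mdisk_index          # hosts only ever see the vdisk

@dataclass
class ControllerStyleVirtualizer:
    """Controller style: an external LUN is surfaced as an internal volume,
    so hosts address it the same way as native capacity."""
    volumes: dict = field(default_factory=dict)  # volume name -> external LUN

    def virtualize_lun(self, name, lun):
        self.volumes[name] = lun

lun = ExternalLun("external_frame_1", lun_id=42, size_gb=500)

appliance = ApplianceStyleVirtualizer()
appliance.create_vdisk("app_vol_01", appliance.import_lun(lun))

controller = ControllerStyleVirtualizer()
controller.virtualize_lun("app_vol_01", lun)

print(appliance.vdisks)    # {'app_vol_01': 0}: an extra mdisk/vdisk indirection
print(controller.volumes)  # {'app_vol_01': ExternalLun(...)}: one mapping layer

The point of the sketch is only the extra level of indirection; it says nothing about zoning, caching or data paths.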

I asked why they were converting from Brocade to Cisco, and their answer was that they were planning ahead for FCoE. I pointed out that the SVC may work well in an FC SAN, but may have to do a lot more work to guarantee delivery of packets in a lossy network like Ethernet. The SVC will have to be reworked in order to work in a non-FC environment, where packets may be dropped when the network gets congested. Since the USP V does its virtualization in the storage controller, we would be able to convert the front end ports to FCoE ports and not do a major revision of the storage virtualization functions.

So, as I said in my first post on this subject, the difference between the SVC and USP V virtualization is the difference between SAN virtualization and storage virtualization.


Comments (35)

Stephen Petersen on 17 Sep 2009 at 8:22 pm

So when can we expect to see FCoE front end ports? Surely that would make for some interesting solutions.

IBM continuously try to sell us SVC and I continuously tell them about the USP/USP V that we have.

Chris M Evans on 17 Sep 2009 at 11:38 pm

Hu

Sounds to me like the customer wasn’t using their SVC correctly. I don’t recognise the concept of “multiple SANs” to manage SVC (or USP V for that matter). A fabric is a fabric to my mind, wouldn’t the same logic also apply to UVM? Every implementation of SVC I’ve seen has ended up very fragmented and therefore difficult to manage.

I suspect that the major issue with any virtualisation is the way standards are applied. Being lazy and assuming virtualisation equals less effort and thought makes it a recipe for disaster. Have a look at my last blog entry posted earlier this week; I mention SVC head to head with USP V as the competition falls away. Are Hitachi planning a diskless version?

Chris

Martin G on 18 Sep 2009 at 12:27 am

Um Hu, can I respectfully suggest that you get someone to brief your goodself on FCoE and Data Centre Ethernet; it’s not lossy! iSCSI may be lossy and traditional Ethernet may be lossy.

But FCoE on Data Centre Ethernet implements flow control as we are used to on traditional FC. To all intents and purposes, FCoE is fibre channel as we are all used to; it should not be a huge stretch for IBM to implement FCoE in SVC.

Barry Whyte on 18 Sep 2009 at 2:31 am

Here we go again…

So a few points :

a) If the customer is aware of us and reading, please contact me as there should be no issue in converting between switch vendors – we support mixed SAN environments and I’m sure we can help. As Martin G points out in his “I spy FUD” post, Brocade have a great roadmap for FCoE and there is no need to convert for this reason alone.

b) I guess you don’t quite understand FCoE (maybe this was another ghost-written post – or was this one your own?). As Martin points out, FCoE runs over DCE, not GigE fabrics. This is fibre channel – lossless. Not iSCSI. You suggest that SVC can’t support FCoE easily in the future; I’d suggest that we can support it much more easily than USP. Simply swap the FC PCIe card in the node for an FCoE HBA and et voila. OK, so there are necessary software updates that will go along with that. However, if we wanted we could offer this as a field upgrade to existing hardware… would you do the same with USP?

c) I’m also struggling to understand your conclusion. How does this statement mean you can conclude (incorrectly again) that SVC virtualizes the SAN? Do we manage VSANs? Do we manage zones? No. We virtualize DISK. It’s not all just about moving LUNs around; maybe you should come for an SVC customer pitch and you will see it’s as capable as USP, and in some cases more so.

Joshua Sargent on 18 Sep 2009 at 8:14 am

Wow, Hu…I’m not sure where to begin with this post! =)

The SVC most certainly does not sit in the middle of “two SANs.” It also definitely does not require “another SAN” just for the ports on the SVC cluster. It requires two fabrics for redundancy, just like every other storage environment, including Hitachi’s.

The SVC does not “virtualize the SAN.” It virtualizes volumes, pure and simple. SAN switch migration with the SVC is not a problem if done correctly, and don’t take that to mean that it’s some complex operation, either! I’ve done several McData -> Brocade and McData -> Cisco, and yes, Brocade -> Cisco migrations for SVC customers without much planning and always without issue.

And who is planning to implement FCoE on a lossy Ethernet network?! This customer you reference surely knows what you have missed – that FCoE gets deployed on lossless CEE, not standard Ethernet. Your suggestion that the SVC will have to be “reworked” for a non-FC environment is pure FUD.

Sorry Hu – this post is wrong in so many ways, you should honestly consider retracting the post altogether.

Josh

sebastian thaele on 19 Sep 2009 at 12:40 pm

Wow, it’s hard to imagine that this entry was written by the CTO of a storage company. I hope it was done just with the intention of spreading some FUD (you know, “there is no bad publicity”) and not because of a lack of knowledge about SAN fundamentals.

Hubert Yoshida on 20 Sep 2009 at 4:14 pm

Thank you all for your comments.
I will respond by commenting on Barry’s point first:

A) The difficulty of converting a SAN with SVC embedded in it came from an SVC user. I had not thought of this, but it reminded me of your blog of 11 November 2008 where you described how you did the zoning for the SVC: https://www.ibm.com/developerworks/mydeveloperworks/blogs/storagevirtualization/entry/svc_hiw_part_2_import
I quote:
“Pre-application stop work…

Create new zone on switch that contains SVC ports
Create new zone on switch that contains disk controllers and SVC ports
Create new zone on switch that contains host and SVC ports
Create a new “host” at the controller that contains the SVC ports
Create an SVC object that contains the host ports

Stop and quiesce the application…

Unmap the LUN from the host at the controller.
Re-map the LUN at the controller to the SVC “host” object
Create a new vdisk that is in image mode using the existing “LUN” or mdisk
Map the new vdisk to the SVC host object
Re-scan the host to detect the SVC vdisk

Start the application…”

To Chris Evans’ and Joshua Sargent’s points, this sounds like two or three SAN zones, which could be partitioned subsets of a fabric, but they require restrictions of the name services.

Back to Barry’s points…
b) I do not understand how you can put an SVC between two DCE networks and guarantee delivery from the host to the disks behind the SVC; I would love to have a briefing on this from any of our readers. And, yes, I write all my blogs and posts.

c) I am simply agreeing with the developers of the SVC who very appropriately named this product the SAN Virtualization Controller. This was not named the storage virtualization controller. Weren’t you a part of this development team? You are virtualizing the SAN to look like virtual disks. It only works when it is in the middle of two SAN zones. It does not virtualize storage directly. The SVC virtualizes Managed disks which are presented through the SAN by remapping them to virtual disks. Why do you need to create another layer of management? Once the mdisk is read into the SVC cache you have a virtual image of the mdisk, why don’t you apply all the power of the SVC in managing the cache virtual image instead of creating a new management construct called a vdisk? I would be happy to attend an SVC briefing. Could you set that up for me?

Wolfgang Singer on 21 Sep 2009 at 3:52 am

Hu,
we met at the Storage Networking World and at some event in Switzerland. I respect you and your opinions. However, as seen from the posts of Barry Whyte and Joshua Sargent you must have been blatantly misinformed by some of your employees/colleagues. I would have some serious talks with them…
Please do not retract the post altogether, since it serves as a good example of misinformation about the SVC.

Best regards, Wolfgang

Chuck L on 21 Sep 2009 at 6:47 am

I’ve got to say that as an end user of the IBM SVC, I am surprised that anyone would have any problems migrating to different switch types. We have just purchased our first Hitachi storage unit (USPV) and we plan on placing it behind the SVC and migrating data from our current IBM storage unit. We love the SVC but plan on looking at the Hitachi virtualization over the coming year. If we find the virtualization equal in ease and performance to the IBM SVC then, as with everything else in today’s economy, it will come down to the cost of using the SVC versus the cost of the Hitachi virtualization.

Christophe Bertrand on 21 Sep 2009 at 8:35 am

Hi,

Christophe Bertrand here from HDS reacting to a comment.

Sebastian, perhaps you should identify to the readers where exactly you work and why you’d make such a comment on this particular post? In any event, a quick Google search will suffice.

Thanks

Joshua Sargent on 21 Sep 2009 at 11:09 am

Hu – a few points of clarification might help…

A. Zones are not fabrics. Zones are not SANs. The fact that the SVC requires additional zones certainly does not make it a fabric virtualization product.

B. I do not understand why the SVC would need to handle lossless FCoCEE (or FCoDCE, or FCoDCB) traffic any differently than it currently handles lossless FC traffic. Do you?

C. “SVC” stands for SAN Volume Controller, not “SAN Virtualization Controller.” Very appropriately named in my opinion, as it controls VOLUMES which reside on the SAN. It takes some physical VOLUMES, abstracts them, and creates virtual VOLUMES. It does not abstract physical fabrics to create virtual fabrics. (eg VSANs on Cisco MDS switches)

As to the reasoning for creating vDisks in the first place, the benefits are numerous! Here are just a few obvious ones:

1. Wide striping across multiple mDisks.
2. vDisk Mirroring (one vDisk with its contents mirrored on multiple mDisks)
3. Single point of management (all storage provisioning happens at the SVC, so users don’t need to know how to perform daily provisioning tasks for multiple vendors’ systems.)

Of course, if you don’t want to abstract the volume, you always have the option of using an “Image Mode” vDisk. Customers can also migrate their volumes from Image Mode to Managed Mode (abstracted) or vice versa…all non-disruptively.
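As a rough illustration of the image-mode versus managed-mode distinction, here is a toy Python sketch. The extent size, mdisk names and round-robin allocation are invented for the example; this is not IBM's actual extent allocator.

EXTENT_GB = 16  # hypothetical extent size

def image_mode_map(mdisk, size_gb):
    # Image mode: the vDisk is a 1:1, in-order map of a single mDisk,
    # so existing on-disk data is presented unchanged.
    return [(mdisk, offset) for offset in range(0, size_gb, EXTENT_GB)]

def managed_mode_map(mdisks, size_gb):
    # Managed mode: extents are spread across the pool's mDisks,
    # which is what enables wide striping (and gives up the 1:1 layout).
    return [(mdisks[i % len(mdisks)], i * EXTENT_GB)
            for i in range(size_gb // EXTENT_GB)]

print(image_mode_map("mdisk0", 64))
# [('mdisk0', 0), ('mdisk0', 16), ('mdisk0', 32), ('mdisk0', 48)]
print(managed_mode_map(["mdisk0", "mdisk1", "mdisk2"], 64))
# [('mdisk0', 0), ('mdisk1', 16), ('mdisk2', 32), ('mdisk0', 48)]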

So, Hu…please do attend an SVC briefing in the near future. How can you compete if you don’t understand your competition?!

Chuck L – Check to make sure the features you enjoy on the SVC are available with the USPV…including the ability to easily and cost-effectively remove/replace the virtualization device when the time comes. This operation is trivial for the SVC. You’ll also want to verify the performance of virtualized storage behind the USPV, as HDS hasn’t published any SPC benchmarks using virtualized storage…at least that I know of. SVC’s benchmarks are there for everyone to see.

Christophe Bertrand – why does it matter where Sebastian works? His post did not attempt to give readers the impression that he was a non-biased third party. If he had, then I would agree with you…but in this case he clearly did not mis-represent himself. For the record though, (since you’re checking) I do not work for IBM.

Sebastian Thaele on 21 Sep 2009 at 3:09 pm

Christophe Bertrand – If it’s important for you to hear it from me: I work for IBM’s customer support company. I have a plain technical job there, far away from anything “marketing-like”. I solve technical problems, including with the SVC, and I’m well aware which information about me is easily accessible via Google. My post was an expression of my honest surprise about Hu Yoshida’s statements, and it was a plain personal comment – I do not represent IBM’s opinion. That’s why I didn’t write “Sebastian Thaele, IBMer” or something like that. From your reaction, I see now that commenting on a technical-looking post on a storage blog that way was naive, even if the post is full of misleading information.

Hu Yoshida – It’s not the “SAN Virtualization Controller”, but it’s called “SAN Volume Controller”, because it “controls” (by virtualizing and managing) volumes in the SAN. But hey, that’s marketing. Let’s stick to the technical facts.

Disclaimer: All products are fine, don’t let my post influence your buying decision! :-)

Malcolm Muir (HDS-er) on 22 Sep 2009 at 6:05 am

Well Hu’s blog has certainly taken on a life of its own and managed to stray a fair distance away from Hu’s initial point – that the SVC requires excessive zoning compared to the USP V.

Hu’s suggestion that the SVC is virtualizing the SAN, rather than storage, is based on the extensive re-zoning exercise that one must incur (endure) as part of the SVC implementation process. It is a fact that it takes (2) zone entries for the SVC for every (1) zone entry for the USP V (when the external direct attach approach is used).

This is because Hitachi offers options to deploy storage virtualization on the edge, or through the fabric should one want to deploy it this way. Storage virtualization promises to reduce complexity – also, USP V has no need to remap LUNs to managed disks and virtual disks and of course no need to rezone the SAN at so many levels.

And when an SVC customer needs to deploy multiple IO groups (pairs of SVC nodes), which BTW happens more often than not, given the SVC platform’s scalability challenges, then the number of management points for both zone entries and vdisk/IO group ownership becomes a management nightmare. Remember, storage virtualization is supposed to reduce complexity!!!

I am sure it was for these reasons that the customer Hu spoke with said they had major challenges when migrating their SAN.

Barry Whyte on 22 Sep 2009 at 1:11 pm

Malcolm,

How many people have just one zone on their fabric? I’d suggest that every storage controller has its own zone, every host (or at least OS flavour) has its own zone… Most people don’t want a Windows host to be zoned with an AIX host, for security as much as anything else. It’s just good zoning practice to keep things apart. In that way SVC actually is no more complex to zone than USP. You create a zone for the backend disks containing SVC (then never have to touch it again). You create all your host zones for the different hosts and zone them to all or some of the SVC ports. Saying SVC needs 2 when USP needs 1 is a moot point and speculation about what issues the customer may have been having.

As for the comments about I/O Groups. It’s quite simple – it’s scalable. Since SVC can start low and scale out as your needs grow, it means we have a low entry point cost, can scale to large clusters (better performance than USP – just check out SPC) and therefore allows users to grow. The vdisk ownership is not an issue; as you need more performance, you add more node pairs and start creating vdisks on them. The backend storage is GLOBAL to all nodes, so there is no scalability challenge.

The outlay of buying a USP-V that happens to do a bit of archive like virtualization on the side, compared to the outlay for buying a scalable dedicated virtualization appliance is very different.

The FUD being spread about “complexity” and “2 vs 1 zones”, and Hu’s lack of understanding at a technical level, are what has caused the stir, and really, it’s all FUD and will remain so unless the customer concerned actually quoted what issue they may have been having.

Sebastian Thaele on 22 Sep 2009 at 1:36 pm

Malcolm,
you write: “It is a fact that it takes (2) zone entries for the SVC for every (1) zone entry for the USP V (when the external direct attach approach is used).”
This _would_ be a fact if you had a host-to-storage ratio of 1:1 (simplified). But from my experience with customer environments, there are very few customers with a ratio near 1:1, and all of the few I know have this ratio because they have only a few but very powerful and demanding host systems. The zoning in these cases is just a handful of zones.

The majority of the zones are host zones, which you need in the same amount if you have a USP V, and I hope you advise your customers to have granular host zoning as a best practice like anybody else, too.

Disclaimer: All products are fine, don’t let my post influence your buying decision! :-)
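To put rough numbers on the zone arithmetic being debated in this thread, here is a back-of-the-envelope Python sketch. The counting rules (one zone per host, one per back-end array, one intracluster zone for the appliance) are simplifying assumptions for illustration only, not either vendor's documented best practice.

def zones_with_inband_appliance(hosts, backend_arrays):
    # one zone per host (host ports + appliance front-end ports),
    # one zone per back-end array (array ports + appliance back-end ports),
    # plus one intracluster zone for the appliance nodes themselves
    return hosts + backend_arrays + 1

def zones_with_direct_attached_external_storage(hosts):
    # one zone per host; externally virtualized arrays are cabled
    # directly to the controller, so they never touch the fabric
    return hosts

for hosts, arrays in [(5, 3), (200, 3)]:
    a = zones_with_inband_appliance(hosts, arrays)
    b = zones_with_direct_attached_external_storage(hosts)
    print(f"{hosts} hosts, {arrays} arrays: appliance={a} zones, direct attach={b} zones")
# 5 hosts, 3 arrays: appliance=9 zones, direct attach=5 zones
# 200 hosts, 3 arrays: appliance=204 zones, direct attach=200 zones

Under these assumptions the extra zones are a fixed handful, so the relative overhead shrinks as the host count grows, which is the crux of the disagreement above.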

Malcolm Muir (HDS-er) on 23 Sep 2009 at 5:24 pm

To both Barry and Sebastian, you probably misunderstood my statement – “when the external direct attach approach is used”, meaning when direct physical external connections to the virtualized storage are used from the USP V, they don’t require zoning because they do not pass through the SAN fabric.

See the following diagram on flickr and check out the massive resources and scalability that the USP V offers, eg. 224 ports versus 8 (per SVC IOG or 64 per cluster) – http://www.flickr.com/photos/hitachidatasystems/3948460361/

So I’ll agree on one thing, that the ratio of zones of “host to switch” versus “switch to storage ports” is not 1:1, but once again my point is there is no requirement to define zones for “switch to virtualized storage ports” when the “external direct attach approach is used”.

And to Barry, re: “…can scale to large clusters (better performance than USP – just check out SPC) and therefore allows users to grow.”

Re: SPC-1 comparison – tough to compare apples to oranges. The SVC SPC-1 benchmark used a max 15ms response time and the USP V used a max 5ms. So find the 5ms intersection point in the SVC SPC-1 curve for a more apples to apples comparison. Not to mention that the SVC SPC-1 benchmark was carried out with 8xSVC nodes, virtualizing 16xDS4700 modular arrays, versus a single USP-V. So I guess this is not an apples to apples comparison here.

And also to Barry, re:”…. The vdisk ownership is not an issue, as you need more performance, you add more node pairs and start creating vdisks on them”

Re: vdisk ownership – vdisks are an I/O group owned (not cluster owned) resource and must be managed as such, meaning when one has to go through a vdisk I/O group workload balancing exercise and move vdisks from one IOG to another, this is disruptive to applications. This restriction does not exist on the USP V, as Tiered Storage Manager can non-disruptively migrate data anywhere within the Hitachi heterogeneous storage virtualization framework (once again, see the flickr diagram for understanding).

Hu Yoshida on 23 Sep 2009 at 7:05 pm

I am out of the country and busy doing my day job, so I have not been able to keep up with all the comments; allow me to do some catching up.

First to Wolfgang Singer, you and I have been around for many events. As I mentioned before, I received this information from an IBM customer and from reading Barry Whyte’s blogs.

To Chuck L., thank you for migrating from IBM to our USP V. Yes, SVC can do migration and so can the USP V. Why not use the USP V to do the migration and eliminate the extra steps of defining mdisks and zoning the SAN? It would be more straightforward to insert the USP V in front of the SVC and migrate directly into the USP V. Please drop me a note on how this turns out.

To Joshua Sargent,
A) An FC SAN zone, according to SNIA, is a collection of N_Ports and/or NL_Ports that are permitted to communicate with each other over a fabric. So you are correct, and I took some liberties in equating a SAN with a zone in a SAN.

B) The lossless capability of FCoE is between MAC addresses, where you manage the congestion loss by setting link priorities and pausing lower priority links. What if you have multiple data transfer requests that occur at the same time and you have to pause for a long data transfer? Do you get timeouts? The concern I have is that the SVC sits in the middle of the exchange between server and storage and acts as a storage proxy to the server. You have a server-to-SVC FCoE conversation for the vdisk, then you have a separate FCoE conversation with the mdisk. The server has to issue an I/O request which goes to the MAC address of a switch port and then to the MAC address of the SVC. No problem. Now there is another FCoE conversation from the MAC address of the SVC, to the MAC address of a switch, to the MAC address of the mdisk. How do you ensure that the server does not time out because the FCoE conversation to the switch and mdisk has been paused? How do you guarantee that the data the host has written is actually written on the mdisk? I do not claim to be an FCoE expert so I would appreciate learning how this is done.
As I told Barry, I would really like to attend an IBM SVC Briefing.

C) My original post correctly identifies the SVC as a SAN Volume Controller; unfortunately I referred to it as a SAN Virtualization Controller in my comment. It was a finger check on my part and nothing more. I apologize to you and to Sebastian Thaele and anybody else that I might have offended by this mistake.

SAN Volume Controller is a little restrictive if you want to do storage virtualization. What about DAS, NAS, and mainframe storage? A lot of storage is still on DAS, and some analysts like Andrew Reichman of Forrester are questioning the need for a SAN.

All the points that you were making about the SVC are done by the USP V, and more. The USP V can enhance the performance of most external storage through alternate pathing to a very large global cache. It can do multi data center replication as far away as you want, and do migration, tiering, and dynamic provisioning, etc. without performance degradation. It can scale to hundreds of PB, and provide virtualization for DAS, SAN, NAS, and mainframes.

My question for you is: If there is no FC SAN, could you still provide storage virtualization?

To Ruptured Monkey (Nigel), thanks for the link to your FCoE lesson. You refer to my apparent blunder over FCoE and lossy networks, though I am still not convinced that priority setting and pausing can handle congestion for storage. When storage goes, it’s got to go…

Martin G on 24 Sep 2009 at 12:28 am

Hu, I am really struggling now! You claim that the USP can provide virtualisation for DAS and NAS? Virtualisation for DAS? Are you suggesting that you can put a USP between the host and DAS? Directly connecting your host to the USP?

As for NAS? How does the USP virtualise NAS? It certainly can’t virtualise CIFS and NFS volumes unless I’ve missed a huge announcement? Or do you mean that you put a USP between a NAS head and your back-end storage?

My question is has a USP been deployed in a situation with no FC (or FiCON) SAN? Has a USP ever been deployed in a pure IP environment or in a DAS environment?

Bas Raayman on 24 Sep 2009 at 12:56 am

Hu,

in regards to the FCoE congestion handling I would refer you to a blog post made by @virtualgeek. You can find it under:

http://virtualgeek.typepad.com/virtual_geek/2009/06/why-fcoe-why-not-just-nas-and-iscsi.html

Also, for your concerns on congestion and flow handling, I would refer you to the 802.1 workgroup and read up on the 802.1Qbb standard found here:
http://www.ieee802.org/1/pages/802.1bb.html

I will grant you that this is still in draft, but one of the targets for this standard is definitely to get frame loss characteristics comparable to what you would see in any FC environment.
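For readers following the lossless-Ethernet part of this thread, here is a toy Python sketch of the per-priority pause idea behind 802.1Qbb: a congested traffic class is paused rather than dropped, while other classes keep flowing. It is purely illustrative; real PFC works with pause quanta at the MAC layer, not Python queues, and the buffer sizes and priorities below are made up.

from collections import deque

RX_LIMIT = 4    # hypothetical per-priority receive buffer, in frames
FCOE_PRIO = 3   # assume the FCoE traffic class is mapped to priority 3

class Receiver:
    def __init__(self):
        self.buffers = {p: deque() for p in range(8)}
        self.paused = set()

    def accept(self, prio, frame):
        self.buffers[prio].append(frame)        # there is no drop path
        if len(self.buffers[prio]) >= RX_LIMIT:
            self.paused.add(prio)               # PFC pause for this class only

    def drain(self, prio, n):
        for _ in range(min(n, len(self.buffers[prio]))):
            self.buffers[prio].popleft()
        if len(self.buffers[prio]) < RX_LIMIT:
            self.paused.discard(prio)           # resume the class

rx = Receiver()
pending = {FCOE_PRIO: deque(f"fc{i}" for i in range(10)),
           0: deque(f"lan{i}" for i in range(10))}   # ordinary LAN traffic

for tick in range(10):
    for prio, queue in pending.items():
        for _ in range(3):                      # sender bursts 3 frames per tick
            if not queue or prio in rx.paused:  # but honours the per-class pause
                break
            rx.accept(prio, queue.popleft())
    rx.drain(FCOE_PRIO, n=1)                    # the storage class drains slowly
    rx.drain(0, n=3)                            # the LAN class keeps up

print("left at sender:", {p: len(q) for p, q in pending.items()})
# Nothing is ever dropped: the congested class is throttled back to the
# receiver's drain rate instead of losing frames.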

Sebastian Thaele on 24 Sep 2009 at 8:29 am

Malcolm,
Why should I talk about this ratio if I had misunderstood the possibility of the USP attaching direct access storage? In that case I would just have written “SVC and USP need the same amount of storage and host zones”. But in most cases the few storage zones you have to add are just an apprentice piece for SAN administrators.

The comparison of apples and pears and the discussion about how much an apple should cost, I leave to the marketing people :-)

Disclaimer: All products are fine, don’t let my post influence your buying decision! :-)

Hu Yoshida on 24 Sep 2009 at 11:02 am

Martin G.
the USP has up to 224 FC ports which can be connected directly to servers or FC storage. Many of our users prefer to connect external storage directly to the USP to avoid the cost of a SAN and the management of zoning. On the front end we can support direct attach, but usually users connect over a SAN to take advantage of our virtual ports. Each physical port can support up to 1024 virtual ports, and each virtual port can be mode-set for different server types. Each virtual port is assigned its own address space, so there is no data leakage that can result from sharing the same physical storage port.

We also support attachment to Mainframes either directly or through FICON switches. We can virtualize a SATA array group behind a Mainframe FICON connection. We have NAS, Archive (HCAP), and VTL gateways that can leverage the enterprise services of the USP V, like block virtualization, Dynamic (thin) Provisioning, replication, etc. Our HNAS system does file virtualization, and when files are allocated to the USP, HNAS can migrate files across tiers of internal and external storage based on policies.
There are no restrictions on deploying a USP V without an FC SAN or FICON network. We do not have a USP V in a pure IP environment today. We used to offer a NAS blade in our previous product, but customers preferred to use gateways, so we dropped that option for the USP V.
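As a purely illustrative model of the virtual port idea described above: one physical port hosting several virtual ports, each with its own host mode and its own private LUN address space. The port name, host modes and volume labels are invented for the example; this is not Hitachi's actual configuration interface.

class PhysicalPort:
    MAX_VIRTUAL_PORTS = 1024      # the per-port limit mentioned above

    def __init__(self, name):
        self.name = name
        self.virtual_ports = {}   # vport id -> {"host_mode": ..., "luns": {...}}

    def add_virtual_port(self, vport_id, host_mode):
        if len(self.virtual_ports) >= self.MAX_VIRTUAL_PORTS:
            raise RuntimeError("virtual port limit reached")
        self.virtual_ports[vport_id] = {"host_mode": host_mode, "luns": {}}

    def map_lun(self, vport_id, host_lun, internal_volume):
        # each virtual port has its own LUN numbering, so two hosts can both
        # see "LUN 0" without ever seeing each other's volumes
        self.virtual_ports[vport_id]["luns"][host_lun] = internal_volume

port = PhysicalPort("CL1-A")
port.add_virtual_port("vp01", host_mode="Windows")
port.add_virtual_port("vp02", host_mode="AIX")
port.map_lun("vp01", 0, "volume A")
port.map_lun("vp02", 0, "volume B")   # same host LUN number, different volume
print(port.virtual_ports)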

To Bas Raayman

Thank you for the references you provide. Your comment is enlightening and I encourage everyone to follow your links.
Again my concern is end-to-end congestion management. With the SVC sitting in the middle, I question the ability to do end-to-end congestion management from the host to the physical storage. Since the USP V is the target to the host servers, we can do end-to-end congestion management. The FC target that is encapsulated in FCoE when it leaves the host server is the USP V; it is not the SVC sitting in the middle.

Fred Knight on 24 Sep 2009 at 11:14 am

When will FCoE exist on front end storage ports? It does today. NetApp is selling their array with the capability of end-to-end FCoE (over lossless Ethernet – aka DCB/CEE/DCE). Their array is also available under the IBM “NSeries” label. It is also available as the V-Series controller – which, like the SVC, goes between the host and other vendors’ storage.

So, if you want FCoE ports for your USP, use the NetApp V-Series to frontend your USP and you’ve got it!

Joshua Sargent on 24 Sep 2009 at 5:04 pm

Malcolm – If you have to resort to the fact that the SVC requires an extra 10 minutes worth of work to create a few zones, then the USP is on some seriously thin ice! I’d love to see that bullet in a customer presentation! =)

Hu – thanks for the response… Let’s clear a few things up now, shall we?

A. Thanks for acknowledging the SAN vs. Zone difference. I hate to debate semantics, but this one was pretty crucial.

B. I’m going to hold off responding to the various questions here, because we need to assume a certain level of knowledge regarding which FC functions get replaced by CEE functions and which ones do not. Once you understand that CEE’s main purpose (as it relates to FCoE) is only to provide lossless transport for the upper FC layers, then you would realize that your questions don’t have different answers than they already have today with standard FC fabrics. Hence, there should be a very minimal amount of work required to get the SVC running with FCoE.

C. Thanks for clarifying. It sounds like you are coming around and would agree that the SVC virtualizes volumes and not fabrics, correct?

The features I listed weren’t an attempt to show features that the SVC provides which the USP doesn’t…it was only in response to your question about the purpose of having this concept of vDisks. I disagree, however, that the SVC is “restrictive.”

I’m very puzzled by what you say about DAS, NAS, and mainframes…can you please explain yourself a bit more? If by “DAS” you mean you can attach hosts directly to the USP ports without needing a switch, this is foolish. Switch ports today are so much less expensive than front-end ports on a USP that this argument becomes silly in all but a few cases…mainframe probably being the most common. It is true that the SVC does not support FICON (though it can support Linux on mainframe!), but customers can easily provision their CKD volumes directly to the mainframe, and their fixed block volumes to the SVC…no problem. As for NAS, you can stick a NAS gateway in front of the SVC in the same manner you would put a NAS gateway in front of a USP…so I’m not sure what you’re getting at. Please explain further…

Joshua Sargent on 24 Sep 2009 at 5:14 pm

Martin – I wouldn’t doubt that several USPs are deployed without a fabric, particularly in mainframe environments where there aren’t enough host ports to justify the expense of deploying a fabric. That number of ports is not very high though, before it typically becomes much cheaper to deploy a fabric.

Hu’s point seems to be that with the SVC, you don’t even have the option to directly connect. The point is correct, but amusingly insignificant.

Hu Yoshida on 24 Sep 2009 at 7:15 pm

Fred Knight
Thanks for the comment. The reduction of attachment costs on the server side will drive adoption of FCoE. However, there is a large investment in FC on the storage side, and the demand for FCoE on storage will take some time to develop. HDS will provide native FCoE attach to storage products when demand justifies the cost of supporting this interface. In the meantime, switches that provide FCoE to the servers and FC to the storage are available today, and the USP V is qualifying on the switches and CNA adapters as they become available.

The NetApp V-Series is more similar to the USP V in its approach to virtualization than to the IBM SVC. NetApp is doing virtualization in the gFiler controller, similar to our virtualization in the USP V controller.

Martin G on 25 Sep 2009 at 7:37 am

Okay, I understand DAS; you are directly attaching the disk to the USP-V and then presenting it through a SAN generally. Theoretically, you could do a SANless deployment, but I suspect that it is an extremely unlikely situation for most people.

As for NAS, I can pretty much do that with SVC and a NetApp vSeries. Or more likely I would use a vSeries by itself to front any one of a number of arrays!!

As for SVC, can you explain why the situation is any different for FCoE as opposed to FC? Or do you simply question the SVC’s ability to do any form of congestion management?

Kash Shaikh on 26 Sep 2009 at 10:09 am

All,

If you’d like to hear from someone who is using FCoE over lossless 10 GbE in a real-life production environment, please join us for a live broadcast featuring our special guest Derek Masseth, Sr. Dir. of IT at the University of Arizona.

We introduced the Cisco Nexus 5000 in March 2008. The Nexus 5000 was the industry’s first FCoE switch. We have now shipped the Nexus 5000 to more than 1000 customers; 35% of the systems were shipped with FCoE licenses.

Derek is one of our Nexus customers who is taking advantage of the benefits offered by FCoE at the server access layer, between CNAs and the first-hop FCoE switch, over lossless Ethernet links based on the IEEE Data Center Bridging standards.

An incremental approach to FCoE, starting at the server access layer, provides the least disruption and investment protection with existing FC and other storage infrastructure, while offering most of the benefits of FCoE without waiting for an end-to-end FCoE infrastructure.

When: Tuesday, September 29, 2009, 10:00-11:00 a.m. PDT

Regards,
-Kash Shaikh
Cisco

Commodore C. on 14 Oct 2009 at 9:24 am

Instead of putting a proprietary controller (whether or not it’s bolted on to an array) in front of storage with the letter “V” in it (USP-”V”, S”V”C, “V” filer), why not just use storage that is already virtualized? All these layers of unnecessary infrastructure create data center sprawl. How much is a pair of ports on the USP-V? Seems like a waste of money using them for a tier three SATA drive array. Why not just put the SATA drives in that USP-V instead of having to buy two arrays? Assuming the array scales. Also, putting arrays behind a single array can’t possibly increase performance, availability or scalability.

Hu Yoshida on 14 Oct 2009 at 2:25 pm

Hello Commodore, I am not sure what you mean by storage that is already virtualized. If you mean managing storage from different vendors with a common interface and a common set of services, you need something in front. In our case it is a full function enterprise storage controller.

We can put multiple tiers of storage inside the USP, including SATA disks and flash drives, and dynamically move data across those tiers without disruption to the application. The real estate in the USP V is more expensive than on a modular two controller array. So SATA on external storage is usually cheaper than internal to the USP V, especially if it is a modular two controller array without the advanced functions of an enterprise array. The reason you would want to connect a low cost modular SATA storage array behind the USP V is to enhance that array by inheriting the USP V’s enterprise functions like Dynamic (thin) Provisioning, distance replication, dynamic tiering, path load balancing, and improved performance. In most cases where we attach a SATA modular storage array behind a USP we see a 30% improvement in throughput, because of the larger front-end USP V global cache, which also enables load balancing across the front-end ports. It increases availability by enabling critical data to be migrated off the modular storage during maintenance windows. You can scale the capacity beyond the maximum capacity of a USP V by adding modular storage arrays almost indefinitely.

Commodore on 14 Oct 2009 at 11:09 pm

Hi Hu, thanks for the reply. IMHO, what is needed to manage heterogeneous storage is software that is open and standards based. For instance, customers can use SMI-S SRM packages to manage their storage and SAN (switches and HBAs too). After all, this was the promise of the SNIA SMI-S standard.

This will allow the array(s) to handle the storage centric capabilities as layered applications. And then customers could buy storage that has all of the capabilities that are being positioned as “virtualization”, like thin provisioning, local replication, pooling of common disk types for macro-level administration, QoS, resource partitioning, replication, unified (NAS) connectivity, etc.

This way we wouldn’t have to consume expensive tier 1 storage resources (cache, ports, software licensing, etc.) just to do an I/O to a SATA disk that could have been serviced faster without the added latency of an in-band solution, or without the risks associated with a device that becomes stateful to mask the latency.

As for enhancing a modular array to inherit USP-V enterprise-like functions, I contend there are modular arrays that already have those USP-V functions that you mention. Again, if I need those types of service levels that are typically associated with a tier 1 monolithic array, then the LUN should live there. Availability, performance, and mitigating risk would all benefit from this less-hardware type of approach. Less hardware = higher availability. Not to mention that we wouldn’t have to move the LUN off of the modular array to the USP-V if that other modular vendor actually supported online code upgrades when their box is hanging off of the USP-V, like it does when it’s not, for instance.

The use case scenario you cite, where a LUN is temporarily migrated off of the modular array and onto the USP-V to do maintenance on the modular array, doesn’t work that easily (change control, the amount of time it takes to move the LUN, and you’d have to have that much free room on the USP-V, which supports my point that that’s where it should have been anyhow).

It’s just another unnecessary added risk. Why not just get a modular array that could do non-disruptive upgrades? And the performance I already addressed. With an in-band virtualization device you get your choice of added risk and added latency (stateful device), or the virtualization device acts as a pass-through, in which case functionality is lost, thus removing all value. Well, other than the single pane of glass, but again that’s what open SRM packages are for.

As for the single pane of glass, I still have to have, license and administer the modular storage. So now I get to dole out the disk twice: once to the back of the USP-V and then from the V out to the host. And I still owe the modular vendor maintenance money. Not to mention I would owe the in-band virtualization controller sales guy money for all of this, as I now have a cheap array behind a tier one array. Then there’s the power/BTUs of two arrays vs. one, etc., etc., etc.

I’m not sure I agree with the statement “You can scale the capacity beyond the maximum capacity of a USP V by adding Modular storage arrays almost indefinitely.” There are only 224 ports in the array? These are for backend, front end, replication, etc., right? So I would have to put a SAN behind the USP-V to scale beyond a one-to-one (USP-V interface port to a modular interface port). So now I have yet another layer of infrastructure: Host -> SAN -> USP-V -> SAN -> Modular Array. And if I want to replicate that LUN, I have to buy storage (tier 1) to put in the USP-V for the deltas to be packaged up for asynch replication.

In short, when one looks into the details I just don’t see the controller-based approach (be it SVC, USP-V, Incipient) as having value when one considers all of the opportunity costs, given there are already modular arrays out there that provide all of the functionality that is sold as virtualization. It’d be great if vendors that sell controllers to hang heterogeneous storage off of their gizmo put that energy into being open, truly open, and letting an SRM package manage the SAN (the entire SAN). Thanks for the dialogue.

FCoE Lesson #1 – Technical Deep Dive on 17 Dec 2009 at 6:23 am

[...] on from Hu Yoshidas apparent blunder over FCoE and lossy networks, I thought I’d do my bit to clear things up and shed some light.  Knowing a thing or two [...]

tbone654 on 03 Aug 2010 at 7:06 pm

I’m an IBM consultant supporting a Fortune 100 IBM customer. IBM Sales may know a great deal about the SVC but I can attest to the fact that the delivery organizations do NOT. We have been attempting to install a second SVC to help mitigate performance problems with a first SVC. Very important customer, yet it’s impossible to get installation and startup services, or design services, or even some help on how to zone an SVC into an environment. We are constantly told to read the manuals, which are vague and contradictory. Recently we were given 4 different answers to the question of how many Ethernet connections are required for an 8 node 5.1 implementation with SSPC: 2, 3, 6, or 9.

Our support organization has many SVC customers but cannot find anyone to provide more than a guess at how to design a solution. The answer “Just zone all the ports to one another” is seemingly the most common reply. I believe that’s how we got into all the performance problems with the first SVC implementation. And no architecture support.

To Commodore’s point… From an HP perspective… why put an XP24000 (monolithic array) behind an SVC when you could put an EVA (enterprise virtual array) behind it for a lot less money? For that matter, why do you need the SVC at all?

Customers are told they will have lower support costs if they put an SVC in between their storage and hosts, but the reality is the SVC only increases FTE requirements dramatically in my experience.

Friends don’t let friends SVC… It’s not supported…

Great conference organisers on 19 May 2012 at 4:10 pm

Many thanks for the insightful article. I found it really useful, keep up the good work!

[...] more for fun was the hot mess that Hu Yoshida got into with SVC back in 2009, with a follow up   later in a tit-for-tat between Hu and Tony Pearson of IBM over the SVC vs VSP [...]
