
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Virtual Ports and Host Storage Domains

by Hu Yoshida on Apr 24, 2006

I find that many people are confused about the value of virtual ports and host storage domains in HDS storage control units. This feature enables the attachment of 1024 heterogeneous FC host ports to one physical storage port on the USP or NSC control unit. (This feature is also available on our AMS and WMS storage arrays, with 128 virtual ports per physical port.) Each FC host connection comes through a virtual port and is assigned its own address space, or host storage domain, which cannot be seen or accessed by any other virtual port. This provides the scalable host connections and safe multi-tenancy that are required when many applications share the same physical resources in a virtualized environment. A host storage domain can be shared with another virtual port on a different physical port for alternate path support.
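
Here is a minimal sketch in Python, with hypothetical names, of the structure this describes: one physical port carrying many host storage domains, each an isolated LUN address space keyed by the initiator's WWN.

    # Minimal sketch with hypothetical names: one physical port carrying
    # many host storage domains, each an isolated LUN address space keyed
    # by the initiator's WWN.
    class HostStorageDomain:
        def __init__(self, host_wwn, host_mode):
            self.host_wwn = host_wwn    # initiator identified by its WWN
            self.host_mode = host_mode  # platform mode set per domain, not per port
            self.luns = {}              # private address space, starts at LUN 0

    class PhysicalPort:
        MAX_DOMAINS = 1024              # USP/NSC limit; 128 on AMS/WMS

        def __init__(self, name):
            self.name = name
            self.domains = {}           # host WWN -> HostStorageDomain

        def add_domain(self, host_wwn, host_mode):
            if len(self.domains) >= self.MAX_DOMAINS:
                raise RuntimeError("virtual port limit reached")
            self.domains[host_wwn] = HostStorageDomain(host_wwn, host_mode)
            return self.domains[host_wwn]

        def scan(self, host_wwn):
            # A rebooting host sees only the LUNs in its own domain.
            return sorted(self.domains[host_wwn].luns)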

Many other storage vendors claim to have the same capability, defining it as the ability to connect multiple hosts to the same physical storage port, with LUN masking as the way to provide safe multi-tenancy. Let's see what the differences really are by contrasting FC host connections on non-HDS storage controllers with HDS Virtual Ports and Host Storage Domains.

First, one needs to understand that different host platforms use different SCSI code pages. This requires that the storage port a host connects to be mode set to speak to that particular host platform. In the diagram below, let's say that Host 1 is an HP server, so a storage port must be assigned and mode set to speak HP-UX. Host 2 is a SUN server, so another storage port must be assigned and mode set to speak SUN Solaris. Hosts 3 and 4 are both Windows servers, so they can share a storage port that has been mode set to speak Windows.

Notice that Hosts 3 and 4 must share the same address space, so LUN masking is used to assign LUNs 0, 1, and 2 to Host 3, while Host 4 is masked to LUNs 3 to 7. However, if Host 4 should die for some reason and reboot, it will scan all the LUNs on that storage port before the LUN masking is enabled.
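
A rough sketch of that shared-address-space case (hypothetical names throughout): masking is the only thing separating the two hosts, so a scan that runs before the mask is enabled sees everything on the port.

    # Host 3 and Host 4 share one port and one LUN range, so masking is
    # the only thing separating them.
    port_luns = set(range(8))                # LUNs 0-7 on the shared port
    lun_mask = {
        "host3": {0, 1, 2},                  # Host 3 masked to LUNs 0-2
        "host4": {3, 4, 5, 6, 7},            # Host 4 masked to LUNs 3-7
    }

    def reboot_scan(host, masking_enabled=True):
        # If the mask is not yet enabled when the host scans after a reboot,
        # it walks every LUN on the port, including its neighbor's.
        return sorted(lun_mask[host] if masking_enabled else port_luns)

    print(reboot_scan("host4"))                         # [3, 4, 5, 6, 7]
    print(reboot_scan("host4", masking_enabled=False))  # [0, 1, 2, 3, 4, 5, 6, 7]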

The figure below shows how virtual ports and host storage domains are used. First, the mode set is done at the host storage domain and not at the port. Therefore all hosts (HP, SUN, Windows, etc.) can use the same physical port.

Each host is assigned its own address space even if they share the same physical port. Each address space starts from LUN 0, which is useful as a boot address. If Host 4 dies and reboots, it can only scan its assigned host storage domain; it cannot see anyone else's host storage domain.
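
A sketch of the contrast, again with hypothetical names: each domain maps its own LUN numbers, starting at 0, onto internal devices, so a rebooting host cannot even enumerate its neighbor's LUNs.

    # Each HSD maps host-visible LUN numbers, starting at 0, onto internal
    # devices (LDEVs), independently per host.
    hsd_map = {
        "host3": {0: "LDEV 00:10", 1: "LDEV 00:11", 2: "LDEV 00:12"},
        "host4": {0: "LDEV 00:20", 1: "LDEV 00:21"},  # also starts at LUN 0
    }

    def reboot_scan(host):
        # A rebooting host can only enumerate its own domain.
        return sorted(hsd_map[host])

    print(reboot_scan("host4"))   # [0, 1]; host3's LUNs are not even visible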

Virtual ports simplify the assignment of physical storage ports and enable them to be fully utilized. Most applications only transfer around 5 to 10 MB/s. With FC ports going to 4 Gb/s, and 10 Gb/s in the future, we would not be able to take advantage of this increase in bandwidth unless we were able to virtualize the storage ports.
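
The arithmetic behind that claim, assuming roughly 400 MB/s of usable payload bandwidth on a 4 Gb/s FC port (after 8b/10b encoding overhead):

    # Back-of-envelope arithmetic: one typical application uses only a few
    # percent of a 4 Gb/s FC port.
    port_mb_s = 400                    # approximate usable bandwidth, 4 Gb/s FC
    for rate in (5, 10):               # typical application rates in MB/s
        apps = port_mb_s // rate
        share = 100 * rate / port_mb_s
        print(f"{rate} MB/s app: {apps} apps to fill the port, {share:.1f}% each")
    # 5 MB/s app: 80 apps to fill the port, 1.2% each
    # 10 MB/s app: 40 apps to fill the port, 2.5% each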

Host Storage Domains provide a key requirement for consolidation and virtualization, which is safe multi-tenancy. I liken this to having us all stay at the same hotel where we each have our own private room, versus having us all share a common dormitory room.

While the main focus of virtualization is on the remapping of storage addresses, a virtualization solution without virtual ports and host storage domains would not have the ability to scale to tens of thousands of host connections and guarantee a secure multi-tenant storage environment.


Comments (6)

Pq65 on 25 Apr 2006 at 9:03 pm

Hi Hu,

I always find your posts very enlightening and insightful.

You mention in your post that 1024 initiators can connect to a physical target port. To me that says that each physical storage port has a queue depth of 1024. Assuming a reasonable LUN queue depth (32), you cannot have more than 32 outstanding I/Os per LUN on that port, or more than 32 LUNs for ANY host type, without being in danger of overrunning the target queue depth.

Of course you can connect 1024 initiators, but with what host queue depth? 1? ;-)
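
Spelling that arithmetic out, with the figures assumed above (not published limits):

    target_queue_depth = 1024        # outstanding commands one physical port accepts
    lun_queue_depth = 32             # assumed per-LUN queue depth

    print(target_queue_depth // lun_queue_depth)  # 32 LUNs at full depth
    print(target_queue_depth // 1024)             # queue depth 1 per initiator at 1024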

Regards

Alex, MSc student, Edinburgh on 27 Apr 2006 at 11:40 am

LUN masking can be done either at the storage controller or at the server level. I agree that host-based LUN masking is not the right thing to do. However, all midrange disk arrays I know support LUN masking at the storage controller level. In this case rebooted servers would not affect other hosts.

Most storage systems I know support different types of hosts, which are assigned by host adapter WWNs. Solaris and Windows hosts can be attached to the same physical port and have different address spaces (LUNs). This is true for all midrange disk arrays I know, and even for some low-end boxes.

So I still don't see any advantages; basically you do exactly what everybody else is doing, you just call it differently.

I don't know why HDS keeps mentioning this as their advantage; it is nonsense compared to the other real advantages they have.

The number of hosts that can be attached is another story, but personally I've never seen more than, say, 30 servers attached to one storage box. Even if we multiply this by 2 for multipathing, this is still way below the limitation of most products on the market. So this is a rather theoretical advantage.

Hubert Yoshida on 27 Apr 2006 at 12:28 pm

Thanks PQ65. You are right, the maximum limit is theoretical and is based on a maximum queue depth of 1024.

Hubert Yoshida on 27 Apr 2006 at 2:18 pm

Alex, MSc student Edinburgh

Hi Alex. There are many levels of mode setting, and one platform may have several depending on configuration differences like clustering. SUN and Windows can share the same mode setting in some cases, as you point out. In our systems we give them different mode settings.

There are actually four places you can do LUN masking. In addition to the server, you can do it at the storage port or at the LUN level. In our case we add a fourth level, which is the Host Storage Domain (HSD). In a storage controller without HSDs, you can only do it at the port or LUN level. While LUN-level masking may seem to have an advantage since it is more granular, it adds to complexity. With an HSD, we mode set the HSD and then drop LUNs into it without having to mode set or mask each LUN. Each HSD is a separate address space, so each HSD can see LUN 0 and assign LUNs consecutively. This helps to reduce the complexity of LUN management.
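
As a sketch of that workflow (hypothetical helper names): the mode set lives on the HSD, and LUNs dropped into it simply take the next number in that domain's private address space, starting at 0.

    # The mode set lives on the HSD, not on individual LUNs or the port.
    def make_hsd(host_mode):
        return {"host_mode": host_mode, "luns": []}

    def add_lun(hsd, ldev):
        # The LUN inherits the HSD's mode set; its number is simply the
        # next slot in this domain's private address space.
        hsd["luns"].append(ldev)
        return len(hsd["luns"]) - 1            # host-visible LUN number

    hpux = make_hsd("HP-UX")
    solaris = make_hsd("Solaris")
    print(add_lun(hpux, "LDEV 01:00"))         # 0: LUN 0 in the HP-UX domain
    print(add_lun(solaris, "LDEV 02:00"))      # 0: LUN 0 again, separate domain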

I admit that I am not an expert or a user of other vendors' storage, so please let me know how others are doing this.

I agree that this is not a key feature, compared to the other features in our storage systems, but I believe it is a differentiator in our overall virtualization solution.

Nigel on 11 May 2006 at 12:21 am

Hi Hu,

I've seen a lot of HDS Enterprise implementations where only one host type is assigned to a port, such as:
CL1-A – HP-UX hosts
CL1-B – AIX hosts
...
And the people who designed and manage these environments can get quite religious about not mixing host types on the same port.

I'm aware that you can mix different host types on a port using HSDs, but are there any performance or other advantages to not mixing different host types on the same port?

Strangely, none of these people have ever been able to give a valid reason for this, and I just assume it's something they have carried over from other vendors' implementations.

Thanks in advance
P.S. Great blog.

Ben on 09 Jun 2008 at 2:07 am

Any chance you could fix up the hosting of the images? They're currently broken links.
