
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Scale up for virtual servers!

by Hu Yoshida on Oct 19, 2009

Monolithic Storage Systems Developed for Mainframe Virtualization

Having been in the storage industry for some time now, I have the benefit of historical perspective. I started out when mainframe storage was the only external storage available. Mainframes were the original virtual servers, built to run multiple partitions of concurrent applications, which drove tremendous I/O loads across special processors called channels. In order to support this type of workload, storage vendors had to build monolithic storage systems that had multiple processors on the front ports to match the I/O load of the channels, a large global cache that could serve a consistent cached data image to multiple, load-balancing storage port processors, back-end processors that could write the data to back-end storage, and still other processors that could move data for business continuity. EMC developed the Symmetrix, IBM developed the Shark, and Hitachi developed the Freedom 7700, all built around these features to address the I/O requirements of mainframe virtual servers. With a global cache, monolithic storage systems can scale up and out by adding front-end processors, back-end processors, and cache modules.
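
To make the scale-up idea concrete, here is a minimal sketch in Python. It is not any vendor’s actual design; the class, method names, and numbers are invented for illustration. The point is that because every front-end port processor serves the same global cache image, a single heavy workload can be spread across all of them, and the system grows by adding processors and cache modules.

    # Minimal sketch of a scale-up monolithic array (illustrative only).
    class MonolithicArray:
        def __init__(self, front_end=4, back_end=4, cache_gb=64):
            self.front_end = front_end    # port processors facing the host channels
            self.back_end = back_end      # processors de-staging cache to disk
            self.cache_gb = cache_gb      # one global cache shared by all processors

        def scale_up(self, front_end=0, back_end=0, cache_gb=0):
            # Scaling up = adding resources behind the same shared cache image.
            self.front_end += front_end
            self.back_end += back_end
            self.cache_gb += cache_gb

        def serviceable_iops(self, per_fe_iops=20_000):
            # One concentrated workload can use every front-end processor
            # because they all serve a consistent view of the same cache.
            return self.front_end * per_fe_iops

    array = MonolithicArray()
    array.scale_up(front_end=4, cache_gb=64)   # grow for a heavier single workload
    print(array.serviceable_iops())            # 160000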

Lower-Cost Modular Storage Systems for Non-Virtualized Workstations

Shortly after that, workstations began to externalize their storage through the advent of the SCSI interface. Since a workstation served a single operating system and drove its I/O through a Host Bus Adapter, the I/O load was very low. As a result, storage systems for workstations did not need the high-end functions and cost of a monolithic storage system. All they needed was a single storage processor that could handle the entire I/O path from the front-end port, to the cache, to the back-end ports. For protection against data loss, a second processor was added in standby mode, and the write data in the primary processor’s cache was replicated to the secondary processor’s cache. It must be noted that a two-processor controller system is not a high-availability system. Even though the data is protected when one processor fails, good practice dictates that you stop and fix the failed processor before the secondary processor also fails. This was not a concern, since maintenance windows were usually available for applications running on a non-virtualized server. Because the two storage processors (storage controllers) could be mounted in a drawer in a standard 19-inch rack, with disk drawers added to the same rack and daisy chained to the controller drawer, these storage systems became known as modular storage systems. Modular storage systems with two storage controllers are limited in their ability to scale up or out.
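
For illustration only, here is a rough sketch of that dual-controller model; the names are invented and no specific product is implied. One active processor handles the whole I/O path and mirrors its write cache to a standby partner, which protects the data after a single failure but does not make the pair highly available.

    # Illustrative sketch of a two-controller modular array with cache mirroring.
    class ModularController:
        def __init__(self, name):
            self.name = name
            self.write_cache = {}

    class ModularArray:
        def __init__(self):
            self.primary = ModularController("A")    # active: front end, cache, back end
            self.secondary = ModularController("B")  # standby: mirror of the write cache

        def write(self, block, data):
            # All I/O funnels through the single active processor...
            self.primary.write_cache[block] = data
            # ...and write data is replicated to the partner for protection.
            self.secondary.write_cache[block] = data

        def fail_over(self):
            # Data survives one failure, but the system now runs unprotected:
            # good practice is to repair before the survivor also fails.
            self.primary, self.secondary = self.secondary, self.primary

    array = ModularArray()
    array.write(block=7, data=b"payload")
    array.fail_over()   # controller A fails; B takes over with a valid cache copy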

An Approach to Scale Modular Storage Systems

Because of their lower cost and simplicity of configuration, modular storage systems have become very popular, and many analysts predict that 80% or more of storage capacity will be on modular storage and that the demise of monolithic storage is imminent. However, modular storage systems lack scalability, which is a major disadvantage in a world exploding with data. To overcome this, we are starting to see a movement to scale out these modular systems by clustering them around Rapid I/O, Ethernet, or other types of switch technology. Scale out works as long as the workload also scales out. It does not work if the workload requires scale up.
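
The mismatch is easy to see with some made-up numbers: adding nodes raises aggregate throughput when there are many independent workloads to spread around, but a single concentrated workload is still capped by whichever one node serves it.

    # Illustrative only: scale out helps independent workloads, not one concentrated workload.
    def scale_out_throughput(nodes, per_node_iops, independent_workloads):
        # Independent workloads spread across nodes, so aggregate throughput grows.
        return min(independent_workloads, nodes) * per_node_iops

    def concentrated_workload_throughput(nodes, per_node_iops):
        # One concentrated workload lands on a single node; the other nodes sit idle for it.
        return per_node_iops

    print(scale_out_throughput(nodes=8, per_node_iops=20_000, independent_workloads=8))  # 160000
    print(concentrated_workload_throughput(nodes=8, per_node_iops=20_000))               # 20000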

Virtual Servers Are Driving the Need for Scale up Storage

While the demise of the mainframe is still debatable, we are seeing the rise of more virtual servers like VMware and Hyper-V. The future of servers is clearly virtualization, and the requirements they place on storage systems are similar to those of mainframe servers. Now there are five or more applications running on the same server platform, and the I/O load has increased in proportion. Also, when storage is not available, more applications are affected. This type of workload requires monolithic storage that can scale up as well as out and provide the performance, availability, and scalability required for virtual server environments. Simply put, a cluster of modular storage systems won’t be able to hack it. If you are looking at heavy/dense hypervisor/virtual server workloads, you would be better off with a monolithic DMX, DS8000, or USP V than a scale-out VMAX or XIV. EMC and IBM both understand this and are hanging on to their aging monolithic storage systems in case the server virtualization market really takes off and the VMAX and XIV can’t cope with the workload demands. While the initial acquisition cost of monolithic storage is perceived to be higher, the reality is that the operational costs of dealing with a cluster of loosely coupled modular storage controllers lead to a higher TCO. The reason is that you constantly have to firefight controller availability issues and deal with the sprawl problem inherent in modular storage.

A Scale out and Scale up Storage Solution for Virtual Servers

Today the best solution would be a monolithic USP V with low-cost modular storage virtualized behind it. Virtual server workloads also tend to be more random than non-virtualized workloads, so flash drives in the front-end USP V improve random read performance and are another example of how this combination can scale up. Here you have the best of both worlds: a monolithic front end that can meet the increasing performance and availability demands of virtual server workloads, and low-cost back-end capacity. Instead of simply scaling out by attaching more and more VMAX nodes to a VMAX switch, you can scale out by attaching modular storage behind a USP V and also scale up by leveraging the monolithic capability of the USP V. My fellow HDS blogger, Michael Hay, calls this “Cartesian Scaling.”
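
As a rough illustration of that combination (not the actual USP V implementation; the names and numbers are invented), think of a scale-up head that virtualizes external modular arrays: capacity scales out behind the head while performance scales up within it.

    # Illustrative sketch of a virtualizing head with external modular arrays behind it.
    class VirtualizingHead:
        def __init__(self, port_processors=8, cache_gb=256):
            self.port_processors = port_processors
            self.cache_gb = cache_gb
            self.external_arrays = []   # modular storage virtualized behind the head
            self.lun_map = {}           # virtual LUN -> (external array, internal LUN)

        def attach_external(self, array_name, luns):
            # Scale OUT capacity: add another modular array behind the head.
            self.external_arrays.append(array_name)
            for lun in luns:
                self.lun_map["v%d" % len(self.lun_map)] = (array_name, lun)

        def scale_up(self, port_processors=0, cache_gb=0):
            # Scale UP performance: more processors and cache in the head itself.
            self.port_processors += port_processors
            self.cache_gb += cache_gb

    head = VirtualizingHead()
    head.attach_external("modular-1", luns=["00", "01"])
    head.attach_external("modular-2", luns=["00", "01"])
    head.scale_up(port_processors=8, cache_gb=256)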


Comments (3)

Jon Toor on 19 Oct 2009 at 10:53 am

Hu aptly points out that storage requirements have come full circle, returning to the scalability demands of mainframe storage. This is certainly true both for storage and for storage I/O.

Hu further observes the historical parallels in I/O, namely the existence of “channels.” There is another parallel as well.

In the mainframe world, a device called the “ESCON Director” facilitated dynamic I/O management. Now Xsigo’s “I/O Director” performs the same role in the open systems world, and for the same reason. Bandwidth is a resource that can be used efficiently if you have the tools.

X86 servers didn’t originally demand much I/O. But that’s all changed now. With virtualization driving more bandwidth and a greater number of connections, the time has come again for smarter, more flexible I/O management.

Barry Whyte on 21 Oct 2009 at 4:01 pm

Hu, good to finally meet you last week. However, as always, I think this oversimplifies things.

With a single monolithic “head”, no matter how many modular storage devices you add behind the monolith, the scale-up model will soon hit a bottleneck in the monolith.

Wouldn’t a scale-out approach with monolithic controllers behind it, or partly behind it, be better?

Scale out first to scale the performance, keep the monoliths to support the mainframe, and add as many “heads” and modular blocks as you need behind a clustered system: surely that better meets the needs you present here?

Why would one limited monolith be better than an unlimited modular scale out approach? Am I missing something?

Hu Yoshida on 22 Oct 2009 at 3:26 pm

Hello Barry, yes it was a pleasure to meet you in person at Storage Expo in London last week. I always seek the opportunity to meet people I know in the blogging community.

First, let’s agree on what we mean by scale out and scale up. In scale out, the workload is distributed to many processors which work independently. In scale up, you combine the workload and concentrate it on a larger processor, or a multiprocessor that can apply multiple processors to that concentrated workload. Let’s also agree that server virtualization will concentrate multiple operating systems and applications onto one server, will drive up the utilization of the server, and will create a greater I/O load than a non-virtualized server of comparable processing power. Therefore server virtualization requires storage systems that can scale up.
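
To put some made-up numbers on that concentration effect: when the same applications that used to run on separate servers are consolidated onto one virtualized host, the storage behind that host sees their combined I/O as a single workload.

    # Hypothetical per-application I/O loads, for illustration only.
    app_iops = [3_000, 5_000, 2_500, 4_000, 6_000]

    # Non-virtualized: each application hits storage separately, so any one
    # storage port only ever sees the largest individual stream.
    print(max(app_iops))   # 6000

    # Virtualized onto one host: the streams are concentrated into one workload,
    # which is why the storage behind it has to scale up, not just out.
    print(sum(app_iops))   # 20500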

Yes, you are missing the point about the advantages of a “single monolithic head” for workloads that scale up. A monolithic storage “head” is really a large multiprocessor that was designed for scale up from the beginning. It consists of multiple processors that share a large global cache and can load balance the I/O workload of a single server across multiple storage port processors, while other processors handle different parts of the I/O load, such as de-staging to back-end disk arrays and replication to disaster recovery sites. The DS8000 is an example of a monolithic array designed to serve the I/O load of mainframe virtual servers.

Our USP V is a monolithic array that not only scales up, but can also scale out with tens of thousands of virtual storage ports and PBs of internal and external storage. With virtualization, the back-end processing is offloaded to the virtualized storage array’s processor. Additional features like Dynamic Provisioning also provide additional performance through wide striping across internal and external storage arrays.
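
As a generic sketch of why wide striping helps (this is not the actual Dynamic Provisioning implementation, just an illustration with invented names): pages of a virtual volume are spread round-robin across many array groups, so one volume’s I/O is served by many disks in parallel.

    # Illustrative round-robin placement of virtual-volume pages across array groups.
    def place_pages(num_pages, array_groups):
        placement = {}
        for page in range(num_pages):
            placement[page] = array_groups[page % len(array_groups)]
        return placement

    groups = ["AG%d" % i for i in range(16)]    # internal and external array groups
    layout = place_pages(num_pages=1024, array_groups=groups)
    print(layout[0], layout[1], layout[15], layout[16])   # AG0 AG1 AG15 AG0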

Modular storage “heads” consist of one active processor which has to do all the work that is done by multiple processors in a monolithic “head”. No matter how many modular storage heads you cluster together or scale out, each modular head has only one active processor that can service one host server at a time. That is scale out, not scale up.

If you use the SVC for this, you have the additional overhead of mapping managed disks to virtual disks, and your SVC processor has to work a lot harder with a limited amount of cache, especially if you also add the workload of thin provisioning and data migration.
