
Hu Yoshida's Blog - Vice President | Chief Technology Officer


A Services approach to niche storage solutions

by Hu Yoshida on Aug 25, 2007

While the cost of storage capacity is declining every year, the capital and operational expense of storage is increasing every year. Some of that expense is driven by the growing demand for retention of data, but I believe that a good deal of the increase is due to the proliferation of niche storage solutions. These stove pipe storage solutions require different management tools for the same storage functions, duplicate storage and networking resources, and limit the ability to respond to change. A services approach to storage is required to eliminate these stove pipe solutions.

What do I mean by stove pipe storage solutions and what do I mean by a services approach?

We can start with the biggest niche solution, the monolithic storage array, which is built by only a handful of storage vendors but runs most of our mission-critical storage applications. This type of storage has the highest availability because multiple storage processors can access one common cache image of the storage. The loss of one or two storage processors will not cause the loss of data, or of access to the data, since the remaining storage processors can still reach the data image in a shared “global cache” (provided the system is properly configured for alternate pathing). These systems also have a robust menu of business continuance solutions, have the highest connectivity and cumulative I/O rates, and are the only systems that support mainframes.

The next niche solution is the two-processor, modular storage array. It is generally referred to as modular because the controllers come in a drawer and are configured with other drawers that contain disk drives. The main characteristic of this solution is the lack of a global cache shared between the two processors in the controller. Each processor has its own cache, and writes are mirrored to the other cache to protect against data loss if one of the processors dies. If you lose one processor, you have lost half your cache, half your processing power, and all the storage ports that were connected to the failed controller. If you don’t want to risk a second failure taking down the surviving controller and losing the data in its cache, you stop and take a maintenance window before continuing production. This storage array was designed for open systems at a time when open systems needed only a few ports for connectivity, could afford maintenance windows, and did not require distance replication.
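To make the failure semantics concrete, here is a minimal sketch in Python. All names and sizes are hypothetical, chosen only to illustrate the point above: losing one controller in a dual-controller modular array halves the cache and ports, and leaves cached writes unmirrored until the controller is repaired.

```python
# Toy model (hypothetical names/sizes) of a two-controller modular array.
# Each controller has its own cache; writes are mirrored to the peer's cache.

class Controller:
    def __init__(self, name, ports, cache_gb):
        self.name = name
        self.ports = ports
        self.cache_gb = cache_gb
        self.alive = True

class ModularArray:
    def __init__(self):
        self.a = Controller("A", ports=4, cache_gb=8)
        self.b = Controller("B", ports=4, cache_gb=8)

    def fail(self, ctrl):
        ctrl.alive = False

    def surviving(self):
        return [c for c in (self.a, self.b) if c.alive]

    def usable_ports(self):
        return sum(c.ports for c in self.surviving())

    def usable_cache_gb(self):
        return sum(c.cache_gb for c in self.surviving())

    def writes_mirrored(self):
        # Write cache is only protected while both controllers are up.
        return self.a.alive and self.b.alive

array = ModularArray()
array.fail(array.a)
print(array.usable_ports())      # half the ports are gone
print(array.usable_cache_gb())   # half the cache is gone
print(array.writes_mirrored())   # cached writes are no longer protected
```

This is why, as noted above, a cautious shop takes a maintenance window after the first controller failure: running degraded means the remaining cached writes have no mirror.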

NAS is another niche storage solution, providing shared file access to data. In most cases the storage behind a NAS system is a modular array, because NAS runs on open systems, does not require a lot of connectivity to the array, and does not need controller-based replication since it can replicate at the file level.

Recently we have seen a proliferation of new storage solutions that address requirements for disk-to-disk backup (virtual tape libraries), content archive, nearline storage, database optimization, thin provisioning, and so on. These are built as proprietary vendor solutions and integrated with other vendors’ modular arrays.

A services approach to storage will eliminate the need for all these niche products by converting these storage solutions into services that run on a common platform with heterogeneous storage devices. For instance, instead of each solution developing its own replication capability for a particular vendor’s hardware, it could reuse a common replication service in the platform. This approach also enables these solutions to work together under common management instead of in stove pipes. Instead of a database optimization solution that places data on the inner and outer bands of a disk surface to minimize arm movement, it can use the wide striping of a thin provisioning solution to get more arms in play. There would be no need to lock the platform down to a particular vendor’s disk storage, and no need to compromise availability in order to use lower cost modular arrays.
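The reuse idea above can be sketched in a few lines of Python. Everything here is hypothetical (the platform, service, and solution names are illustrations, not any vendor's API): the point is simply that replication is implemented once on the common platform, and each solution calls it rather than shipping its own copy.

```python
# Illustrative sketch (all names hypothetical) of the services approach:
# the platform implements replication once, and every solution reuses it.

class StoragePlatform:
    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        self._services[name] = fn

    def call(self, name, *args):
        return self._services[name](*args)

def replicate(volume, target):
    # One replication implementation, shared by all solutions on the platform,
    # regardless of which vendor's (virtualized) array holds the volume.
    return f"replicated {volume} -> {target}"

platform = StoragePlatform()
platform.register("replication", replicate)

class VirtualTapeLibrary:
    def __init__(self, platform):
        self.platform = platform
    def backup(self, volume):
        return self.platform.call("replication", volume, "remote-site")

class ContentArchive:
    def __init__(self, platform):
        self.platform = platform
    def archive(self, volume):
        return self.platform.call("replication", volume, "archive-tier")

print(VirtualTapeLibrary(platform).backup("vol01"))
print(ContentArchive(platform).archive("vol02"))
```

Contrast this with the stove pipe model, where the VTL and the archive each carry their own replication engine tied to one vendor's hardware, each with its own management tool.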

So how do you develop a service oriented storage solution that answers all these requirements? Hitachi’s approach started with the separation of the high availability, multiprocessor, global cache controller from the backend disk arrays in our monolithic storage array. This opened up the attachment of the enterprise-class USP control unit to virtually any vendor’s disk array that attaches through standard FC interfaces. Now modular disk arrays can use all the services of a large enterprise-class storage system, such as distance replication of consistency groups and mainframe attach. Next we added file services using NAS blades and NAS heads, like the HNAS solution we developed with BlueArc, VTL and deduplication solutions with Diligent, and content services with the acquisition of Archivas.

Because we started with an enterprise storage control unit architecture designed for high performance and scalability, we can continue to add functionality to this platform and make it available as a service to any storage that attaches to it, as well as to any application that attaches to it for block, file, or content services. A case in point is HDP, Hitachi Dynamic Provisioning, or thin provisioning, which we added to the USP V that was announced this spring. Because it is delivered on the USP platform, it is available as an enterprise-class solution to any application that can benefit from it. It will work with other services like ShadowImage, and it will be available to any external storage (due to be released in 3Q) attached to the USP V. Thin provisioning from other vendors requires a niche product with a cluster of modular storage arrays, which will lack enterprise functionality, scalability, and availability. Watch for other services to be added to this platform.
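For readers unfamiliar with thin provisioning, the mechanics can be sketched as follows. This is a toy model with hypothetical names and sizes, not the HDP implementation: a virtual volume reports a large size up front, but physical pages are allocated from a shared pool only on first write, and successive pages land on different disks (wide striping) so that more disk arms share the workload.

```python
# Toy thin-provisioning sketch (hypothetical names/sizes, not any product's
# actual design): allocate-on-write from a shared pool, round-robin striping.

class ThinPool:
    def __init__(self, num_disks, pages_per_disk):
        self.num_disks = num_disks
        self.free = [pages_per_disk] * num_disks
        self.next_disk = 0  # round-robin pointer for wide striping

    def allocate_page(self):
        for _ in range(self.num_disks):
            d = self.next_disk
            self.next_disk = (self.next_disk + 1) % self.num_disks
            if self.free[d] > 0:
                self.free[d] -= 1
                return d
        raise RuntimeError("pool exhausted")

class ThinVolume:
    def __init__(self, pool, virtual_pages):
        self.pool = pool
        self.virtual_pages = virtual_pages  # size reported to the host
        self.map = {}  # virtual page -> disk, filled only on first write

    def write(self, page):
        if page not in self.map:
            self.map[page] = self.pool.allocate_page()

pool = ThinPool(num_disks=8, pages_per_disk=100)
vol = ThinVolume(pool, virtual_pages=10_000)
for p in range(16):
    vol.write(p)

# Only 16 of 10,000 virtual pages consume physical space, spread over 8 disks.
print(len(vol.map))                   # 16
print(sorted(set(vol.map.values())))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The striping is what makes thin provisioning interesting beyond capacity savings: every volume's pages are spread across the whole pool, which is the "more arms in play" effect mentioned earlier.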

Virtualization is old news. It is being replaced by services orientation. Storage virtualization is most useful as an enabler for services orientation. Service oriented storage is part of the whole movement to services, along with Service Oriented Architecture (SOA) for applications and Service Oriented Infrastructure (SOI) for the infrastructure. All three, SOA, SOI, and SOSS, will be required for a dynamic data center. While SOA can reduce the cost of application development and management, it needs an SOI so that it is not tied down to a rigid infrastructure for routing, load balancing, logging, etc. It also needs an SOSS so that it is not tied down to a number of niche, stove pipe storage solutions.


Comments (1)

Stephen Foskett on 27 Aug 2007 at 8:27 am

Hu,

Boy are you right on with this post. Virtualization is a technology, not a solution to a (non-IT) business issue. Same for de-duplication and solid-state drives and the rest. We IT industry folks have to learn to apply all of our cool ideas to real business problems rather than getting carried away with the possibilities of our ideas.

Your service-oriented storage concept is aligned with what I’ve been saying for a long time. It’s not that IT should hide complexity from end users – instead, we must speak to the beneficiaries of our technological capabilities in a way that makes sense to them. Talk about business benefits, risk/cost balance, and return on investment and watch them start applying correct technology.

I look forward to your future commentary as a way to push service-oriented IT concepts!

Stephen
