
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Redefining scale-up and scale-out

by Hu Yoshida on Dec 13, 2010

This past year, a number of vendors and analysts have been touting the virtues of scale-out storage. Scale-out storage is always contrasted with scale-up storage. A typical description of scale-out storage says that it provides rapid expansion of storage systems through the horizontal addition of storage nodes (servers with cache and disks) in support of fast-growing applications like VMware. Scale-up architectures are described as vertically aggregating lots of individual disk drives behind one or two super-sized controller servers.

These descriptions of scale-out and scale-up are outdated. While the basic concept of scale-out storage is the ability to increase performance, capacity, and throughput by the non-disruptive addition of storage resources, some implementations of scale-out will not accomplish this.

A common way to implement a scale-out system is to loosely couple whole storage nodes across a switched network such as Ethernet, RapidIO, or InfiniBand. From a storage-centric view, this enables the rapid expansion of storage resources, but from an application view, it does not support the needs of a fast-growing application.

An application talks to a volume. A volume is created in one of the storage nodes, and the storage resources it can access are limited to that one node, no matter how many other nodes are clustered around it. To make use of this type of configuration, you need a high-performance computing or NAS system with a parallel or global file system that can distribute the workload across the nodes. But even with this distribution of workload, each piece of that workload can only access the resources in one node; it cannot reach more cache or controller processing power in other nodes. Scaling out by adding storage nodes is also inefficient: even if you only need more cache, you have to add a whole storage node, which usually includes two controllers with two sets of cache modules and some number of disks.
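To make that limitation concrete, here is a minimal sketch (hypothetical class and volume names, not any vendor's API) of a loosely coupled cluster in which each volume is pinned to its home node, so adding more nodes does nothing for a volume that has outgrown the node it lives on.

    # Sketch of a loosely coupled scale-out cluster: each volume is created in
    # exactly one node and can only use that node's cache and controllers.

    class StorageNode:
        def __init__(self, name, cache_gb, controllers, disks):
            self.name = name
            self.cache_gb = cache_gb
            self.controllers = controllers
            self.disks = disks

    class LooselyCoupledCluster:
        def __init__(self):
            self.nodes = []
            self.volume_home = {}          # volume name -> owning node

        def add_node(self, node):
            # "Scaling out": a whole node (controllers + cache + disks) at a time.
            self.nodes.append(node)

        def create_volume(self, vol_name, node_name):
            self.volume_home[vol_name] = next(n for n in self.nodes if n.name == node_name)

        def resources_for(self, vol_name):
            # I/O to this volume can only draw on its home node, no matter how
            # many other nodes are in the cluster.
            node = self.volume_home[vol_name]
            return {"cache_gb": node.cache_gb, "controllers": node.controllers}

    cluster = LooselyCoupledCluster()
    cluster.add_node(StorageNode("node1", cache_gb=64, controllers=2, disks=60))
    cluster.add_node(StorageNode("node2", cache_gb=64, controllers=2, disks=60))
    cluster.create_volume("vmfs_datastore", "node1")
    print(cluster.resources_for("vmfs_datastore"))   # still only node1's 64 GB of cache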

A better way to scale out storage

A more efficient way to scale out storage systems is to tightly couple storage resources over an internal switch or network, where individual components can be added as needed without having to add an entire storage node.

For instance, cache or processor components can be added to increase performance without adding capacity or front-end or back-end directors. This lowers the incremental scaling cost and does not require additional software to distribute the workload, since the application has access to the additional resources as a pool of common resources.

These resources can be partitioned and shared with multi-tenant applications. The partitioning can be done dynamically, so that an application can get more resources during peak periods to maintain its QoS. You can also use this with a parallel or global file system, so that each part of the distributed workload can access a pool of common storage resources. This tightly coupled scale-out architecture can also scale up when an application like VMware requires more performance to support additional virtual machines, or Oracle RAC needs to expand the capacity of a table with ASM.
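As a rough illustration of the difference, the sketch below (hypothetical names and numbers, not Hitachi's actual interfaces) models a shared pool in which cache or processors can be grown independently of capacity and repartitioned among tenants at run time.

    # Sketch of a tightly coupled pool: cache and processors are global
    # resources that can be added individually and repartitioned dynamically.

    class SharedResourcePool:
        def __init__(self, cache_gb=0, processors=0):
            self.cache_gb = cache_gb
            self.processors = processors
            self.partitions = {}           # tenant -> share of the pool (0.0 - 1.0)

        def add_cache(self, gb):
            # Scale performance without buying capacity or another whole node.
            self.cache_gb += gb

        def add_processors(self, count):
            self.processors += count

        def set_partition(self, tenant, share):
            # Dynamic partitioning: give a tenant a larger share during its
            # peak period to hold its QoS, then shrink it again afterwards.
            self.partitions[tenant] = share

        def allocation(self, tenant):
            share = self.partitions.get(tenant, 0.0)
            return {"cache_gb": self.cache_gb * share,
                    "processors": self.processors * share}

    pool = SharedResourcePool(cache_gb=256, processors=8)
    pool.set_partition("vmware_farm", 0.50)
    pool.set_partition("oracle_rac", 0.30)
    pool.add_cache(128)                       # every tenant's share grows with the pool
    print(pool.allocation("vmware_farm"))     # {'cache_gb': 192.0, 'processors': 4.0}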

What do you think? Do you think scale-up and scale-out storage need to be redefined?


Comments (2)

Amrith Kumar on 18 Dec 2010 at 6:01 am

I feel that the definitions of “Scale-Up” and “Scale-Out” are sufficient and are used in defining a broad range of solutions. Maybe what you are looking for is a new classification of storage systems that is more closely aligned with the terms “SMP” and “MPP”.

As I understand it, you are making the distinction between physical storage (the drive, the spinning platter, or the SSD) and the immediate neighbors that make this storage usable (caches, processors, techniques for error handling, …).

Maybe what you are looking for are terms like “Symmetric Multi-Storage” and “Massively Parallel Storage”? These are not good terms, but I think they illustrate the analogy between the concept you propose and the terms SMP/MPP.

Hu Yoshida on 20 Dec 2010 at 12:24 pm

Amrith, thank you for your thoughtful comment.

The problem I have with the current definitions of “scale-out” and “scale-up” as they are applied to storage is that they are used in an either/or context. They do not allow for a storage system that can do both, with more granularity than what is generally assumed for scale-out storage. I also think that analysts and vendors who promote “scale-out” storage are misleading customers by implying that this type of architecture is a low-cost, modular way to increase performance.

Your suggestion of applying an SMP and MPP processor analogy to storage is interesting. If I look at the definitions of SMP and MPP according to PC Magazine (http://www.pcmag.com/encyclopedia_term/0,2542,t=MPP&i=47310,00.asp), the difference between the two architectures is very clear: each subsystem in an MPP is self-contained, with its own processor, memory, operating system, and application, and communicates with the other subsystems over a high-speed interconnect, while the CPUs in an SMP share the same memory, operating system, and application.

If I apply this analogy to storage, most “scale-out” storage would be classified as Massively Parallel Storage, since it consists of self-contained storage nodes and, like an MPP, it cannot scale unless the application is distributed across the storage nodes. An SMP is similar to what the Hitachi VSP and USP do in terms of increasing performance and throughput. In the VSP/USP, we share the same cache and can share the same storage ports and back-end storage. The VSP has the added ability to share its pool of global VSD processors to increase performance and add functionality like dynamic tiering and replication. However, unlike an SMP processor, the USP/VSP connects all the processors, cache, and back-end storage through a crossbar switch, which gives it increased modular scalability. In other words, we can scale a VSP/USP by adding redundant pairs of processors, cache modules, and/or internal or external storage, depending on whether we need additional sequential performance, random performance, capacity, or additional functionality like dynamic tiering.

I would be interested in hearing other thoughts on how we can define storage architectures.
