by Michael Hay on Sep 14, 2012
Scaling-out is a well-oiled, in-vogue term today. There are, of course, other related scaling terms like scaling-up, scaling-down, or my recent favorite, scaling-right. These terms imply different things in different contexts. For instance, we can use scale-up to mean that a product or technology is moving from one market segment to another: "NetApp is still attempting to scale up their product to be enterprise class." In another case, we can use it to describe a product improving an attribute: "Hitachi can add multiple Virtual Storage Directors to scale up performance." I find the term scaling-down more interesting; the first time I heard it was in 1999, from Linus Torvalds, in reference to the Linux kernel.
Delivering last night’s keynote to a boisterous LinuxWorld crowd here, Torvalds prodded open-source programmers to shy away from the “sexy” task of scaling up the OS to compete with commercial Unix flavors. Instead, he said, programmers should actually focus on scaling down the operating system for user-friendly use on devices from desktop PCs to PDAs. (ZDNet, Linux takes aim at the desktop, 1999)
HDS is one of the few companies I know that intentionally scales down capability and function from enterprise-class systems into midrange devices. Here are four examples:
- Mainframe MLPF/LPAR to Intel architecture LPARs on our Compute Blade platform
- Enterprise storage ShadowImage LUN/volume cloning to our midrange storage
- Mainframe-class bus fabrics and I/O capabilities to intensive I/O expandability and bus-based fabrics on our Compute Blade platforms
- Enterprise storage-based thin provisioning to our midrange storage
We take this path for several reasons. One is ensuring core feature consistency; another is that once a capability or feature is mature enough for our enterprise customers, we know it is sufficiently hardened for consumers of midrange or distributed systems. Obviously, this might make you wonder: what's up our sleeve?
I think the title of this blog might be better stated as: "What can you predict from Hitachi's implementations today?" I'll illustrate the point with an example. A long time ago, in a storage universe not so far away, Hitachi sold both the enterprise-class 9980V and the midrange 9500. Towards the end of the product lifecycle we introduced something called "cross system copy," which allowed a user to replicate data from the 9980V to the 9500 for disaster-recovery purposes. A little while later we debuted the Universal Storage Platform (USP), one of the first products to embed storage virtualization inside an intelligent controller, an approach that has ultimately come to dominate the market. In this example we were able to trial storage virtualization for a limited use case, disaster recovery, and observe a set of core behaviors. Learnings from those observations were then folded into a more general storage virtualization feature around 2005.
As I touched on briefly in a recent Speaking in Tech podcast, if we look at the HDS enterprise storage portfolio today we have a lot of interesting IP: Hitachi High Availability Manager, non-disruptive migration, rock-solid UVM, pervasive use of Intel microprocessors, value-added firmware, etc. What might you imagine we'd scale down next?