Oh, the Commodity of it All!!
by Claus Mikkelsen on Nov 8, 2009
Sometimes, when writing these blogs, I start seeing myself as more and more of a “storage historian”. Maybe I’ll petition someone within HDS to have my title changed. Well, maybe not; who really needs one?
So, what brought this to light was reading Josh Krischer's rather (very!) excellent white paper on storage architectures. Although it is entitled "Storage is Still Not a Commodity: an Updated Comparison of High End Storage Subsystems", it talks little about the "commodity" part but a lot about the storage architecture part. That's good. My take on this is that storage is not a commodity, obviously, but it used to be.
Prior to April 1992 (I'll explain that date in a moment) storage was indeed a commodity. The only metrics we storage vendors had to promote our products were performance, reliability (anyone remember R-Plus?), and price, or as most customers would say: price, Price, and PRICE. Now that's a commodity!! April 1992 was when the first "intelligent" storage function was released by IBM (Concurrent Copy, which introduced what is now called Copy on Write). Josh got it a little wrong when he said EMC was the first to introduce feature/function. A couple of other corrections: the days of the XRC/PPRC/SRDF rollout were a blur of two vendors trying to roll out function that was a little ahead of "prime time"; it's hard to say who beat whom, since it was all happening at the same time. Also, my position is that EMC's TimeFinder was really just a reaction to the availability of STK's Snapshot on the Iceberg and IBM RVA, after IBM went on a marketing blitz for it.
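For those who didn't live through that era, the copy-on-write idea is simple enough to sketch in a few lines. Below is a minimal, hypothetical Python illustration (the class and method names are mine, not IBM's): creating a snapshot costs almost nothing, and a block's original contents are preserved only when that block is first overwritten.

```python
# A minimal sketch of the copy-on-write idea behind point-in-time copy
# features such as Concurrent Copy. Names are illustrative only, not any
# vendor's actual API.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # live data, addressed by block number
        self.snapshot = None         # block number -> preserved original data

    def take_snapshot(self):
        # A snapshot starts out empty; no data is copied up front, which is
        # why creating one is nearly instantaneous.
        self.snapshot = {}

    def write(self, block_no, data):
        # Copy on write: the first time a block is overwritten after a
        # snapshot, its original contents are preserved first.
        if self.snapshot is not None and block_no not in self.snapshot:
            self.snapshot[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data

    def read_snapshot(self, block_no):
        # The point-in-time view: preserved blocks come from the snapshot
        # area; untouched blocks are read straight from the live volume.
        if self.snapshot is not None and block_no in self.snapshot:
            return self.snapshot[block_no]
        return self.blocks[block_no]


vol = Volume(["a", "b", "c"])
vol.take_snapshot()
vol.write(1, "B")             # block 1's original "b" is preserved first
print(vol.read_snapshot(1))   # prints "b": the point-in-time copy is intact
print(vol.blocks[1])          # prints "B": the live volume sees the new data
```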
BTW, before I go further, I do strongly recommend reading this white paper. Josh has done a lot of research and his conclusions are well worth noting.
Anyway, my passion over the past few decades has largely been storage architectures. It still is. And when I look at what is available today from all the major vendors, I'm amused, surprised, and sometimes shocked at the number of different architectures available, obviously some better and more robust than others.
Architectures are at the heart of everything electronic we have in our lives. As an example, OSes have an architecture. Solaris, HP-UX, AIX, Windows, and z/OS all have different architectures. Windows, as an example, can run on Dell, Lenovo, Acer, Gateway, HPQ, and a myriad of other hunks of hardware. Similarly, that hardware can be powered by Intel or AMD. My point is that "architecture" in this context has little to do with hardware. Another example, if you'll allow me to be biased, is EMC: although they've changed the hardware (basically, chips and wires), their architecture, in my estimation, has NOT changed in the last couple of decades. As long as they cling to static cache assignments and "bin files", my argument is that the underlying "architecture" has remained unchanged (evolved and improved, yes) since the old Mosaic 2000 days (such a quaint name now that it's 2009). Changing the body parts does not change the architecture. V-Max may change this, but little is known about V-Max, so it's too early to tell (for me, at least). A quick sketch of what I mean by "static cache assignments" follows.
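To make the "static cache assignment" criticism concrete, here's a rough, hypothetical Python sketch. The sizes and names are invented for illustration; this describes the concept, not any vendor's actual configuration model. In a static design the cache split is fixed at configuration time, while in a dynamic design a single shared pool flexes with demand.

```python
# A rough illustration of static versus dynamic cache assignment.
# Sizes and names are invented for this sketch; they describe the concept,
# not any vendor's actual implementation.

TOTAL_CACHE_GB = 64

# Static assignment: cache is carved up at configuration time (think
# "bin file"); changing the split means regenerating the configuration.
static_partitions = {"director_0": 16, "director_1": 16,
                     "director_2": 16, "director_3": 16}

def static_fits(director, needed_gb):
    # A busy director cannot borrow from an idle one; its share is fixed.
    return needed_gb <= static_partitions[director]

# Dynamic assignment: one shared pool, allocated on demand, so a hot
# workload can use cache that idle workloads are not using.
class CachePool:
    def __init__(self, total_gb):
        self.free_gb = total_gb

    def allocate(self, needed_gb):
        if needed_gb <= self.free_gb:
            self.free_gb -= needed_gb
            return True
        return False

pool = CachePool(TOTAL_CACHE_GB)
print(static_fits("director_0", 24))  # False: stuck at its fixed 16 GB slice
print(pool.allocate(24))              # True: the shared pool flexes to demand
```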
At this point, I expect (and will entertain) the cadre of EMC loyalists to take me to the mat on this, but I'm publicly voicing an opinion I've had for some time. And to reiterate strongly, I am NOT saying that EMC has not made improvements over the years in function and performance; I am saying the architecture has not materially changed. This limits their ability to roll out advanced feature/function, which is one of the points Josh makes in his white paper.
I think it was Larry the Cable Guy who said that 47.2% of all statistics are made up on the spot. Well, I'm gonna say that 81.3% of those of you who read the white paper will like what it has to say, or at least learn from it. Try it out…
Comments (2)
Architecture evolution vs. revolution pretty much summarizes how IT technology has progressed over time. That's not a bad thing; it's good. The Unixes and Linuxes of today represent the evolution of over 30 years of OS development. The industry does experience tipping points as hardware vendor relationships and hardware development evolve; IBM's slow development cycle and mainframe/server priorities led Apple to abandon the PowerPC desktop chip. Not sure why you pick on EMC. EMC's Symmetrix platform has continued to evolve over the years… pretty much like the rest of the industry.
The true reason for EMC's V-Max development is to reduce hardware cost dramatically without fundamental changes to the Enginuity firmware.
I think that the frontend, cache, and backend portions of the firmware are now running as threads under the umbrella of a Linux operating system. This is supported by a virtual global cache, which is the aggregation of the real local caches of the V-Max engines. The implementation is based on a dual RapidIO fabric and special ASICs for cache access coordination. The V-Max architecture does not seem to be really new; it reflects the cache-coherent Non-Uniform Memory Access (ccNUMA) architecture found in some symmetric multiprocessor (SMP) designs of the past. It should be expected that EMC will use the "cheap" V-Max design for future CLARiiON and Celerra systems.
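In rough terms, the ccNUMA idea the commenter describes can be sketched like this. This is a toy Python illustration with invented names, not EMC's actual RapidIO/ASIC implementation: each engine owns a slice of one global address space, and a directory routes every access to the owning engine's local memory, so all engines see a single coherent cache even though the memory is physically distributed.

```python
# Toy sketch of a ccNUMA-style "virtual global cache": one global address
# space stitched together from per-engine local caches, with a directory
# deciding which engine owns each address. Purely illustrative.

class Engine:
    def __init__(self, engine_id, slots):
        self.engine_id = engine_id
        self.local_cache = [None] * slots   # this engine's real, local memory

class VirtualGlobalCache:
    def __init__(self, num_engines, slots_per_engine):
        self.engines = [Engine(i, slots_per_engine)
                        for i in range(num_engines)]
        self.slots_per_engine = slots_per_engine

    def _locate(self, address):
        # The "directory": map a global address to its owning engine and
        # the slot inside that engine's local cache.
        engine = self.engines[address // self.slots_per_engine]
        slot = address % self.slots_per_engine
        return engine, slot

    def read(self, address):
        # Any engine can read any address, but access is non-uniform:
        # remote addresses cost a hop through the fabric (not modeled here).
        engine, slot = self._locate(address)
        return engine.local_cache[slot]

    def write(self, address, value):
        engine, slot = self._locate(address)
        engine.local_cache[slot] = value


cache = VirtualGlobalCache(num_engines=4, slots_per_engine=1024)
cache.write(3000, "dirty page")   # lands in engine 2's local cache
print(cache.read(3000))           # any engine sees the same coherent view
```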