
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Frankenstorage Redefined

by Hu Yoshida on Apr 16, 2009

As defined by Chuck Hollis, “Frankenstorage appears to be — storage arrays, assembled from various parts from multiple vendors, brought to life by the magic of PowerPoint’s and press releases.”

Chuck was using this to deride the storage vendors who provide storage virtualization products, in particular HDS, IBM, and NetApp. He failed to recognize that storage virtualization has gone beyond the PowerPoints and press releases: it has real market acceptance, and adoption is accelerating in the current down economy. Customers are doing more with less through storage virtualization. Hitachi has shipped over 11,000 units of the USP and USP V products through direct and indirect channels since 2004, with over 50% of those having external storage behind them. Ask the other folks like Barry W. or Dave H. and see what they say…

The Symmetrix V-Max announcement seems to fit the Frankenstorage definition to a T, and the Virtual Matrix architecture appears to be the mother of all Frankenstorage.

The Symmetrix V-Max is a very, very large storage array, assembled from multiple vendors’ off-the-shelf parts, brought to life by the magic of PowerPoints, press releases, blogs, tweets, and video meetings. EMC marketing did a fantastic job, using all the latest marketing tools and sizzling hyperbole. But, as I pointed out in my blog post on the announcement, there is one big glaring deficiency: without storage virtualization, V-Max cannot help customers do more with less unless they rip out their current investment in storage.

Without storage virtualization they have abandoned their DMX and Clariion customers, who just bought products in the last few years and are stuck with static tiers of storage and the waste of over-allocated, unused storage. How long will it take to migrate data from a 1PB DMX4 to a 2PB Symmetrix V-Max without virtualization? Hundreds of applications will need to be disrupted while the migration occurs. It may take years to do that migration using external software.
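The arithmetic behind that question is sobering even under optimistic assumptions. A minimal back-of-the-envelope sketch (the 1 GB/s sustained copy rate is my own illustrative assumption, not a figure from either vendor, and it ignores host impact, verification passes, and change-tracking overhead):

```python
# Back-of-the-envelope migration time for the 1 PB example above.
PB = 10**15  # bytes

def migration_days(capacity_bytes, rate_bytes_per_sec):
    """Days of pure copy time at a given sustained rate."""
    return capacity_bytes / rate_bytes_per_sec / 86_400

# Assuming 1 GB/s of sustained migration bandwidth (illustrative only):
days = migration_days(1 * PB, 10**9)
print(f"{days:.1f} days")  # roughly 11.6 days of raw copy time
```

And that is only the raw copy; scheduling hundreds of application outage windows around it is what stretches the project into months or years.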

Fortunately there is a solution for that DMX4 user. He can take a slight disruption while the DMX4 is attached to a USP V. Once the connection is made, the USP V will discover the LUNs on the DMX4, present them through the USP V cache to the applications, and the applications can be turned back on while the data is migrated in the background. Once the DMX4 is presented through the USP V, it can benefit from common pooling, common management, dynamic tiering, dynamic provisioning, snap copies, and distance replication of the USP V.
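The flow described above can be sketched as a toy simulation. The class and method names here are illustrative only, not Hitachi product APIs; the point is the ordering: discover the external LUNs, present them through the controller's cache, resume the applications, then copy data in the background.

```python
class ExternalLun:
    """A LUN living on the old array (e.g. the DMX4 in the example)."""
    def __init__(self, lun_id, blocks):
        self.lun_id = lun_id
        self.blocks = blocks  # block number -> data

class VirtualizationController:
    """Stands in for the USP V: presents external LUNs through its own
    cache while copying their data to internal storage in the background."""
    def __init__(self):
        self.internal = {}  # lun_id -> {block: data} already migrated
        self.external = {}  # lun_id -> ExternalLun still being served

    def discover(self, lun):
        # Step 1: the external LUN becomes visible through the controller;
        # applications can be turned back on immediately.
        self.external[lun.lun_id] = lun
        self.internal[lun.lun_id] = {}

    def read(self, lun_id, block):
        # Serve from the internal copy if migrated, else pass through.
        migrated = self.internal[lun_id]
        if block in migrated:
            return migrated[block]
        return self.external[lun_id].blocks[block]

    def migrate_background(self, lun_id):
        # Step 2: copy blocks while the application stays online.
        for block, data in self.external[lun_id].blocks.items():
            self.internal[lun_id].setdefault(block, data)

old = ExternalLun("dmx4-lun0", {0: b"app", 1: b"data"})
usp = VirtualizationController()
usp.discover(old)
assert usp.read("dmx4-lun0", 0) == b"app"   # served via pass-through
usp.migrate_background("dmx4-lun0")
assert usp.read("dmx4-lun0", 1) == b"data"  # now served internally
```

Once every block is internal, the old array can be unplugged without the applications ever noticing.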

You cannot continue to build larger and larger storage arrays without the ability to dynamically migrate data across them, and that takes storage virtualization. EMC says that they introduce new products every two years. Do you want to go through migrations every two years when the capacity doubles with every migration?


Comments (14)

Edward Branley on 16 Apr 2009 at 6:47 pm

When I teach USP/USP V replication, I also use the Frankenstein analogy. It was easy for the good Doctor to put the monster together, just as it’s easy for us to put multi-TB file systems together, but, like Frankenstein, our “monsters” often turn on us and kill us in the end :-)

the storage anarchist on 16 Apr 2009 at 6:58 pm

With DMX and V-Max, customers can take the same “slight disruption”, attach the existing storage to the new V-Max, and present the old LUNs from the V-Max, and turn the applications back on while the data is migrated in the background.


Moving up to 1024 LUNs simultaneously, at a data rate in excess of 2.5x faster than the USP-V.

It’s called Symmetrix Open Replicator/Live Migration. Been shipping for about 5 years. Since the ONLY reason most people use Hitachi Virtualization is to migrate data into their USP-V, OR/LM is basically an equivalent solution. Only better.

And if the customer has PowerPath 5.x installed, they can enable PowerPath Migration Enabler which will even make the insertion and cut-over to the newly relocated LUNs 100% nondisruptive – ZERO application downtime.

Like I said in my other comment (that you haven’t moderated yet), V-Max isn’t copying USP-V, it just does similar things BETTER, FASTER and EASIER…

Hu Yoshida on 19 Apr 2009 at 3:02 pm

Hello Barry, in Chuck’s original Frankenstorage post you did a great job of defining the benefits of storage virtualization. One of the benefits was dynamic migration between different external storage arrays – not just between different media types in the same array, as you do in the V-Max. Since EMC can do that with Open Replicator, why don’t you open it up and provide full storage virtualization like Hitachi does, so that customers could realize all the benefits of storage virtualization that you described? I’m sure your DMX users would appreciate the ability to have the dynamic migration and Virtual Provisioning features of the V-Max without replacing the DMX that they just bought.
Is it because your architecture is not capable of this, or because you want your customers to buy new iron every two years?

Ron Singler on 20 Apr 2009 at 5:49 am

“And if the customer has PowerPath 5.x installed, they can enable PowerPath Migration Enabler which will even make the insertion and cut-over to the newly relocated LUNs 100% nondisruptive – ZERO application downtime.”

Don’t you have to reboot when you install PowerPath? Have I missed something, or isn’t that an interruption to the application?

the storage anarchist on 20 Apr 2009 at 2:59 pm

Hu – In my experience, customers rarely find it economical to utilize external storage in the manner you promote so vehemently. The practical reality is that fewer than 50% of USP-V customers use UVM, and the vast majority of them use it only to service migrations into the USP-V.

Everyone knows a Trojan Horse when they see one.

Now, I know that some actually do USE UVM full time, and hats off to them. But most figure out that the costs of maintaining old hardware and purchasing/maintaining all the extra FC ports and fabrics it takes to connect them to the USP-V don’t justify the benefits. Not to mention all the limitations of your “seamless” mobility – like having to stop replication sessions when relocating volumes, etc.

I hear from customers all the time that they prefer to utilize the in-the-box tiering approach championed by Symmetrix, putting their lower-tier data on slower SATA drives (nearly a year sooner on DMX4 than they could on USP-V, by the way). In fact, IIRC, David Merrill’s last post before his unexplained hiatus actually showed an example where UVM virtualizing external storage actually cost MORE – I still have a copy if you’d like to refresh your memory (it seems HDS has removed it from David’s blog site for some reason).

Customers don’t have to buy new Symmetrix arrays every 2 years – EMC just gives them new state-of-the-art platforms to choose for their NEXT purchase, with forwards compatibility and backwards interoperability. In fact, the DMX3 and DMX4 offer the exact same software features (excepting HW differences).

Sure, if customers want the NEW features, they buy new – but you’ve done the same thing yourselves with USP –> USP-V and even the AMS. Pot & Kettle here again…

And indeed, the V-Max architecture enables all kinds of new features that you haven’t heard about yet – so stay tuned!

Ron – If PP 5.x is installed and in use, no reboot is required to enable PPME.

John F. on 20 Apr 2009 at 7:22 pm

Hi Barry,

I’m really trying to follow you here. I’ll restate this – tell me if I got it right…

1. User powers down DMX.
2. User removes shelves from DMX
3. User powers down V-MAX
4. User installs shelves previously removed from DMX in V-MAX (are they physically compatible? What about Clariion?)
5. User powers up V-MAX
6. User re-presents old LUNs from old shelves from V-MAX this time.
7. User reconfigures each host to connect to old luns now presented on the V-MAX
8. User starts some background migration process to move the Luns from the old disk to the new disk within the V-MAX.
9. Assume V-max repoints everything internally, no action required.
10. User powers down V-MAX.
11. User removes old DMX shelves.
12. User powers on V-MAX.

Voila! Rube Goldberg would be proud.

Is that right? That sounds awfully painful. What are the steps for the PowerPath migrator approach? What are the advantages/drawbacks of each approach? Is the PowerPath approach simpler/less disruptive? What is the comparative cost of each approach in hardware, software and services?



Claus on 20 Apr 2009 at 10:53 pm

Barry, you seem to think you know a lot about our customers and how they use our storage, but again, you’re way off the mark. I’m sure you do hear from customers that prefer the “tiers-in-a-box” approach since, surprise, they’re your customers!! But at least we offer customers an option; you don’t. Isn’t claiming the “we offered internal SATA first” defense a contradiction of your comment on my blog? What is interesting is that, in our customer base, where they can choose where to insert SATA, they predominantly prefer to use external tiering, not internal tiering. I think we made the right decision and you made the only “decision” available to you.

You seem to like to criticize limitations in our technology when you and your company don’t even have the technology to begin with. So you criticize storage virtualization when you don’t have it yourself. You criticize non-disruptive and heterogeneous data mobility when you don’t have it yourself. You criticize a limitation in our heterogeneous replication technology when you guys don’t even have equivalent technology. And you criticize the whole notion of storage economics because, well, it makes you look bad. You’re like K-Mart whining that Neiman Marcus doesn’t support the “Blue light special” standard. Give us all a break, here.

And finally, are you finally admitting that the V-Max announcement was a roadmap announcement, and not a product announcement? Is that what you meant by the “stay tuned” comment? Really, why would anyone want to “stay tuned” when most of these promises are already available today?

Great dialogue, Barry, let’s keep it up!! Seriously!!

TerryM on 20 Apr 2009 at 11:05 pm

In terms of full disclosure let me say that I work for a technical consulting company and we are partnered with Hitachi Data Systems. Early on I heard EMC supporters frequently knocking virtualization as an overall concept. Eventually EMC began to position their own virtualization solution and their battle cry turned to how virtualization should be in the fabric not in the controller. Now it seems we are back to EMC downplaying the value of virtualization as a whole.
Based on my experience, virtualization can be very valuable and HDS has the most fully developed solution. This is not to say that virtualization is appropriate in every case but I have found a number of places where it fits very well.
On at least two occasions we have had customers that needed to expand their existing storage environment. Unfortunately they did not have the necessary adjacent floor space to install the upgrade. They did however have space across the room. Using virtualization we were able to create a single storage environment and meet the customers’ requirements without replacing or physically moving the current environment.
Disaster Recovery is another area where we have heavily leveraged virtualization. Many of our customers need to replicate the data from their production storage environments however they do not need the same level of performance at the remote site. Using virtualization and lower cost modular storage at the remote site meets this need.
The third place we use virtualization is to add tier 2 and tier 3 storage to the environment. While we could just use larger capacity drives internal to the enterprise array, we have found that it is more cost effective to either leverage existing midrange storage or to deploy new midrange arrays for this purpose. Obviously I would not suggest that it always works out that way, but in most cases it seems to because we regularly compete with EMC and when all things are considered our pricing is better.
I would also like to point out that virtualization does allow you to utilize new technologies more rapidly than you otherwise would be able to. For example it was correctly stated that SATA drives were available for the DMX before they were offered on the USP. What was left out was that I always had the option of virtualizing an array with SATA disk and leveraging the technology that way. More recently the new HDS Modular line was introduced that uses SAS drives and again I can virtualize these arrays. My point is that using virtualization gives me a lot more options. I can take advantage of more technologies, more quickly than I could without it and it is all done in a very reliable, high performing platform.

the storage anarchist on 21 Apr 2009 at 12:31 pm

John F. – Your 12-step program is abjectly incorrect. Apparently you did not read my first comment above, or if you did, you totally missed the point.

Perhaps I communicated poorly.

Customers can migrate volumes into a DMX or a V-Max using exactly the same steps as they would use to migrate volumes into a USP-V. Symmetrix Open Replicator/Live Migrator performs exactly the same process as Hu describes above, with exactly the same “slight disruption” – or no disruption at all if they have PowerPath Migration Enabler installed.

Claus – I’m not here whining about anything. I’m here trying to help your readers get the facts.

OK – so I blew one…and I’m man enough to admit it. Gloat if you must, but remember, “First” only matters as long as it is also “Only.”

And the fact is: Hitachi doesn’t hold some mystical superiority over the world of storage simply because you’ve put virtualization of 3rd party storage into your array and nobody else has.

If that’s what customers want, they know where to get it.

And I’ve not argued that it is even a bad idea, merely pointed out that economically it is not always as beneficial as you guys assert. Over and over and over and over and over and over and…

David Merrill actually admitted (blogged) that the economics don’t always work back in 2007, and that got him an 18-month vacation from blogging. So I understand why you can’t admit that virtualizing 3rd party storage isn’t always cost effective.

But my point is if customers simply want to migrate data into their new storage platform with a minimum of disruption – one that is better, faster and easier than UVM+TSM, then, well, you now have competition.

And with Symmetrix, customers often don’t have to sacrifice performance or accept complexity to get what they want. To one of TerryM’s examples, Open Replicator is routinely used to migrate and synchronize data to/from non-Symmetrix arrays. Native SATA support on both Symm and CLARiiON doesn’t suffer the read-after-every-write performance penalty that Hitachi forced on customers. And both V-Max V-LUN and Open Replicator can move/relocate more than a hundred times as many volumes concurrently as your vaunted USP-V.

See – I’m not criticizing because EMC doesn’t have it and Hitachi does…once again you’ve over-stated reality. Instead, I am trying to help your readers get a balanced understanding that although Symmetrix may do things differently, in many key areas that have direct impact to the bottom line, Symmetrix does them better, faster and easier.

And Claus…you and I both know that new architectural approaches by intent open new opportunities for the future. Let’s not insult your readers with the negative insinuations and hilarious claims that you’re already delivering everything anybody needs today.

Kinda contradicts the very notion of “great dialogue.”

John F on 21 Apr 2009 at 4:18 pm

Thanks Barry,

No power off/power on at the end of the process. Please excuse me for my ignorance on the subject. If you could point me to a whitepaper detailing the process, that would be great. Better yet, post the steps. If this is not the appropriate place, you know where to reach me. I’m still trying to get my head around all this: what the processes are, how much effort, and how much downtime they entail. I would definitely like to see the steps for the PowerPath migrator as well, especially if that route simplifies the process and causes less disruption.

Thanks again


the storage anarchist on 22 Apr 2009 at 6:47 am

John (et al) –

These should satisfy your curiosity:



Hu Yoshida on 22 Apr 2009 at 10:31 am

Barry, I agree with you and with David Merrill that external storage virtualization may not make economic sense in every scenario, but in most cases it does. At that time we had a different set of marketing people who made the decision to pull the post. It took us some time to correct this and for David to find time in his busy schedule to restart his posting.

Since then, the case for external storage virtualization has been strengthened by the addition of Dynamic Provisioning, which can buy back 40% or more of allocated capacity, increase performance with wide striping, and reduce the time to provision new allocation requests.
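As an illustration of where that buy-back comes from (the volume figures below are made-up examples, not HDS data): thin provisioning reclaims the gap between what applications have allocated and what they have actually written.

```python
def reclaimable_fraction(allocated_tb, written_tb):
    """Fraction of allocated capacity never written, hence reclaimable."""
    return (allocated_tb - written_tb) / allocated_tb

# e.g. 100 TB allocated across volumes, only 60 TB actually written:
frac = reclaimable_fraction(100, 60)
print(f"{frac:.0%} of allocated capacity reclaimed")  # prints "40% ..."
```

Traditional "fat" provisioning pays for that unwritten 40 TB up front; with Dynamic Provisioning it stays in the shared pool until something actually writes to it.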

While payback for additional software licenses can be covered through the CAPEX and OPEX savings of external storage virtualization, we realize that license fees inhibit many customers from attaching their 3rd party storage to the USP V for virtualization. In order to help them overcome this concern, especially during this time of economic uncertainty, we have announced a promotion through the end of this year to provide a free license to virtualize third party storage. In addition we will provide a free 10 TB license for Dynamic Provisioning, a free In-System Replication license, and a free Tiered Storage Manager license for the third party capacity that is virtualized.

Please see my blog post today on this promotion or the announcement letter.

John F. on 22 Apr 2009 at 1:53 pm

Thanks very much Barry. Looks like I have some reading to do tonight.

See ya later.


Ramesh Rajan on 22 Apr 2009 at 2:28 pm


I would like to differ with your statement below on a technology approach Hitachi takes.

Native SATA support on both Symm and CLARiiON doesn’t suffer the read-after-every-write performance penalty that Hitachi forced on customers.

- The read after every write was introduced as a protective measure by Hitachi to make sure every write to SATA drives is processed successfully and verified. It is ignorant to call that out as a penalty; if I am a customer, I will be mad if I lose any data.

Also, you might have seen in the marketplace that when we talk SATA, we don’t talk performance.

In that sense, can you also explain why DMX arrays are not subjected to SPC benchmarking if everything is all right with the performance of the DMX series? I would be very interested to see if EMC can take the right step of subjecting V-MAX to independent benchmarking.
