You have got to be kidding me – UPDATED
by Michael Hay on Jun 27, 2009
I recently ran across this post on Storagezilla. Well, having looked at their approach to file tiering, I have to say: nice try. As I’ve talked about in the past, I was there when an external Acopia (now a product of F5) reference customer basically stated they would throw out Acopia in a second if a NAS vendor would put file system virtualization and migration into the NAS device itself. So with the approach that EMC has taken, a customer would need to buy an entirely new product to perform the data movement off of the Celerra.
When we look at how HNAS can tier either internally or off to external systems using NFS, it is a far more robust solution to the problem because there are fewer components. Add in the Hitachi Data Discovery Suite and you get an advanced ILM offering that uses full content indexing, federated searching, and distributed task management to fulfill either internal or external tiering. Further, I feel justified in saying this is the right approach: just look at the most successful approach to block virtualization, which is the in-the-box virtualization and data movement that the USP-V and the NetApp vSeries both employ. We’ve shipped thousands of controllers that take this approach. With our HNAS, we’ve intentionally copied a winning strategy; see the image below.
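To illustrate the in-device tiering idea in the abstract, here is a minimal sketch of a policy engine that moves cold files to a secondary tier and leaves a stub behind so reads are redirected transparently. All names here are hypothetical; this is not HNAS code, just a toy model of the stub-and-migrate pattern.

```python
import json
import os
import shutil
import time

# Hypothetical stub suffix; real filers keep this state inside the file system.
STUB_MARKER = ".tiered.json"

def migrate_cold_files(primary_dir, secondary_dir, max_age_days=30):
    """Move files untouched for max_age_days to the secondary tier,
    leaving a small JSON stub behind so reads can be redirected."""
    cutoff = time.time() - max_age_days * 86400
    for name in os.listdir(primary_dir):
        src = os.path.join(primary_dir, name)
        if not os.path.isfile(src) or name.endswith(STUB_MARKER):
            continue
        if os.path.getmtime(src) < cutoff:
            dst = os.path.join(secondary_dir, name)
            shutil.move(src, dst)  # the data movement happens here, in-device
            with open(src + STUB_MARKER, "w") as stub:
                json.dump({"location": dst}, stub)

def read_file(primary_dir, name):
    """Read a file, transparently following the stub if it was tiered."""
    path = os.path.join(primary_dir, name)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    with open(path + STUB_MARKER) as stub:  # follow the stub to tier two
        location = json.load(stub)["location"]
    with open(location, "rb") as f:
        return f.read()
```

The point of the sketch is that the same component that serves the data also moves it, so no external product sits in the data path.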
As to how we compare to the competition, see the image below comparing the two approaches.
Like I said, you have to be kidding me: EMC is basically using RAINFinity to emulate the copy command!
Storagezilla’s Mark posted an interesting comment on this post. Since there might be some question as to where I got my facts, here is the source, which is publicly available on EMC’s site: http://www.emc.com/collateral/hardware/specification-sheet/h1685-celerra-filemover.pdf. Note that I would post a screenshot from the materials, but I don’t want to violate EMC’s copyright; that’s why I drew the pictures above and added them to the post. I’ve asked Mark for a publicly available document, a tcpdump, or other factual evidence that proves his point, at which point I’d be most happy to make a correction. Until then my logic stands!
He also raised an implied question about BlueArc, which I answered in the comments section.
Comments (8)
To paraphrase a line from Annie Hall:
“I’ve heard what you’ve been saying and you know nothing of my work.”
If you had read the post you’d have seen that it is Celerra File Mover which does all the data movement. It can move data to everything from primary storage to linear tape. In my example I used Rainfinity as the policy engine because it’s an EMC product, but there are multiple third-party OEMs who support File Mover as a migration technology, since the functionality was added to DART back in 5.3.
I’m not sure what NAS product HDS would have been selling back in the 5.3 days, since their NAS partner appears to change like the seasons, but it might not have been the current BlueArc rebrand.
Okay Annie, where I get my information is: http://www.emc.com/collateral/hardware/specification-sheet/h1685-celerra-filemover.pdf, page 2. This PDF clearly shows that the migration I/O is handled by the external ISV software, such as, well, RAINFinity. Sure, RAINFinity can be a virtual appliance, but unless the documentation is inaccurate or damn misleading, the I/O is not handled by DART directly. If you would like to point me to another publicly available document, a tcpdump, or other evidence, I’ll be happy to update and correct; until then your statement is, well, factually inaccurate.
As to BlueArc: as I’ve explained, our relationship with them is vastly different from what we’ve had in the past. We have an equity stake in the company, we have engineers at BlueArc’s facility in the UK with source code access, and we enjoy a very strong relationship; heck, Shmuel is my personal mentor.
As it turns out, you are both correct. When archiving files from the Celerra production file system, the data does flow through the policy and archiving application. When a file is read back, either to land in the Celerra production file system again or simply to pass through in transit to clients, the Celerra will read the data directly from the archive store if that store is accessible via NFS or CIFS, or read it through the policy and archiving application if the archive store is something like a Centera, tape, or anything else. It seems the whitepaper could do with improvement to make this clear.
Chris, thanks for the comment. Please let me know when you have updated the document and I’ll update the post here with the revised URL.
The document you referenced tries to explain this in the second paragraph on page 3, where it says: “When clients request a file that has been migrated to secondary storage, the Celerra system will access the file directly from secondary storage to satisfy the client request.”
Tanya also points this out at about 3:25 into http://www.youtube.com/watch?v=lS_RuYuxyww – The arrow Tanya drew should have gone through the Celerra to the client.
Admittedly, neither of these sources points out the data-path difference between NAS as secondary storage and anything else.
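That data-path distinction can be sketched in a few lines. The names below are hypothetical and this is not actual Celerra code; it is only an illustration of a read path that branches on the archive type, as described above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Archive:
    """Toy model of a secondary tier (hypothetical, for illustration)."""
    protocol: str                          # "NFS", "CIFS", "CENTERA", "TAPE", ...
    read: Callable[[str], bytes]           # direct read from the archive share
    app_retrieve: Callable[[str], bytes]   # read proxied via the policy/archiving app

def recall(location: str, archive: Archive) -> bytes:
    """Return a migrated file's data, choosing the path by archive type."""
    if archive.protocol in ("NFS", "CIFS"):
        # Direct path: the filer reads the archive share itself.
        return archive.read(location)
    # Indirect path: Centera, tape, etc. go through the archiving application.
    return archive.app_retrieve(location)
```

So for NFS/CIFS archives no external product sits in the read path, while for other archive types the policy application does.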
Chris, I understand. When do you imagine that the points can be clarified or new material will be posted so that I can update the post accordingly?