
Hu Yoshida's Blog - Vice President | Chief Technology Officer


What’s more cost-effective than a $30k virtualization engine?

by Hu Yoshida on Oct 30, 2009

SearchStorage ANZ’s Simon Sharwood posted an article referencing a NetApp presentation that appeared on a public RSS feed NetApp provides for its user community.

According to Sharwood’s article, the presentation was dated 2008 and published October 28, 2009. It was a marketing presentation that gave guidance on selling NetApp’s V-Series storage virtualization product, along with an assessment of its competition.

Based on the slides, NetApp’s strongest competitor for the V-Series product was HDS, and the reason given was price. That may be surprising, since a V-Series can start at $30k while a USP V or VM starts at over $100k.

So how can a USP V/VM be lower in price than a V-Series?

The difference lies in the architecture of the USP V and its ability to scale up to meet the increasing workloads of server and storage consolidation.

One of the main reasons for storage virtualization is server and storage consolidation. Consolidation requires a storage system that can scale up, and that requires a tightly coupled multiprocessor virtualization engine.

The V-Series is essentially a gFiler, a NetApp head that sits in front of other storage systems. It uses a single processor that does everything, including mapping block I/O through the WAFL file system onto external storage. Additional functions like snap copies and dedupe take cycles from that same processor, and more memory and CPU power are required just to run the ONTAP operating system.

The USP V starts with 32 processors that are tightly coupled through a single 64 GB global cache with write protection. It is a true multiprocessor: some processors handle the front-end ports, others the back-end ports, and still others handle replication. An internal switch can switch path connections to the data cache, providing high availability and performance. A separate control store handles the metadata mapping for dynamic tiering and dynamic provisioning without impacting data cache performance.
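
To make the contrast concrete, here is a toy utilization sketch. It is not based on any published benchmark; every demand figure in it is a hypothetical assumption, chosen only to illustrate why dedicating processors to separate functions avoids the contention a single-head design faces:

```python
# Toy utilization sketch -- all demand figures below are hypothetical
# assumptions for illustration, not measured values from either product.

# Hypothetical demand per function, as a fraction of one processor.
workloads = {
    "front_end_ports": 0.5,
    "back_end_ports": 0.4,
    "replication": 0.2,
    "snapshots_and_dedupe": 0.3,
}

# Single-head design: every function competes for the same processor.
total = sum(workloads.values())
print(f"single processor would need {total:.0%} of one CPU -> saturated")

# Partitioned design: each function runs on its own dedicated processor,
# so one function's load does not degrade another's.
for name, load in workloads.items():
    print(f"{name}: {load:.0%} of a dedicated processor -> headroom remains")
```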

So, in order to compete with the USP V on any significant workload at equivalent performance, NetApp will have to sell many more V-Series filers. Each additional V-Series adds management cost, software licensing and maintenance, more network ports and network management, and more power, cooling, and floor space.
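
As a rough illustration, the arithmetic might look like the sketch below. The $30k and $100k entry prices come from the post; the per-head overhead figure and the number of heads needed are purely hypothetical assumptions:

```python
# Back-of-the-envelope cost comparison. The two base prices come from the
# post above; PER_HEAD_OVERHEAD and the head count are assumed for illustration.

VSERIES_BASE = 30_000        # entry price of one V-Series head (from the post)
USPV_BASE = 100_000          # entry price of a USP V/VM (from the post)
PER_HEAD_OVERHEAD = 15_000   # assumed annual cost per head: licenses,
                             # maintenance, network ports, power/cooling/space

def scale_out_cost(heads: int, years: int) -> int:
    """Total cost when capacity is added by buying more filer heads."""
    return heads * (VSERIES_BASE + PER_HEAD_OVERHEAD * years)

def scale_up_cost(years: int) -> int:
    """Total cost for a single frame that scales up internally."""
    return USPV_BASE + PER_HEAD_OVERHEAD * years

# If a consolidated workload needed, say, five heads over three years:
print(scale_out_cost(heads=5, years=3))  # 375000
print(scale_up_cost(years=3))            # 145000
```

Under these assumed numbers the crossover comes quickly; the point is simply that per-head overheads multiply with each filer, while a scale-up frame pays them once.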

This is a good reminder that companies that require scalability and performance should take a hard look at their needs before buying. As we’ve seen here, the list price of a system doesn’t always reflect the final cost.


Comments (3)

Biju Krishnan on 06 Nov 2009 at 11:29 am

Yoshida San,

There is no system that can beat the HDS USP-V on features. There are many who offer similar features, but that’s merely in the catalog. One should not forget the pain involved in administering a solution, since that contributes to longer turnaround times (TAT) in daily operations.

So virtualizing a SAN device with a vFiler is like adding an additional layer of management. We had this problem when the NetApp logs showed an error on a NetApp disk that was actually in a SAN enclosure. Imagine the time we spent correlating all of this to find the root cause. Precious man-hours wasted, and add that to the OPEX cost, since I was paid by the hour :)

Having said that, I would still like the HDS team to put more effort into post-sales support to help customers at no extra cost (during AMC). This can save the product’s image from being degraded by incorrect configuration on the customer’s side.

Chris on 09 Nov 2009 at 5:43 pm

Hu, well said. ROI for a solution is about the solution as a whole, not one aspect or particular function. Enabling multiple processors to segregate and scale individual workloads is the essence of enterprise computing. The ability to scale each process independent of the others makes it easier to spend money where it counts.

Hu Yoshida on 10 Nov 2009 at 7:59 am

Thanks for the comments, Biju and Chris.
We need to remind our customers that enterprise storage must scale up to meet the needs of the servers and networks as they scale up with multi-core processors, multiple OS instances, and increasing network bandwidth.
