
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Time to Focus on New Technologies and Storage Computers

by Hu Yoshida on Mar 9, 2011

IDC and Gartner have made it official.

To no one’s great surprise, 2010 was a great year for storage. All the major storage vendors posted revenue growth, and we saw billion dollar bidding wars for storage technology companies with barely $100 million in revenue. While storage capacity purchases slowed during the 2008/2009 downturn, data growth itself did not, and 2010 was a catch-up year for storage capacity. In 2011, it is time to focus less on acquiring capacity and more on the new technologies that will be needed for the next 3 to 5 years.

These new technologies include the integration of storage and server virtualization, where more of the workload for moving, formatting, locking, encrypting, shredding, provisioning, reclaiming, and searching data will be offloaded from the servers, hypervisors, and file systems to the processors and memory within storage systems. This adds a new dimension to storage systems, requiring them to evolve from commodity storage containers into high function enterprise storage computers.
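The hypervisor integration already under way, such as VMware’s vStorage APIs for Array Integration (VAAI), is one concrete instance: the hypervisor hands bulk copy, block zeroing, and locking down to the array instead of doing the work itself. Here is a minimal sketch of the idea in Python; the interface is entirely hypothetical, not a real array or VAAI API:

```python
# Hypothetical sketch: host-side copy vs. copy offloaded to the array.
# None of these names are real APIs; they only show where the data moves.

class StorageComputer:
    """Toy array whose internal processors do the work."""

    def __init__(self):
        self.luns = {}  # lun name -> bytearray

    def create_lun(self, lun, size):
        self.luns[lun] = bytearray(size)

    def read(self, lun, offset, length):
        return self.luns[lun][offset:offset + length]

    def write(self, lun, offset, data):
        self.luns[lun][offset:offset + len(data)] = data

    def xcopy(self, src, dst, offset, length):
        # Offloaded copy: the data moves inside the array; the host
        # sends one command instead of shuttling every block itself.
        self.luns[dst][offset:offset + length] = \
            self.luns[src][offset:offset + length]


def host_side_copy(array, src, dst, total, chunk=4096):
    # Traditional path: every byte crosses the SAN twice (read, then
    # write) and burns host CPU cycles and HBA bandwidth.
    for off in range(0, total, chunk):
        array.write(dst, off, array.read(src, off, chunk))


def offloaded_copy(array, src, dst, total):
    # Storage-computer path: one command, zero host data movement.
    array.xcopy(src, dst, 0, total)


array = StorageComputer()
array.create_lun("src", 1 << 20)
array.create_lun("dst", 1 << 20)
host_side_copy(array, "src", "dst", 1 << 20)  # 512 host commands
offloaded_copy(array, "src", "dst", 1 << 20)  # 1 command
```

The same pattern applies to the other operations listed above: the more of them the array’s own processors can execute, the less the servers and hypervisors have to touch the data.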

This trend was recognized by Dave Raffo of SearchStorage in his March 4, 2011 summary of the IDC quarterly disk tracker research. In “Big data storage systems rallied in 2010”, he notes that high-end systems finished 2010 with a 30.2% market share.

Tiered Storage is Key

The need for intelligent tier 1 enterprise storage systems has returned, but with a different twist. Now, instead of buying a fully configured enterprise storage system, it is possible to buy just a fraction of the expensive tier 1 storage and fill the rest of your capacity needs with lower cost tiers of storage, without sacrificing storage computer functionality. This is enabled by storage virtualization, which is just one of the many features of a storage computer.
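Page-level tiering of the kind we introduced with Hitachi Dynamic Tiering can be pictured as a simple placement problem: keep the busiest pages on the small tier 1 slice and let everything else spill to the cheaper tiers. Here is a toy version of that policy in Python; it is illustrative only, not the actual HDT algorithm:

```python
# Toy tier placement: hottest pages fill tier 1 first, the rest spill
# down. Illustrative only; real tiering also weighs migration cost,
# recency, and I/O patterns.

def place_pages(page_io_counts, tier_capacities):
    """page_io_counts: {page_id: I/Os observed over the sample window}.
    tier_capacities: pages each tier can hold, tier 1 first."""
    hottest_first = sorted(page_io_counts, key=page_io_counts.get,
                           reverse=True)
    placement, cursor = {}, 0
    for tier, capacity in enumerate(tier_capacities, start=1):
        placement[tier] = hottest_first[cursor:cursor + capacity]
        cursor += capacity
    return placement


# 40% of the pages on tier 1 capture 96% of the I/O in this sample.
io = {"p1": 900, "p2": 850, "p3": 40, "p4": 30, "p5": 5}
print(place_pages(io, tier_capacities=[2, 2, 1]))
# {1: ['p1', 'p2'], 2: ['p3', 'p4'], 3: ['p5']}
```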

The lower cost tiers of storage can be internal or external modular storage systems. They can even be existing storage assets that still have useful life. As disks grow to multiple TBs and we develop more efficient ways to groom them, maybe we don’t need to swap them out every three to five years. Maybe we can extend their economic life to five to seven years or more, as long as we keep the intelligent front end current with the latest technology.

This will change the way analysts like IDC and Gartner track enterprise and modular storage revenues. Enterprise systems might be 20% to 40% tier 1, with the remaining 60% to 80% in tier 2 and tier 3 modular storage. There will still be storage vendors that sell fully populated enterprise systems without virtualizing modular or legacy storage behind them, and they will show a greater revenue stream than tiered enterprise systems. It will also be hard to tell standalone modular storage revenues from virtualized modular storage pools.

However the analysts end up measuring it, customers will have the benefit of enterprise storage computers with a blended cost tier of sustainable capacity.
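The economics behind that blend are easy to sketch. With illustrative prices, which are assumptions for the arithmetic rather than quoted figures, a 20/30/50 split across three tiers looks like this:

```python
# Back-of-the-envelope blended cost per GB for a virtualized pool.
# The $/GB figures are assumptions for illustration, not vendor pricing.

tiers = [
    # (share of capacity, assumed $/GB)
    (0.20, 15.00),  # tier 1: enterprise
    (0.30, 5.00),   # tier 2: midrange modular
    (0.50, 2.00),   # tier 3: high-capacity / legacy assets
]

blended = sum(share * cost for share, cost in tiers)
print(f"blended: ${blended:.2f}/GB vs ${tiers[0][1]:.2f}/GB all tier 1")
# blended: $5.50/GB vs $15.00/GB all tier 1
```

However the shares shift, the point is the same: the expensive intelligence sits in front, and most of the capacity behind it is bought at tier 2 and tier 3 prices.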

Do you agree or disagree with the direction toward enterprise storage computers? What benefits do you see them providing you? I’d like to carry the discussion into the comments.


Comments (5)

Mauricio Daher on 17 Mar 2011 at 11:22 am

Hi Hu,

I’m glad you still have a job (LOL, previous commentary). My comments here have more to do with HDS’s direction in the past couple of years with respect to deduping capabilities within your storage. As you said, tier 2/3 holds 80% and growing in the enterprise. Do you think deduping is an important feature to help customers face the explosion of data in their data centers? Other top tier storage vendors have embraced this, and some have done it better than others. Thank you.

Kind regards,

Mauricio Daher

Hu Yoshida on 21 Mar 2011 at 11:14 am

Hello Mauricio,

Thanks for your comment. Dedupe is an important tool for capacity optimization and there are many solutions that are in the marketplace. The most effective use of deduplication is with backup data and archives. I will follow up on your comment in a post to my blog in the next week.
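To illustrate why backup data dedupes so well: successive full backups repeat almost all of their content, so storing each unique chunk only once collapses the footprint. A minimal sketch with fixed-size chunking follows; it is illustrative only, not how any particular product implements deduplication:

```python
# Minimal dedupe sketch: fixed-size chunks, identified by SHA-256.
# Illustrative only; production systems typically use variable-size
# (content-defined) chunking and far more careful indexing.

import hashlib
import os

def dedupe_ratio(backups, chunk_size=4096):
    """backups: iterable of byte strings, e.g. successive full backups."""
    logical = stored = 0
    seen = set()
    for image in backups:
        for i in range(0, len(image), chunk_size):
            chunk = image[i:i + chunk_size]
            logical += len(chunk)
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:  # store each unique chunk once
                seen.add(digest)
                stored += len(chunk)
    return logical / stored


# Two nightly fulls that differ by a single changed block:
chunks = [os.urandom(4096) for _ in range(100)]
monday = b"".join(chunks)
chunks[50] = os.urandom(4096)  # one block changed during the day
tuesday = b"".join(chunks)
print(f"{dedupe_ratio([monday, tuesday]):.2f}x")  # ~1.98x
```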

Sindhia Naidu on 23 Mar 2011 at 5:23 am

Hi Hu,
Looking forward to seeing HDT implemented in modular. Within the small (modular) space, customers don’t have the budget or knowledge to implement application aware archiving. In these smaller environments, power or space is not a major issue. Also, there seems to be little in-depth knowledge on 2TB SAS drives; any figures, whitepapers, and/or blogs would be highly appreciated.

[...] of just using storage as containers of data, storage must now become storage computers with a global pool of processors that is separate from the port processors that handle the front [...]

[...] This topic will be approached in an upcoming series of blog posts, since the topics and measurement systems will be very diverse. Hu Yoshida talks all the time about the storage computer, most recently in this post. [...]
