
Yet another storage blog, and how many angels can dance on the head of a pin?

by Hu Yoshida on Oct 20, 2005

The best part of my job as CTO of Hitachi Data Systems is meeting with customers, developers, vendors, regulators, and analysts to share ideas and try to understand what is and will be required to store, secure, and access data. In gathering this data I have become the unofficial Chief Traveling Officer for HDS. To be more productive in gathering this information, and to communicate with a larger community, I have decided to launch this blog and share my observations in the hope of getting your feedback. Any thoughts on storage, data, content, and yes, information as it relates to storage, data, and content are welcomed and appreciated.

To start this conversation, I begin with the following observation.

At one time the great minds argued about how many angels could dance upon the head of a pin. A similar question today is how many terabytes can be managed by one full-time equivalent (FTE). Toward the end of the 20th century, the storage pundits were offering numbers like 5 to 10 TB per FTE. Others put this in terms of dollars, with guidelines like $3 to $5 in management cost for every $1 spent on storage hardware. This fixed in most of our minds the expectation that management would become increasingly more costly than hardware, since storage prices were declining about 30 to 35% per year while storage capacity was growing at 30 to 50% compounded. More capacity meant more management, more management meant more people, and the cost of people was expected to increase. These numbers were used to justify investments in SANs, in larger monolithic storage arrays, and in more intelligent software to offset the cost of management.
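To see why that expectation took hold, here is a rough back-of-the-envelope sketch. The growth and price-decline rates are the round numbers quoted above; the starting capacity, hardware price, TB-per-FTE figure, and FTE cost are illustrative assumptions, not anyone's actual budget.

```python
# Why management cost was expected to outgrow hardware cost.
# Assumptions (illustrative only): capacity grows 40%/yr compounded,
# hardware price falls 30%/yr, one FTE manages 10 TB (the pundits'
# upper figure), and a fully loaded FTE costs $100K per year.
capacity_tb = 100.0       # starting capacity (assumed)
price_per_tb = 50_000.0   # starting hardware price, $/TB (assumed)
tb_per_fte = 10.0
fte_cost = 100_000.0

for year in range(1, 6):
    capacity_tb *= 1.40   # compound capacity growth
    price_per_tb *= 0.70  # annual price decline
    # Value of the installed capacity at current prices -- a rough proxy
    # for hardware cost. Note 1.40 * 0.70 = 0.98, so this stays nearly flat.
    hw_cost = capacity_tb * price_per_tb
    # People cost grows in lockstep with capacity.
    mgmt_cost = (capacity_tb / tb_per_fte) * fte_cost
    print(f"year {year}: hardware ${hw_cost:,.0f}, management ${mgmt_cost:,.0f}")
```

Hardware cost stays roughly flat while the people cost compounds with capacity; by year five, management overtakes hardware. That is the logic the guidelines were built on.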

Recently I met a CIO who showed me numbers that contradicted this line of thought. His numbers showed that his storage hardware costs were 2/3 of his total cost of storage! This completely contradicted the conventional wisdom that hardware was only 1/3 to 1/5 of the total cost. Was he one of the few successful CIOs who had gotten storage costs under control through the use of SANs, large monolithic arrays, and intelligent software? Unfortunately, while he had done all these things, his growth in storage hardware was becoming irrational; it did not track to his business revenue. He was at the knee of that compounded growth curve, where the rate of growth heads straight up. Rather than having his storage growth under control, he believed it had become irrational. He summed up his storage problem in four sentences. 1) He was buying too much storage, with growth rates over 100%. 2) He was paying too much, with over 70% of his storage in high-priced monolithic arrays. 3) His utilization of storage was too low, at about 20%. 4) And because of this irrational growth, he was running out of power, cooling, and floor space in his data center.

Was this an anomaly, or were other CIOs facing the same predicament? Since then I have talked to a number of CIOs and IT directors who have told me that they are managing their storage with the same people they had four years ago. If they had 100 TB four years ago and a 50% compound growth rate, they have over 500 TB today and are attempting to manage it with the same number of people. Their budgets have been flat, and it was too costly to hire and train new people; it was more cost-efficient to just buy more storage. Some large accounts say they are managing a petabyte with 4 people. Many are growing capacity so fast that they have a standing order with their storage vendor to buy the latest and largest storage system every quarter. This is a sure way to fall into the irrational growth spiral.
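The arithmetic is easy to check. In the sketch below, the 100 TB starting point and 50% growth rate come from the example above; holding the staff at four people is an assumption borrowed from the petabyte anecdote, just to show how fast the load per person climbs.

```python
# Capacity under 50% compound annual growth, with head count held flat.
capacity_tb = 100.0  # four years ago
ftes = 4             # same staff throughout (assumed)

for year in range(1, 5):
    capacity_tb *= 1.50
    print(f"year {year}: {capacity_tb:6.1f} TB, {capacity_tb / ftes:6.1f} TB per FTE")
# year 4: 506.2 TB and 126.6 TB per FTE -- five times the load per person
# that the same team carried four years earlier.
```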

How does one rationalize the growth of storage? The first step is to understand the data requirements with a comprehensive storage resource management tool. One of the hottest selling products recently for HDS has been the HiCommand Storage Services Manager; many companies are turning to this tool to determine their real storage requirements. The second requirement is a storage platform that can virtualize other storage systems and enable dynamic migration for consolidation and technology refresh, increasing the utilization of storage and reducing the requirements for power, cooling, and floor space. The third is the ability to attach lower-cost tier 2 and tier 3 storage behind a tier 1 control unit, eliminating costly monolithic capacity while retaining the benefit of tier 1 functionality.
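To put rough numbers on the second and third points: the cost of a terabyte you actually use is the purchase price divided by utilization. The 20% utilization and 70% monolithic mix below echo the CIO's numbers above; the per-TB prices and the 60% target utilization are illustrative assumptions.

```python
# Effective cost per *used* TB = blended purchase price per TB / utilization.

def cost_per_used_tb(mix, utilization):
    """mix maps a price-per-TB to its fraction of total capacity."""
    blended = sum(price * share for price, share in mix.items())
    return blended / utilization

# Today: 70% high-priced monolithic (assumed $30K/TB), 30% modular ($10K/TB),
# at the 20% utilization the CIO reported.
today = cost_per_used_tb({30_000: 0.70, 10_000: 0.30}, utilization=0.20)

# Tiered: tier 2/3 capacity (assumed $10K and $4K per TB) behind a tier 1
# controller, with virtualization and consolidation raising utilization to 60%.
tiered = cost_per_used_tb({30_000: 0.20, 10_000: 0.40, 4_000: 0.40},
                          utilization=0.60)

print(f"today:  ${today:,.0f} per used TB")   # $120,000
print(f"tiered: ${tiered:,.0f} per used TB")  # $19,333
```

Even with made-up prices, the direction is clear: most of the savings come from raising utilization, and the cheaper tier mix compounds it.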

What are your thoughts on the growth of storage versus the ability to manage it? Have we become so efficient that we can manage 200 to 300 TB per FTE? Are we sitting at the knee of the growth curve where growth becomes irrational? What tools do we need to manage compounded growth rates of 30%, 50%, 100% and more? Are there other CIOs who are in the same situation of irrational storage growth?


Comments (5)

Staffan Strand on 26 Oct 2005 at 8:02 am

Hu, congrats on your blog being online! I look forward to reading more intelligent comments on storage!

Chris Evans on 31 Oct 2005 at 12:32 pm

Indeed, the growth in storage is becoming a management headache. Unfortunately, none of the tools out there today are capable of managing large heterogeneous environments. To be honest, I don't think virtualisation is the answer to this problem either. The availability of cheap(er) storage has introduced a lack of discipline in managing the available storage resources.

We need to get back to basics. Implement tools which effectively deliver chargeback, showing allocated versus used capacity by business unit, along with historical growth. Ensure the business user understands the true cost of the storage they have allocated.

Develop the existing deployment tools so that they not only show allocated versus free, but also allow reservations to be placed against requested storage and show on a timeline when storage will be returned, again to help manage growth.

Rob A on 01 Nov 2005 at 3:01 am

An excellent article, and useful for seeing where technologies are moving.

Chris needs to read the article again and understand the opportunities that virtualisation brings in terms of regaining control of all those unmanageable storage entities scattered around datacentres. This allows storage management tools to work more effectively on deployed storage, rather than having a complex mix of multi-tier storage systems managed separately.

Chris Loringer on 07 Nov 2005 at 7:28 am

Hu, Congratulations on the Blog. I found it as I was doing some background research for an interview I did with Jack Domme. Now that I know you have your own “soap box,” I will be sure to pop in and see what you have to say.

Warm Regards,

Chris

Powell on 10 Oct 2006 at 5:54 am

Your blog is very much appreciated. Getting to know the views and thoughts of a storage domain veteran is not something we can easily come by. I've been a constant reader of all your blog updates, and every update is a compressed book in itself, with so many forethoughts and futuristic ideas.

I applaud your writing and the time you spend on this blog. I will remain a fan, with one request: please write more about where this industry is going and what the trends will be in the coming years.

Storage Area Networking Tutorials & Videos
http://storage-jobs.blogspot.com
