The Mythical FTE per TB
by Hu Yoshida on Jun 4, 2010
Full Time Employee (FTE) per TB used to be a measure of productivity for storage managers, and some people still use that metric. I submit that FTE per TB is no longer relevant today.
For the last 10 years the mantra for IT has been “do more with less”. Ten years ago I would visit data centers where two people would be managing 20 TB. When I visit that shop today I am likely to see the same two people managing 500 TB. Some data center managers boast of having one person manage a petabyte or more of storage. Does that mean that people have become more productive? Have advancements in storage management tools reduced the need for storage managers? What has changed in the last 10 years?
One difference is that storage has become a lot denser and cheaper. You can buy a 2 TB SATA disk for less than you paid for a 9 GB FC disk ten years ago. The other difference is the introduction of Storage Area Networks (SANs), which make it possible to network more servers to larger-capacity storage frames than you could when storage was direct attached. Ten years ago 20 TB would have been spread across 20 different storage frames. Today 500 TB can be contained in two or three storage frames that are SAN connected to hundreds of servers.
Cheaper storage means that you can throw a lot more storage at an application and hope that you can reallocate the excess to other applications on the SAN. So a storage administrator might be responsible for a petabyte of storage, but how much of it is he really managing efficiently? One way to address storage requirements is to throw a lot of cheap capacity at the problem and delay the consequences.
Another thing that changed was the rapid adoption of the internet, which has created a global marketplace. Most corporations must now be available 24 hours a day, seven days a week. We cannot afford to have any downtime for our applications. As more and more applications are attached to a denser storage frame, the downtime required to do a device migration becomes an increasing problem. Today there may be 100 applications on a storage frame. How do you migrate the data to a new storage frame without disrupting the applications? In order to minimize the downtime we steal some time on weekends, migrate 5 or 6 applications at a time, and end up taking 6 months to do the migration. You can do this with one FTE or 10 FTEs and it will still take 6 months, since it is no longer an FTE problem. It is a scheduling problem. The best solution for a scheduling problem is storage virtualization. By separating the application view of data from the physical storage, migrations, moves, and copies can be done without disruption to the applications.
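The back-of-the-envelope arithmetic behind that claim can be sketched in a few lines. This is only an illustration, not a migration tool; the figures (100 applications, 5 per weekend window) are assumptions drawn from the example above, and the function name is hypothetical. The point it demonstrates is that the elapsed time is fixed by the number of maintenance windows, not by headcount.

```python
import math

def migration_weekends(total_apps: int, apps_per_window: int) -> int:
    """Weekends needed when only a fixed number of applications
    can be taken down in each weekend maintenance window."""
    return math.ceil(total_apps / apps_per_window)

apps = 100        # applications sharing one storage frame (assumed)
per_window = 5    # apps whose owners will accept downtime per weekend (assumed)

weekends = migration_weekends(apps, per_window)
print(weekends)   # 20 weekend windows, i.e. months of elapsed time
```

Doubling the number of administrators changes nothing in this calculation; only raising `per_window` (more tolerated downtime) or eliminating the windows entirely (non-disruptive migration via virtualization) shortens the schedule.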
FTE per TB is no longer a good measure of productivity and should not be used to measure the efficiency of your storage administration. Hitachi Data Systems offers a resource called storage economics which helps to identify the total cost of ownership, which is a much better way to evaluate your efficiency. Storage economics can also help you map technologies like virtualization against those costs and quantify the cost savings of these technologies in your environment. For more information on storage economics, check out David Merrill’s blog at http://blogs.hds.com/david/
Comments (3)
I think the biggest driver in storage management efficiency has been the push to reduce cost in the IT environment and the vendors’ response to it. I’ve had nearly 10 years’ experience in managing storage, mostly Hitachi, and I am sad to say that I don’t think Hitachi have quite got it when it comes to the software management experience.
I don’t care whether I need to manage 10 storage frames or one storage frame. I don’t care whether my arrays are SAN attached or not. I don’t even care whether I’m running Windows, Linux, a mainframe or anything else. What I want is a single unified storage management solution that simplifies and standardises my approach to storage management and dumbs it down. Why can a single Windows administrator deploy hundreds of operating system instances to a virtualised server farm using a single point-and-click console, when to deploy storage to the same server farm I have to use at least 4 different pieces of software from Hitachi? Each with a slightly different interface and each with its own problems and shortcomings.
As fewer people are called upon to manage more storage, automation, standardisation and a reduction in complexity will be key to ensuring that the tools continue to meet the challenges ahead. What are Hitachi doing about it?
We hear you and are working to improve the usability of our storage management software. Our first focus was on functionality. Then, in Version 6 of our Hitachi Storage Command Suite, we concentrated on integrating the software components so that the resource management software, Tuning Manager, could consolidate the capacity and performance information from the element managers, Device Manager and Link Manager, and pass that information on to the policy managers, Tiered Storage Manager and Replication Manager. This allows us to automate the management of the storage system based on policies.
We also added a Portal which organizes the infrastructure information for a business unit and presents it through an interactive dashboard, through which the business owner can monitor his service level objectives, the health of his allocated storage arrays and ports, and his utilization over a given time period. That portal uses Adobe Flex, with drop-down menus, drag and drop, and other interactive improvements.
You can expect us to focus on usability in the next version of the Hitachi Storage Command Suite and use many of the features that are presented in the Hitachi Command Portal.
And may the gods hear my prayers that it will never again be Java based…