“Do More with Less” – is there any end in sight?
by Hu Yoshida on Jun 13, 2010
A new survey by InterCall shows that 48 percent of Americans who use technology in their everyday jobs say they are now required to do more work with fewer resources due to the current economic climate. For example, nearly one third (30 percent) feel they need to stay connected to work 24/7, even during weekends, breaks, or holidays.
For the last decade or more, the directive for IT has been “Do more with less.” This cry becomes even more strident with every downturn in the economy, and even more focused on storage as data continues to accumulate through good times and bad. Although storage demand continues to grow at about 60% per year, most IT shops have not hired any storage administrators for the past 7 to 10 years. Are we reaching the limit? Is IT on a treadmill, with the workload increasing month after month and no end in sight?
In my post on the “mythical FTE per TB,” I pointed out that some data centers are now managing over 1,000 TB per full-time employee, where 5 to 7 years ago each FTE was managing only about 10 TB. Although that sounds like a tremendous gain in productivity, or doing “more with less,” it has not addressed the overall cost of IT, which continues to increase by 7 to 8% per year.
I think we are reaching the physical limits of the “do more with less” movement as it relates to storage administrators, or FTEs. Resource management tools and networking have helped with the consolidation of storage. However, storage is stateful and requires the physical movement of data when changes are made. While software can automate functions to be executed against storage, the execution has to wait until the data is read, written, moved, copied, replicated, compressed, deduped, or formatted to spinning disk. The execution of a storage command is not just a matter of flipping a few bits in memory as you would with a processor command. It often entails mechanical movement, which takes time.
The only way we can continue to solve the productivity problem is to virtualize the storage and storage capacity, so that we can make the physical changes in the background while the applications work with a relevant subset of the total data. By storage and storage capacity virtualization, I mean the ability to attach heterogeneous storage and virtualize it behind a scalable storage system so that it looks like a common pool of pages, which can be managed on a page basis rather than a volume basis. In this way you can dynamically provision, move, copy, replicate, and migrate data at a more granular level: a page rather than a volume.
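To make the page-pool idea concrete, here is a minimal, hypothetical sketch of how a virtualization layer might map virtual volume pages onto a shared pool drawn from heterogeneous backing arrays. The class names, array names, and page size are illustrative assumptions, not HDS’s actual implementation; the point is that allocation happens per page on first write, and a page can be migrated in the background without changing the address the application sees.

```python
# Hypothetical sketch of page-based capacity virtualization.
# All names and sizes are illustrative, not any vendor's real design.

PAGE_SIZE = 42 * 1024 * 1024  # example page size; real systems vary


class Pool:
    """A common pool of free pages drawn from multiple backing arrays."""

    def __init__(self, arrays):
        # arrays: list of (array_name, number_of_pages) tuples.
        # The free list holds (array_name, physical_page_index) pairs.
        self.free = [(name, i) for name, pages in arrays for i in range(pages)]

    def allocate(self):
        return self.free.pop(0)


class VirtualVolume:
    """Thin-provisioned volume: pages are mapped only when written."""

    def __init__(self, pool):
        self.pool = pool
        self.map = {}  # virtual page index -> (array_name, physical_page)

    def write(self, virtual_page):
        if virtual_page not in self.map:      # allocate on first write
            self.map[virtual_page] = self.pool.allocate()
        return self.map[virtual_page]

    def migrate(self, virtual_page, new_location):
        # Background data movement: copy the page's contents, then flip
        # the mapping. The application keeps using the same virtual page.
        self.map[virtual_page] = new_location


pool = Pool([("tier1_array", 4), ("tier2_array", 4)])
vol = VirtualVolume(pool)
loc = vol.write(0)                        # page mapped at first write
vol.migrate(0, ("tier2_array", 0))        # moved later; volume address unchanged
```

The design choice this illustrates is the one the post argues for: because the indirection is per page rather than per volume, a migration touches only the pages that need to move, and the slow mechanical work happens behind the mapping table rather than in the application’s path.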
With storage virtualization combined with capacity virtualization, we can make our storage administrators even more productive in reducing IT costs. This type of virtualization makes it possible for IT to do less while still doing more to reduce costs and increase productivity.
Comments (2)
I think perhaps the units we manage storage in need to change. Technologies such as HDP are game changers in some respects: you no longer need to worry about the capacity of the underlying storage. I’d like to see HDS move the provisioning threshold up a notch. I don’t want to buy disks any more. I don’t really want to buy shelves. I want to buy storage based on criteria we define for performance and capacity. For instance, we have a USP-VM with an AMS2500 behind it. Our storage provisioning is a shelf at a time. When we get that shelf, the HDS engineers plug it into the AMS controller, unpack the disks from their boxes and install them, then configure the RAID types. Then we add the disk into HDP pools on the USP-VM. We have three RAID group types: 7+7 450 GB, 2 x 6+1 450 GB, and 10+2 (in the HD shelf) R6 1 TB SATA.
Why can’t HDS simply supply a type ‘x’ shelf? It’s got 15 disks (14 data, one hot spare). It’s got, say, 4 GB of cache and a couple of simple RAID controllers (think PCI-e card type stuff). And it’s pre-configured as, say, 7+7 R0+1 or 12+2 R6. The SAS connection then simply connects to a SAS fan-in module that really just provides a path to the USP-VM, or to a controller that just does HDP and delegates the handling of the RAID stuff to the relatively dumb hardware in the shelf. It’s like connecting a whole lot of SMS units (now gone) to a USP-VM. Then it could even be configured automatically by the HDP controller into the correct HDP pool.
Bit like an EMC VMAX, I suppose. Or an IBM XIV. But with SAS and low-level ASICs instead of commodity hardware.
An interesting and valuable perspective. However, I think we have a way to go before we get to saturation level with “do more with less”.
There is a lot that can be done with process automation in conjunction with technology automation that can yield significant gains for your average organization.
We are moving into an era of detailed up-front planning that will reduce the effort needed on the back end to operate complex systems.
Most organizations skimp on this important phase of setting up their IT environments and pay the price as each year adds more complexity to the equation. Even worse, those that do exercise due diligence initially promptly forget that it should be a regular review event, and so fail to determine which strategy changes can yield the best return and squeeze margins even more tightly down the road. Nothing stands still.
SRM tools are where the magic sauce is going to be with storage, end to end in any IT environment, married with other automation tools whose process features will integrate with each other to yield savings of time and effort, and thus financial resources.
IT is going to become a big chess game whose success will be determined by the gains that can be squeezed out, right to the limit, via automation processes.
In my humble opinion, we ain’t seen nuthin yet….!!
The time to be a master Ferengi is upon us………..