Defining Costs for Storage Tiers
by David Merrill on Apr 18, 2013
Over the last few years, it has become increasingly important to create storage service catalogs in order to align business requirements with technical storage architectures. Many organizations shy away from developing catalogs for a variety of reasons, one of them being the perceived complexity of creating them. Many also assume that defining different tiers of storage is difficult, believing that predicting exact or perfect classes of service has to be an exact science. People often ask me if there are best practices or published standards for these storage tiers that they could use in a formal or informal catalog.
As far as I am aware, there are no published industry best practices on setting up storage tiers. The definition of tiers is different for each customer and industry around the world. One person’s tier 1 can be someone else’s tier 2.
Most storage architects do see some general patterns of these tiers, and I have outlined a highly simplified version here.
As I mentioned, every organization will have different tier definitions based on its applications and requirements. The above definitions are averages, illustrative in nature. The key point is that the last row (total cost ratio) is how we provide real differentiation or separation of costs. This is not the price of the media for the tier, but the total cost of the tier over a several-year period (annualized). That total cost includes:
- Hardware and software depreciation
- Hardware and software maintenance
- Labor for management
- Power and cooling
- Floor space
- All data protection costs (backup, Disaster Recovery)
- SAN and WAN costs
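The arithmetic behind the ratio is simply a roll-up of these components into an annualized cost per tier, normalized against the cheapest tier. A minimal sketch in Python, using entirely hypothetical dollar figures (none of these numbers come from the article; they are chosen only to illustrate how an 11:7:3:1 spread emerges):

```python
# Hypothetical annualized costs per usable TB, by component, for four tiers.
# The figures are illustrative only; they do not come from any real catalog.
COMPONENTS = ["depreciation", "maintenance", "labor", "power_cooling",
              "floor_space", "data_protection", "san_wan"]

tier_costs = {  # $ per usable TB per year
    "tier1": {"depreciation": 900, "maintenance": 300, "labor": 250,
              "power_cooling": 150, "floor_space": 100,
              "data_protection": 400, "san_wan": 100},
    "tier2": {"depreciation": 550, "maintenance": 200, "labor": 180,
              "power_cooling": 120, "floor_space": 80,
              "data_protection": 220, "san_wan": 50},
    "tier3": {"depreciation": 250, "maintenance": 90, "labor": 80,
              "power_cooling": 60, "floor_space": 40,
              "data_protection": 60, "san_wan": 20},
    "tier4": {"depreciation": 90, "maintenance": 30, "labor": 25,
              "power_cooling": 25, "floor_space": 15,
              "data_protection": 10, "san_wan": 5},
}

def total_cost(tier):
    """Annualized total cost of ownership for one tier ($/TB/year)."""
    return sum(tier_costs[tier][c] for c in COMPONENTS)

totals = {t: total_cost(t) for t in tier_costs}
cheapest = min(totals.values())
ratio = {t: round(totals[t] / cheapest, 1) for t in totals}
print(totals)  # annualized $/TB/year per tier
print(ratio)   # normalized cost ratio, lowest tier = 1
```

With these sample inputs the totals work out to 2200 : 1400 : 600 : 200 per TB per year, i.e. an 11:7:3:1 ratio; the point is that the spread comes from the full component roll-up, not from media price alone.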
Since you want to move data to lower-cost-to-own tiers over time, this 11:7:3:1 ratio is important. If three of the four tiers all cost roughly the same, there are no options to reduce the cost of storage. Having multiple tiers at clearly differentiated costs is a necessary incentive as we move to chargeback and pay-as-you-go (cloud) models. At the very least, departments and IT consumers need to know what their storage tier selection really costs the company over time. That knowledge might influence different selections or assignments of IT resources.
The types of services and capabilities have to be designed to achieve this differentiating 11:7:3:1 cost ratio. If they are not, subsidies and tariffs may have to be introduced to set the costs artificially so that these ratios can be instituted. I don’t think there is anything wrong with artificial cost setting, since it is a tactic to drive the right behavior: storing data in the right tier for the life of the data asset.
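One way to think about a subsidy or tariff is as a per-tier adjustment that moves each tier's measured cost onto the target ratio, anchored on the cheapest tier. A small sketch, where the measured costs and the target ratio are both hypothetical:

```python
# Compute the tariff (positive) or subsidy (negative) needed to move each
# tier's measured annual cost onto a target 11:7:3:1 spread.
# All dollar figures are hypothetical, not from the article.

measured = {"tier1": 2000, "tier2": 1500, "tier3": 1300, "tier4": 400}  # $/TB/yr
target_ratio = {"tier1": 11, "tier2": 7, "tier3": 3, "tier4": 1}

# Anchor the target prices on the cheapest tier's measured cost.
base = measured["tier4"] / target_ratio["tier4"]
target = {t: base * r for t, r in target_ratio.items()}

# Positive = tariff added to the chargeback rate; negative = subsidy.
adjustment = {t: round(target[t] - measured[t]) for t in measured}
print(adjustment)
```

In this example tiers 1 and 2 carry a tariff while tier 3 gets a small subsidy, artificially widening a too-flat cost spread into the target ratio.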
Don’t be discouraged that you do not have a comprehensive storage catalog in place. I see many customers start with a simple matrix like the one above (perhaps put into a colorful and informative 1-page brochure format) to communicate differences in cost, performance, protection and availability. Communicating the differences and capabilities can start with some simple tables, and over time this format can evolve into a style or format that works best for your organization. Just don’t forget to include the costs for each tier.
Comments (2)
Hello David. The model proposed above looks very similar to what we have been using for a while now, and it has served us well. One thing that is now adding complexity is that the tiers are no longer physically defined. We are seeing greater use of QoS-style management to deliver a service at the level the customer wants/can afford, but these differing levels of performance may be delivered from the one physical array (with internal storage pools of tiered performance). Given that licensing is usually array chassis/capacity based, it is more difficult to deliver the services with such a broad spread of costs. Is subsidising/tariffs the only way you might recommend to keep the cost models far enough apart for each tier?
Ash, thanks for the comment.
In order to get this kind of a total cost spread (7:3:1), we have to take a hard look at the services (QoS) being offered to provide real separation of service, capability, recovery, performance, protection, etc. The price of the devices cannot give us that kind of separation; it has to come (mainly) from offering significantly different services. Subsidies and tariffs should be used to drive the right overall behavior. For example, tier 1 should be expensive enough that only the worthiest data types reside there, and even then only for a short time. Storage economics has taught us that data has to age and be demoted over time. The costs have to be commensurate with that demotion process.
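The economics of demotion are easy to see with a lifetime-cost comparison. A sketch using hypothetical annual per-TB costs (chosen to follow an 11:7:3:1 spread) and an assumed demotion schedule:

```python
# Lifetime cost of 1 TB over five years: kept on tier 1 the whole time
# versus demoted down the tiers as the data ages.
# Annual $/TB figures and the demotion schedule are hypothetical.

cost_per_year = {"tier1": 2200, "tier2": 1400, "tier3": 600, "tier4": 200}

stay_on_tier1 = 5 * cost_per_year["tier1"]
demotion_path = ["tier1", "tier2", "tier3", "tier3", "tier4"]  # one tier per year
with_demotion = sum(cost_per_year[t] for t in demotion_path)
print(stay_on_tier1, with_demotion)
```

Under these assumptions, demoting the data as it ages costs $5,000 over five years versus $11,000 for parking it on tier 1, which is the behavior the cost spread is meant to encourage.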
If you are having to rely too much on subsidies and tariffs, I would suggest a deeper look at the service types that you offer and the cost of goods (CoG) that makes up each virtual tier. Without artificial methods, your CoG should approach a 5:3:2 range without too much effort.