Tiered Storage Payoff
by Hu Yoshida on Mar 20, 2006
Some 40 to 45% of USP platforms are sold with the Universal Volume Manager software, which enables a USP to discover and access LUNs from external storage connected through its FC ports. Once this is done, the USP can migrate volumes between internal and external storage, and between different external storage arrays. This is the basis for dynamic tiered storage management. While some critics would say that this works on a LUN basis and therefore does not apply to databases, other features like ShadowImage (snapshots) and TrueCopy (distance replication) do apply to databases.
There are always multiple copies of a database that we spin off for various reasons: point-in-time snapshots for backup, copies spun off to other business units for data mining, copies for development and test, and copies of consistency groups for disaster recovery. Not all of these copies have to sit on the same Tier 1 storage as the original database.
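One way to picture this is a simple policy table that maps each copy's purpose to a storage tier. This is only an illustrative sketch; the purpose names and tier assignments are my own assumptions, not any HDS software interface:

```python
# Illustrative sketch: assigning database copies to storage tiers by purpose.
# Purpose names and tier numbers are hypothetical, not an HDS product API.

TIER_POLICY = {
    "production": 1,       # the primary database stays on Tier 1
    "dr_consistency": 1,   # consistency-group copies for disaster recovery
    "backup_snapshot": 2,  # point-in-time snapshots for backup
    "data_mining": 2,      # copies handed to other business units
    "dev_test": 3,         # development and test copies
}

def tier_for(purpose: str) -> int:
    """Return the storage tier for a copy purpose, defaulting to the lowest tier."""
    return TIER_POLICY.get(purpose, 3)
```

The point is simply that only the copies with production-level service requirements need production-level (Tier 1) storage; everything else can land on cheaper tiers.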
Computerworld just did a writeup on Fidelity National, which is using tiered storage to help eliminate backup failures. Fidelity National provides real estate services, including insuring nearly a third of the real estate titles in the U.S. An hour of downtime in its storage networks means a direct cost of $4 million; the lost opportunity cost is incalculable.
Fidelity National keeps currently closing properties in an Oracle database on a Tier 1 USP 1100. 48 million historic title documents are kept on a Tier 2 modular 9585 system for fast recall. Another 9585 is used as a Tier 3 virtual tape drive, which then streams to a SpectraLogic tape drive. They also mirror Tier 1 storage between Chicago and Little Rock, using a combination of HDS and Oracle software products to replicate data between the two sites.
This combination of tiered storage pays off even in a very structured environment.
I have commented on my blog about this. I think we are running the risk of suggesting that multi-tiered storage has anything to do with hardware. A lot of storage is a lot of storage until you overlay it all with software: only then does it become “multi-tier.”
Seems to me that this metaphor is used a lot to describe architectures in the distributed computing environment that have little to do with "multi-tiering" (at least in the way the term was used in the mainframe world, to connote three layers: memory, DASD, and at least two forms of tape, active-nearline and backup/archive).
I frankly doubt there is any value to be gained from this metaphor in the distributed world. There are, it seems to me, two kinds of storage: capture (for immediate writes of data from applications) and retention (for long-term storage of infrequently changing data).
My two cents.