Flash, SAS, SATA, and Hitachi Dynamic Tiering are Hot!
by Hu Yoshida on Oct 19, 2010
At the beginning of this year, I posted a blog entry, New Considerations for Tiered Storage, in which I talked about the benefits of using virtualization to dynamically move volumes between tiers of storage, and of using Dynamic Provisioning for thin provisioning and wide-stripe performance to optimize the use of different tiers of storage. In that post, I cautioned against frequent movement of volumes across tiers because of the time and resources required to move a large volume. Last month, we announced the Hitachi Virtual Storage Platform (VSP), which solves the problem of volume tiering with a new feature: Hitachi Dynamic Tiering, or automated page-level tiering.
Leveraging dynamic tiering for Flash storage
When a volume is allocated to a Hitachi Dynamic Tiering (HDT) pool of storage, it is allocated in 42 MB pages. The HDT pool can contain up to three tiers of storage and automatically moves pages between tiers based on access counts. Tier 0 can be Flash drives or high-performance SAS drives in RAID 1, Tier 1 can be SAS drives in a RAID 5 configuration, and Tier 2 can be large-capacity SATA drives in a RAID 5 or RAID 6 configuration. This enables a volume to span multiple tiers: the hot pages in the volume reside on the higher-performance tiers while the inactive pages migrate down to the lower-cost, high-capacity tiers.
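To make the mechanism concrete, here is a minimal sketch (not Hitachi's actual implementation) of how page-level tiering works: each fixed-size page is placed on a tier according to its access count. The threshold values and tier names are assumptions chosen for illustration.

```python
# Illustrative sketch of page-level tiering, not Hitachi's implementation.
# Each 42 MB page is placed on a tier based on how often it is accessed.

PAGE_SIZE_MB = 42  # HDT allocates volumes in 42 MB pages

# Hypothetical thresholds: a page hotter than a cutoff rises to that tier.
TIERS = [
    ("Tier 0: Flash / RAID-1 SAS", 1000),  # >= 1000 accesses per cycle
    ("Tier 1: SAS RAID-5", 100),           # >= 100 accesses per cycle
    ("Tier 2: SATA RAID-5/6", 0),          # everything else
]

def place_page(access_count):
    """Return the tier a page belongs in, given its recent access count."""
    for name, threshold in TIERS:
        if access_count >= threshold:
            return name
    return TIERS[-1][0]

# A single volume's pages span multiple tiers: hot pages rise, cold pages sink.
volume_pages = {"page0": 5000, "page1": 250, "page2": 3}
placement = {page: place_page(count) for page, count in volume_pages.items()}
```

The key point the sketch captures is that the unit of movement is the page, not the volume, so only the hot fraction of a volume consumes expensive capacity.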
Volumes no longer have to be migrated between tiers of storage for optimum performance and cost. The volume stays allocated to an HDT pool while its pages are moved to the right tiers of storage based on I/O activity. An HDT pool also acts like a Hitachi Dynamic Provisioning pool in that it supports thin provisioning. With the VSP, we also introduced 6Gbps Serial Attached SCSI (SAS) to replace Fibre Channel (FC) loops for attachment of storage media, along with Small Form Factor (SFF) 2.5" Flash and SAS disk drives.
The advantage of SAS over FC for Flash drives is partly the faster speed (6Gbps for SAS versus 4Gbps for FC), but more importantly, SAS is a point-to-point interface while an FC Flash drive sits on an FC arbitrated loop. This means an FC Flash drive must arbitrate with the other drives on the loop in order to do a transfer. Flash drives are very fast, so if they are mixed on a loop with slower disks, they can drown out the loop. This is not a problem with SAS's point-to-point links.
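The loop-versus-link difference can be shown with a toy bandwidth model. The numbers below are the link rates from the post; the even-split assumption for the shared loop is a simplification of FC arbitration, used only to illustrate why a fast drive suffers on a shared loop.

```python
# Toy model: why a Flash drive suffers on a shared FC arbitrated loop
# but not on a dedicated point-to-point SAS link. The even split of loop
# bandwidth is an illustrative simplification of arbitration, not a spec.

FC_LOOP_GBPS = 4.0   # 4 Gb/s loop shared by all drives via arbitration
SAS_LINK_GBPS = 6.0  # 6 Gb/s dedicated link per drive

def fc_loop_share(active_drives):
    """Shared loop: bandwidth is divided among all actively transferring drives."""
    return FC_LOOP_GBPS / active_drives

def sas_link(active_drives):
    """Point-to-point: each drive keeps its full link regardless of its peers."""
    return SAS_LINK_GBPS

# One Flash drive sharing a loop with 15 busy HDDs sees a fraction of 4 Gb/s,
# while on SAS it always sees the full 6 Gb/s link.
flash_on_loop = fc_loop_share(16)
flash_on_sas = sas_link(16)
```

Under this model the gap widens linearly with the number of busy drives on the loop, which is the contention the post describes.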
The importance of page level tiering
Our competition has been touting the use of Flash drives for some time, implying that Flash will replace hard disks in the near future. That has not happened. Their implementation of Flash has not taken the market by storm, mainly because of the price gap between enterprise Flash and enterprise HDD and their inefficient use of Flash drives. They do not provide page-level tiering, and while they have a form of thin provisioning, it is not often used with Flash. Their new architecture is also designed for FC Flash drives, and they appear to be stuck with that for the next few years, while we have moved on to SAS.
The price gap between the current Flash technology for enterprise storage and enterprise disk technology will be difficult to close. While Moore's Law is often quoted as the reason technology costs decline, Moore's Law has an important corollary: volumes. If you don't have the volumes, you cannot drive down costs, and the volumes for enterprise SLC NAND Flash are limited compared to the consumer market, which uses MLC NAND technology. If and when we get a solid state technology that can serve the consumer market as well as the enterprise market, we will see the volumes that could drive costs down to the level of disk drives.
However, the higher cost of Flash is no longer an inhibitor to benefiting from Flash performance. With HDT we can create a combined pool of large 2TB SATA disks, a few 300GB SAS disks, and an even smaller number of 400GB Flash drives at a lower cost than an equivalent-capacity pool of 300GB SAS disks. With page-level tiering, every volume assigned to this pool can benefit from Flash performance if its activity warrants it. On average, 80% of the IOPS will benefit from Flash performance. HDT makes it possible to enjoy the performance benefits of Flash at less than HDD costs because most of the data in a volume is inactive and will migrate to the lower-cost SATA tier.
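The economics can be sketched with back-of-the-envelope arithmetic. The per-drive prices below are made up for illustration (the post quotes no prices); only the drive capacities come from the text. The point is structural: a mostly-SATA pool with a thin layer of Flash can match the capacity of an all-SAS pool at a lower total cost.

```python
# Hedged illustration with hypothetical list prices (not vendor pricing):
# a mixed HDT pool versus an all-SAS pool of equal raw capacity.

PRICE_USD = {"sata_2tb": 800, "sas_300gb": 600, "flash_400gb": 4000}  # assumed
CAP_GB = {"sata_2tb": 2000, "sas_300gb": 300, "flash_400gb": 400}

def pool_cost(counts):
    """Total drive cost of a pool, given drive counts by type."""
    return sum(PRICE_USD[d] * n for d, n in counts.items())

def pool_capacity_gb(counts):
    """Total raw capacity of a pool in GB."""
    return sum(CAP_GB[d] * n for d, n in counts.items())

# Mostly SATA, some SAS, a thin Flash layer for the hot pages.
mixed = {"sata_2tb": 40, "sas_300gb": 20, "flash_400gb": 4}
# An all-SAS pool sized to the same raw capacity (87,600 GB / 300 GB = 292).
all_sas = {"sas_300gb": 292}
```

With these assumed prices the mixed pool costs roughly a third of the all-SAS pool for the same capacity, while HDT keeps the hot pages on the Flash layer.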
With these new features, HDT for automated page-level tiering of a volume, 2TB SATA to offset the tiering cost of Flash drives, and SAS to eliminate loop contention, it now makes economic sense to use expensive Flash drives for Tier 0 storage. While there is an additional cost for HDT software and Flash drives, these costs are more than offset by the lower cost of large-capacity SATA disks and the economic benefits of better application response times.
Comments (8)
Are we going to see Dynamic Tiering on AMS2000?
We are in the process of evaluating the AMS line as a solution for our Enterprise SAN. The notion of HDS dynamic tiering is very appealing, but the marketing materials from HDS are frustratingly vague about what platform requirements exist for HDT. The White Paper “Dynamic Storage Tiering: The Integration of Block, File and Content” published in September further confuses the issue by suggesting NAS is an integral (required?) part of this solution.
Hitachi needs to be very specific regarding what platforms do and/or will support HDT in order to avoid the appearance of pushing vaporware. We are on the cusp of making a large SAN investment and Compellent is the only vendor so far that demonstrates a proven dynamic tiering architecture. HDS would be well served by clarifying its implementation so customers know exactly what’s needed to implement HDT.
I am sorry that you found the information about Hitachi Dynamic Tiering confusing. Hitachi Dynamic Tiering was announced as a new feature on our Virtual Storage Platform and is not available on the AMS platform. Currently it uses tiers of storage that are internal to the VSP, with plans to support external storage in Q2 of 2011. At that time, the AMS and storage systems from other vendors could be attached to the VSP and participate as a tier of storage in an HDT pool that spans internal and external storage on a VSP. NAS is not an integral part of the HDT solution. The white paper you reference http://hdsnet.hds.com/intradoc-cgi/nph-idc_cgi.exe?IdcService=DISPLAY_URL&dDocType=ContentAsset&dDocName=01_065936
shows that files can be allocated to an HDT pool for dynamic tiering, in addition to the policy-based tiering that is available with our HNAS solution.
I hope this helps clarify the use of HDT. It is only available on the VSP.
Very informative. The blog has key technical information on the new product which we need to know, learn, and implement without analogies!
I think we are kind of in the same boat. Dynamic tiering is a really appealing feature, but the VSP is too expensive for us. I had a small investment in an AMS2300 and was really hoping to see Dynamic Tiering on it and expand from there. Hu just gave us the answer we didn't really want to hear. I guess I will need to look somewhere else. Currently, for mid-range arrays, Compellent, HP/3PAR, and EMC Clariion/Celerra support dynamic tiering.
Hope to see HDT in the next revision of the AMS line (AMS3000?).
Reminds me of the old HP Autoraid technology all over again.
Will solve the problem with aging / least used data being part of Tier 2 disks.
This article was very informative. I am working on implementing a similar solution but have a challenge. The organization has application tiers which define RPO and RTO for each app. Though you explained well how dynamic tiering can meet performance requirements at lower costs, I had a question regarding dynamic tiering vs. reliability. In dynamic tiering, any inactive data (regardless of whether the app is platinum or bronze tier) will move across the storage tiers. How do I meet the application-tier RPO and RTO across different tiers of storage? I am sure the RPO/RTO and MTBF of Flash disks, SAS, and SATA will differ. This would mean a platinum-class application will suffer the same failures as a bronze-class application. I see it as a trade-off between cost and reliability. Please do mail me your response.
We, along with our customers, are desperately looking for a sub-LUN tiering feature in AMS. Is it on the road map in the near future? Any rough information will help us see whether customers can afford to wait.