Hu's Blog - Data Storage and Virtualization Thought Leader

Hu Yoshida's Blog - Vice President | Chief Technology Officer



The Beat Goes On

by Hu Yoshida on Apr 3, 2012

Technical Deep Dive’s Nigel Poulton responded to my post and Twitter conversation on the relevance of Tier 1 controllers with a post of his own. In his blog, Nigel writes that he believes SSDs have changed the game and the need for Tier 1 controllers. He ends his post with this summary:

“If things don’t get better, the rising generation of arrays bearing the mark ‘Designed for SSD’ such as WhipTail, Kaminario, Violin Memory etc. will start to crop up in traditional Tier 1 accounts. These guys are working hard at implementing many of the traditional Tier 1 features into their products, features like NDU code upgrades, replication…oh and cloud integration.

“Yes, Tier 1 is about robust replication, caching, N+1 or higher…but it’s also about performance. High performance storage is still a bit nichey today, but it won’t be tomorrow. Oh and it’s getting late!”

Here is my response:

Nigel, thanks for posting this in blog format where I can consolidate my response without having to cut and paste a bunch of tweets. At my advanced age—yes you were close on guessing my age—I can’t remember the context from one tweet to another that’s buried among so many others.

I thought David Floyer did a good job of defining Tier 1 storage. That definition was about control functionality and not about media. SSD is about media.

There is no denying Flash SSDs are blazing fast compared to disk, but they are slow compared to DRAM: roughly 25 microseconds for read and 1.5 ms for write/erase, versus about 10 nanoseconds for DRAM reads and writes. Flash is also not as durable as DRAM or HDD, with write endurance on the order of 10^5 cycles for Flash SSD, versus 10^18 for HDD and 10^16 for DRAM.
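To put that 10^5-cycle endurance figure in perspective, here is a back-of-envelope lifetime calculation. The drive capacity and daily write volume are hypothetical; only the program/erase rating comes from the numbers above, and real write amplification would cut the result sharply.

```python
# Back-of-envelope flash endurance check. Assumes ideal wear leveling,
# i.e. the controller spreads writes evenly across all cells.
def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb):
    """Years until the rated program/erase budget is exhausted."""
    total_write_budget_gb = capacity_gb * pe_cycles
    return total_write_budget_gb / daily_writes_gb / 365

# A hypothetical 400 GB SSD rewritten in full once per day:
years = drive_lifetime_years(capacity_gb=400, pe_cycles=10**5, daily_writes_gb=400)
print(round(years))  # -> 274 years in the ideal case
```

The ideal-case number looks generous, but real controllers see write amplification and uneven wear, which is why endurance management is a controller problem and not just a media property.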

So how do you protect SSD?

One way is to use RAID, like we do with disk. And you want to use it where you can isolate the damage, like on a RAID group. If you put it in memory, or use it as a cache, when you lose a Flash SSD module you lose everything behind it. For that reason we are not rushing into using Flash SSD for cache or memory. We have Flash SSDs in the VSP controller for downloading cache in the case of a power failure, but we do not use it as an extension of cache. There are future SSD technologies like Spin Torque Transfer MRAM (STT-MRAM) that could have durability approaching 10^15, which is close to that of DRAM and would be suitable for memory and cache. There is also PC RAM (Phase Change RAM), which is also a future possibility with better durability.
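To illustrate why RAID confines an SSD failure to one RAID group, here is a toy sketch of XOR parity (RAID-5 style) in Python. The member count and data bytes are made up; a real controller works on full stripes, not two-byte blocks.

```python
# Toy XOR parity: one parity block protects a group of equal-length
# data blocks, so any single failed member can be rebuilt.
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks (RAID-5 style)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover the single failed member from survivors plus parity."""
    return parity(surviving_blocks + [parity_block])

data = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]  # three SSD members
p = parity(data)
# Member 1 fails; rebuild it from the other members and the parity block:
recovered = rebuild([data[0], data[2]], p)
assert recovered == data[1]
```

The key point of the sketch: the blast radius of a failed module is one member of one group. A flash module used as raw cache or memory has no such peer to rebuild from.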

Flash SSD for the enterprise is still very expensive compared to HDD, so using Flash without page level tiering is cost prohibitive when you consider that 80 to 90% of the data may not be referenced again after the initial burst of activity.
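A rough cost sketch makes the point. All the prices and the hot-data fraction below are hypothetical; the only assumption taken from the post is that a small share of pages stays hot.

```python
# Hypothetical pool costs: hot pages on SSD, cold pages on HDD.
def pool_cost(hot_tb, cold_tb, ssd_per_tb, hdd_per_tb):
    """Total media cost of a pool split between SSD and HDD capacity."""
    return hot_tb * ssd_per_tb + cold_tb * hdd_per_tb

# 100 TB pool, illustrative prices: $10,000/TB SSD vs $500/TB HDD.
all_flash = pool_cost(hot_tb=100, cold_tb=0, ssd_per_tb=10000, hdd_per_tb=500)
tiered = pool_cost(hot_tb=15, cold_tb=85, ssd_per_tb=10000, hdd_per_tb=500)
print(all_flash, tiered)  # -> 1000000 192500, roughly a 5x difference
```

With only 15% of pages on SSD, the tiered pool costs about a fifth of the all-flash pool while still serving the hot pages from flash, which is the economic argument for page level tiering.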

The mean life of active data is very short. You need a Tier 1 controller that has the horsepower to support page level tiering along with all the other controller functions like replication, copy on write, migration, VAAI, monitoring, reporting, and managing a complex multi-vendor storage environment.
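The tiering work the controller carries can be caricatured as a monitor-and-rebalance loop. This sketch uses a simple per-cycle access-count heuristic; real controllers use far richer statistics, and every name and threshold here is illustrative.

```python
# Minimal sketch of a page level tiering engine: count accesses per
# monitoring cycle, then promote hot pages to SSD and demote the rest.
from collections import Counter

PROMOTE_THRESHOLD = 100  # accesses per cycle; purely illustrative

class TieringEngine:
    def __init__(self):
        self.access_counts = Counter()  # page id -> hits this cycle
        self.ssd_pages = set()          # pages currently on the SSD tier

    def record_io(self, page_id):
        """Called on every I/O to track page heat."""
        self.access_counts[page_id] += 1

    def rebalance(self):
        """End of cycle: promote hot pages, demote cooled-off pages."""
        for page, hits in self.access_counts.items():
            if hits >= PROMOTE_THRESHOLD:
                self.ssd_pages.add(page)
            else:
                self.ssd_pages.discard(page)
        self.access_counts.clear()  # start a fresh monitoring cycle

engine = TieringEngine()
for _ in range(150):
    engine.record_io("page-7")  # hot page
engine.record_io("page-9")      # cold page
engine.rebalance()
assert "page-7" in engine.ssd_pages and "page-9" not in engine.ssd_pages
```

Even this caricature shows why processing power matters: the counting and rebalancing run continuously, alongside replication, snapshots and everything else the controller does.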

As you acknowledge, Tier 1 is not all about performance, and I agree with you that performance will become more and more important. We have to begin architecting controllers that are not optimized only for spinning rust. The architecture of VSP has undergone a major change from its predecessor (USP V). The framework is being laid to optimize for the use of future media.

Looking forward to continuing this conversation.


Comments (3)

Nigel Poulton on 03 Apr 2012 at 4:42 pm

Hi Hu.

Great discussion. A couple of points, plus an update that I made to my blog post earlier today after some fierce reaction last night on Twitter (missed you at that party).

First up, when I talk about SSD I’m not referring to NAND flash specifically. All the other potential tech that you mention falls into my remit of SSD. NAND Flash certainly won’t reach the ripe old age of 50+ like spinning media, but solid state technologies will.

My intent was not to talk about SSD as a media, but an enabler to massively move the performance goal posts. Move them potentially out of the reach of VSP…???

Also, I really should have said more about the following – some of the features of traditional Tier 1 arrays, such as remote replication and HA technologies, are being diluted in today’s enterprises, where HA features are being implemented further and further up the stack. That is another factor I believe is weakening the stronghold of traditional Tier 1 vendors.

Tomorrow’s Tier 1 array may be a much cut-down version of today’s, implementing only core functions such as RAID, compression/dedupe, tiering, TP…?

Nigel

Hu Yoshida on 04 Apr 2012 at 1:11 pm

Hello Nigel, good to see you back on the blog. It is obvious from the comments on your last blog post that your fans, including myself, are very interested in your posts as well as your tweets. In December you and Rickatron (Rick Vanover) posted a podcast interview with Dr. Garth Gibson (the founder of Panasas), who wrote the seminal paper on Redundant Arrays of Inexpensive Disks.

Rick starts the podcast with the prediction that SSD will take off in 2012 due to application demand. I agree that SSD will start to take off, but that will be enabled by page level tiering, which makes the performance of SSD available to the hot pages in all applications. That will require Tier 1 controllers that have the processing power to be able to do this without impacting the performance of all the other functions like snapshots, replication, etc.

I think there is a communication gap when we use the term SSD. SSD stands for Solid State Disk, which is media. What you are talking about is NVM, Non Volatile Memory, which can be attached on an I/O bus, as an extension to memory, and on a PCIe bus, and all of these implementations will be needed. As Garth Gibson explained in the podcast, there is a place for NVM at all levels, including as a LUN, and that will require a Tier 1 controller for enterprise reliability and availability.

Garth also said that market acceptance of a technology requires high volumes at the cheapest price that can satisfy requirements, and that it must be evolutionary. SSD is not cheap unless it is used in a tiering model, so it must be introduced in an evolutionary manner, and introducing NVM as storage media is an evolutionary approach. NAND SSD is relatively cheap compared to other NVM technologies because it has been adopted by the consumer space, where most of the volume is. Unfortunately, the introduction of more durable NVM will be difficult, since the consumer space does not need endurance greater than 10^5 cycles.

There are lots of interesting points that were made by Garth in your podcast, and I encourage all our readers to download that interview. It would be interesting if you and Rick could pull out a couple of the topics Garth introduced, like the future of RAID and Shingled Magnetic Recording, and expand on them in a blog post.

-Hu

Boris Berezin on 08 Apr 2012 at 3:44 am

Hello Hu!
On April 3rd you wrote about storage economics, SSD and HDD. But IDC research shows that “Archiving projects are a key driver for storage investment in 2011, as organizations are using archiving solutions as part of their overall quest to tame data growth and increase the efficiency of their storage infrastructures.”
What can you say about the use of optical discs (DVD, BD, UDO …) and libraries in data storage systems?
