Power to the SSD

by Ken Wood on Mar 16, 2010


So I’ve been looking into the benefits of solid state drives (SSDs) over hard disk drives (HDDs). Personally, I’m more infatuated with the performance advantage than the power advantage, but both play into this discussion. In fact, one solves the problem of the other, as I’ll try to show. If you do a search on “ssd hoax”, the first hit that should pop up is an article from Tom’s Hardware, here, dating back to June 2008. The initial experiment was to find out how much an SSD extended a laptop’s battery life over a standard hard disk.

The initial title, “The SSD Power Consumption Hoax”, had to be updated with some “side effects” of using SSD storage, which basically reset the conclusion, or at least clarified it. The experiment looped through some benchmarks. The initial conclusion was that the SSDs actually performed equal to or worse than the hard disks. When I say performed, I mean battery life. So, while battery life was essentially equal or worse with the SSD configuration than with the hard disk configuration, the SSD configuration actually executed more tasks in that time. There are a couple of conclusions I draw from this experiment’s findings. First, if I can do more work in the same amount of time, or the same amount of work in less time, that’s a huge advantage. It should also mean my system has more time to idle (more on this in a bit). Second, a system is busier doing actual work when it isn’t waiting for disk IO. That ultimately means less battery life, since the CPU, memory, display, etc. are constantly working and drawing power, but then again, you could finish faster and start doing nothing sooner. An unattended looped test isn’t as realistic as a person sitting with a laptop running on battery power.
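To make the “finish faster, idle sooner” point concrete, here’s a minimal back-of-the-envelope sketch. The wattages and busy times are illustrative assumptions I’m picking for this sketch, not numbers from the Tom’s Hardware test:

```python
# Rough "race to idle" math with assumed numbers (illustrative only).
BUSY_W, IDLE_W = 20.0, 8.0   # assumed laptop draw while working vs. idling
WINDOW_S = 3600              # one hour of battery time

def energy_wh(active_s):
    """Energy used over the hour if the machine works for active_s seconds."""
    idle_s = WINDOW_S - active_s
    return (BUSY_W * active_s + IDLE_W * idle_s) / 3600

# Same workload on both configurations; the SSD one finishes sooner, then idles.
print(energy_wh(3000))  # HDD config: 50 minutes busy -> 18.0 Wh
print(energy_wh(2400))  # SSD config: 40 minutes busy -> 16.0 Wh
```

A looped benchmark never lets either configuration reach that idle period, which is exactly why an unattended test undersells the SSD.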

This is a bit of a long lead-in to the idea of performance saving power. Basic or general-purpose operating systems and CPUs have been tuned for decades to wait for disk IO, to the point that I believe the OS and CPU don’t know how to run very well without long IO waits. According to most specifications, the power figures for a 2.5″ form factor SSD, say 256GB, and a 2.5″ HDD of just about any capacity are surprisingly comparable: each is roughly 3 watts active and 0.06 to 0.1 watts in the most inactive state. Different brands behave differently depending on the controller, but they’re in the ballpark.

Now, obviously, high capacity goes to the hard disk, performance goes to the SSD, power is a draw, and cost goes to the hard disk, at least to a point and when looking at individual devices. Looking at the performance specifications, an SSD can do sequential reads at 250 MB/s and sequential writes at 170 MB/s over SATA-II. The random performance is more impressive: 35,000 IOPS for random 4 KB block reads and 3,300 IOPS for random 4 KB block writes. This is for a smaller Intel X25-E SSD. The question is, how many hard disks would it take to get to 35,000 IOPS of random 4 KB reads? I come up with about 280 hard disks using a 125 IOPS average, and that’s sort of cheating (short-stroking). You could get more from a 15Krpm hard disk, but now we’re out of the ballpark in so many ways.
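A quick sanity check of that drive count, using only the figures quoted above:

```python
# How many average hard disks match one SSD on random 4 KB reads?
ssd_read_iops = 35_000  # Intel X25-E spec sheet figure
hdd_iops = 125          # rough per-drive average, no short-stroking tricks

print(ssd_read_iops / hdd_iops)  # 280.0 drives
```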

Now add up the 3 watts per hard disk, add in the space required to enclose, power and cool those hard disks, add in the cost of the hard disks, and, depending on many other factors, the license cost of the capacity of those hard disks. If you actually need the capacity of 280 hard disks, then everything’s fine. But if you need only the performance equivalent of 280 hard disks and not the capacity, then you are paying too much.
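Extending the same arithmetic with the roughly 3-watt active figure from the spec sheets (and deliberately leaving out enclosures, cooling and licensing, which only widen the gap):

```python
# Device-only power for IOPS-equivalent configurations.
hdd_count, watts_active = 280, 3.0
hdd_farm_w = hdd_count * watts_active  # 840 W of spinning disk
ssd_w = 1 * watts_active               # ~3 W for the single SSD

print(hdd_farm_w, ssd_w)  # 840.0 vs 3.0 watts for the same random-read IOPS
```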

These are some of the challenges facing HPC datacenters today. I’ve written about this somewhat in a previous blog here. Typically, high-performance storage has meant a high quantity of hard disks aggregated to perform like one fast disk. This takes up space, power and cooling, plus the other costs that come with the complexity of many devices configured to work as one. That said, I’m not saying hard disks are going away anytime soon. Probably not even in my lifetime. But one thing is for sure: there will be more blends of SSD, magnetic storage and even optical storage in the datacenter, in small businesses, in the home, and elsewhere. We are getting better at managing data, and soon we won’t even have to worry about it.


Comments (6)

Sim Alam on 17 Mar 2010 at 5:21 pm

Hi Ken,

An interesting article Ken. Does HGST have anything to say on SSD power consumption yet?

Am I reading between the lines too much in thinking you are hinting at future HDS sub-LUN tiering and automation in your last couple of sentences?

Cheers,
Sim

Vinod Subramaniam on 17 Mar 2010 at 9:21 pm

Ken

Correct me if I’m wrong. Right now, if I have a 7D+1P SSD RAID group behind an 8MP DKA pair on a USPV, since each DKA MP will max out at 5,000 IOPS, it works out to 40,000 IOPS per 7D+1P RAID group. If the workload is 100% sequential writes at a 4k block size, that works out to 20MB/sec per SSD. How can you do 400MB/sec over FC to an SSD right now on the USPV?

The point is that in most existing storage boxes the RAID controller is the bottleneck, and I cannot see how one can push an SSD to its limits.
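For reference, the arithmetic behind those numbers (the per-MP ceiling is my working assumption, as above):

```python
# USP-V back-of-the-envelope: controller-limited throughput per SSD.
mps_per_dka_pair = 8
iops_per_mp = 5_000                           # assumed DKA MP ceiling
group_iops = mps_per_dka_pair * iops_per_mp   # 40,000 IOPS for the group

block_kb = 4
group_mb_s = group_iops * block_kb / 1024     # ~156 MB/s across 7D+1P
per_ssd_mb_s = group_mb_s / 8                 # ~20 MB/s per device

print(round(group_mb_s), round(per_ssd_mb_s))  # 156 20
```

That is far below the 170MB/sec the drive itself can write sequentially.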

There are many promising applications for SSDs, though. I guess reliability will be a major factor in industry adoption.

Oracle seems to be promoting SSDs as an L2 cache and calls it Oracle Flash Cache.

One could place AIX Active Memory Shares on SSDs.

If one could use SSDs as an L2 cache within a storage device, then write-through mode during L1 cache maintenance could be avoided by accepting and mirroring writes on the L2 cache across power boundaries.
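As a sketch of what I mean (the names and the two-boundary rule here are purely my illustration, not any vendor’s design):

```python
# Illustrative only: prefer mirrored SSD L2 cache on two independent power
# boundaries over synchronous write-through when L1 DRAM cache is unavailable.
def handle_write(block, l1, l2_mirrors, disk):
    if l1.available:
        l1.write(block)                  # normal fast path: buffered in DRAM
        return "cached-l1"
    # Pick one L2 SSD per power boundary; we need two independent copies.
    by_boundary = {}
    for m in l2_mirrors:
        by_boundary.setdefault(m.power_boundary, m)
    if len(by_boundary) >= 2:
        for m in list(by_boundary.values())[:2]:
            m.write(block)               # persisted on flash before the ack
        return "cached-l2"
    disk.write(block)                    # last resort: write-through to disk
    return "write-through"
```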

In most modular arrays, asynchronous long-distance replication is an issue owing to limited cache and controller capability. I guess SSDs could be used here as some kind of store-and-forward pool.

The possibilities are endless; however, SSD reliability metrics are something missing from most vendor websites.

– Vinod

Ken Wood on 17 Mar 2010 at 11:36 pm

Hi Sim, thanks for your comment and question.

As you may already know, HGST and Intel have been in a joint development program to design and build enterprise-class SSDs for the enterprise markets. These are SAS- and FC-attached SSDs that would complement existing enterprise-class HDDs. According to HGST, these devices could require as much as 90% less power than a typical 3.5″ enterprise-class HDD spinning at 15Krpm.

Sim on 18 Mar 2010 at 2:14 am

Thanks for the response Ken. The power savings and performance are intriguing, but as Vinod points out, can existing arrays handle the IOPS and throughput of SSDs? And what about reliability?

Has HDS got any feedback from the customers that have deployed SSDs?

Ken Wood on 18 Mar 2010 at 10:57 am

Good to hear from you again Vinod.

Let me start by backing into your comments. I violently agree with you that today’s SSD technology will change the way we architect storage and design storage solutions. It also puts a new hammer in our collective toolboxes for solving problems. However, this goes beyond just storage. Obviously, SSDs have been around for a long time, but only recently have the economics and form factor put this technology into the mainstream storage arena. Computing models will also have to change. As I briefly stated (and this is the topic of a future blog of mine), computing and memory-management architectures and operating systems will have to accommodate a larger/longer stream of single-digit-microsecond responses from mass storage devices. I say “longer stream” because our cached systems easily deliver short bursts of this today, but at some point the server always got a couple of milliseconds of disk wait as a breather. This is the realm of general-purpose uses versus the hybrid computing approaches already adopted in the HPC space.

As for SSD reliability, that’s why Hitachi and Intel are jointly developing enterprise-class SSDs. It’s a funny road for SSDs, but not a new one. SSD storage used to be an EXTREMELY high-end storage device in the old days (supply your own definition of “old” here). You could use “exotic solution” here as well. But with today’s SSDs, the form factor, capacity, economics and power consumption attributes shot this technology all the way down to the mobile device markets and implied a consumer-like label. Now it’s bubbling back up to the server, SAN and enterprise space. This path, while crooked, isn’t new. I still remember when using workstation-class SCSI disks for mainframe storage was dismissed as a joke. Anyway, I’ll dig into what defines an enterprise-class SSD with my colleagues at HGST and save that for another blog. Needless to say, if Hitachi is developing an enterprise-class SSD with Intel, the outcome will be what you would expect from a device classified as “enterprise”.

Vinod Subramaniam on 19 Mar 2010 at 10:18 am

Ken. Thanks for the response.

It looks like more and more SSD manufacturers are working on moving RAID controller intelligence onto the SSD itself; data placement logic and monitoring logic are some examples. It won’t be long before some part of the XOR calculations is done on the SSD itself.

– Vinod
