Multi-Core Once More
by Michael Hay on Apr 15, 2009
Barry, Barry, Barry. Given your use of "prescient," I think you might have read Dune recently. Anyway, I want to get back to a point from my previous posts on programming for multi-core processors. The notion Barry asserts, that Hitachi is somehow implementing an archaic storage platform, is pure rubbish. Even Intel and AMD recognize a need for special-purpose processing in the form of FPGAs. An article over at The Register shows how XtremeData is taking advantage of FPGAs that are socket-compatible with Opteron and Xeon processors for more advanced computing tasks in the high performance computing space. I really encourage everyone reading this to check out what is going on quietly in the FPGA space, as it is a revolution that will impact high performance computing applications.
Another area that is particularly hot is the use of ASICs in high performance computing applications. IBM's BlueGene/P system makes use of two kinds of ASICs: one is a compute ASIC and the other is a link ASIC. (IBM's Redbook on programming for this platform can be found here.) I could go on and on, but advances in the supercomputing field run about 5-10 years ahead of what happens in the business compute space. Since storage systems are, in my mind, really complex special-purpose compute devices, I would say that through the use of hybrid systems (those that use general purpose processors, FPGAs, and ASICs) Hitachi actually is right out there with the best of them, implementing advanced technologies. So Barry, maybe you should be reading more than Dune and EMC's marketing materials.
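To make the hybrid-versus-multi-core trade-off concrete: the kind of work a storage controller does, such as RAID-5-style XOR parity, is exactly the kind of task that can either be offloaded to an ASIC/FPGA or spread across general-purpose cores. Here is a minimal Python sketch, purely illustrative and not any vendor's implementation, of fanning parity calculation out across commodity cores:

```python
# Illustrative sketch only -- not Hitachi's or EMC's code. It shows how a
# RAID-5-style XOR parity calculation, the sort of task a storage array
# can offload to an ASIC/FPGA, also parallelizes across commodity cores.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def xor_parity(stripe):
    """XOR the data blocks of one stripe together to get its parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripe))

if __name__ == "__main__":
    # Four stripes, each holding three 4-byte data blocks (toy sizes).
    stripes = [
        [bytes([i, i + 1, i + 2, i + 3]),
         bytes([(i * 2) % 256] * 4),
         bytes([0xFF, 0x00, 0xFF, 0x00])]
        for i in range(4)
    ]
    # One stripe per worker: general-purpose multi-core doing "storage" work.
    with ProcessPoolExecutor() as pool:
        parities = list(pool.map(xor_parity, stripes))
    print([p.hex() for p in parities])
```

The same XOR stream is trivially implemented in an FPGA or ASIC datapath, which is why both camps can plausibly claim their architecture handles it well; the argument is really about cost, volume, and flexibility, not feasibility.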
Comments (6)
Just because everyone else remains mired in the long-standing practice of costly custom hardware doesn’t mean that it’s the way of the future.
Intel continues to drive Moore’s law, reducing the cost of MIPS exponentially – software that can take advantage of the performance benefits of their multi-core processors can thus innovate and adapt faster than any custom-hardware-based solution.
The fact that your own USP-V hardware hasn't changed in over 4 years, while EMC has delivered both DMX4 *AND* V-Max in that same timespan, is evidence of the benefits of agility.
Going forward, I expect Intel multi-core will enable V-Max to methodically extend its lead as others struggle to deal with “the confinements of the backplane.”
Barry, I think that you are completely misunderstanding the logic I'm putting forward. My point was that pinning all hopes on multi-core systems is doomed from the beginning. I would guess that if you pick apart the V-Max and the DMX you'll find ASICs for things like the Fibre Channel protocol, perhaps a Tachyon chip or two or three. If you are stuck on the backplane point, perhaps you should go take a look at Cisco's technology, Intel's blade-server reference designs (which are based on backplane approaches), etc. Further, Hitachi long ago removed the limitations of the backplane with UVM and virtualization. Basically, with externally attached storage we are distributing things like data movement, LUN formatting, RAID, caching, etc. Where is EMC in the mix of this? The answer is nowhere. By the way, how are EMC WideSky, Infoscape, and ILM 2.0 doing, you know, the products of EMC's fantastic marketing machine?
Gee, so you’ve finally gotten USP-V clustering to work, huh?
Hitachi has been promising USP-V clustering since before the GA of the USP-V…I had heard just last week that the scheduled beta was cancelled yet again.
Yes, EMC does use off-the-shelf, commodity components in their arrays, including the Tachyon and other interface controllers. But where DMX had 7 custom ASICs of EMC design, V-Max uses only one. Everything else enjoys the lower costs born of higher volume.
But I have no idea why you think Widesky and Infoscape have anything to do with multi-core processor discussions…or why you find it necessary to insult EMC’s marketing department…
Perhaps you’re just a tiny bit jealous?
I can see why: some HDS Brainiac thought last Tuesday would be a good day to make the EARTH SHATTERING announcement of Rev 2.0 of your VMware SRM adapter.
Apparently the HDS marketing machine really didn’t want anyone to notice, huh? I mean, it’s not like EMC didn’t tell everybody that something big was happening on April 14th…
I guess we should just admit that this multi-core discussion is a point we disagree upon. EMC has the skill and expertise to leverage multi-core-based massively parallel I/O processing, and Hitachi chooses a different path. It is what it is…
But let’s remember this conversation – I suspect I’ll get to use it again in a few years when your engineers finally figure out how to copy EMC so that you too can leverage the Moore’s Law curve and the fruits of all those brilliant engineers over at Intel.
Moore’s Law applies to FPGAs too: http://www.ciol.com/Semicon/Design-Trends/News-Reports/FPGAs-and-Moores-Law/111108112450/0/. BTW, I never dissed Intel; my hypothesis was about hybrid systems. We may not agree, and that is okay.
Like I said – we agree to disagree.
That V-Max does 2x more than USP-V with only 32 processors is the real differentiator in cost, power and cooling.
Customers will see the benefits of multi-core in their wallets.
Have your fellow engineers started trying to copy V-Max yet?
[...] Whilst HDS and EMC throw rocks at each other over whether it is better to build custom parts or take things off the shelf and use custom only where required (I expect the other Barry to sit on his hands, but there are good reasons why the SVC team decided to build out of commodity parts, and I suspect they are very similar to EMC's), I think we should look beyond the hardware and look at what is coming down the line to us. [...]