These Are the Voyages of the Virtual Storage Platform
by Michael Hay on Sep 27, 2010
For much of this year I have deliberately been telling a story about hybrid computing and Hitachi’s innovative prowess, building up to today’s launch of the new Hitachi Command Suite and the Hitachi Virtual Storage Platform (VSP). Since there are so many things to talk about, I suppose that answering Sim’s question is as good a place to start as any.
So is this and your previous articles preparing us for the next gen of HDS storage that will use a hybrid computing combination of industry standard CPUs mixed with HDS FPGAs?
So yes, Sim, almost. Instead of FPGAs, our hybrid computing implementation pairs four types of Hitachi purpose-built processors with Intel processors. The most important of these we call the Data Accelerator, a dual-core, storage-I/O-specific processor. This processor, working in concert with the Intel processors, really fulfills the promise of hybrid or “many core” computing and is one of the many reasons why we can do so much with our new platform.
In short, Hitachi is keeping up with leading-edge industry trends and leaving our competitors in the dust. (Note: one recently famous storage company, which cost almost $2.4B USD to acquire, also makes a significant claim on its ASIC as providing differentiated value. Can you guess who they are?)
“Scotty can you give me more processing power?”
Since my last post — on Hitachi’s innovation in the USP-V(M) — discussed mechanisms we employed to achieve a more balanced system, I want to start this section from that point and move forward.
Pointedly, we’ve taken the next step to squeeze every last processor cycle we can out of the system. A lot of this comes from the already referenced Hitachi storage-specific dual-core processor. These processors reside in both the front-end and back-end directors and offload internal network routing and host I/O command processing from the Intel microprocessors. Thanks to their special-purpose nature, on storage I/O operations they deliver roughly 1.5x the per-core performance of a general-purpose processor.
Now, I’ll be the first to admit that if you try to use them for something more general purpose, they won’t fare as well, but we aren’t releasing a general-purpose server. (As an aside, since GPUs are now being used for applications other than displaying graphics, it is interesting to contemplate what else the Data Accelerator might be able to do.) Joining the Data Accelerators is a pool of multi-core Intel microprocessors, physically resident on the Virtual Storage Directors (VSDs); each VSD is roughly comparable to a “node,” though more like a blade server node than a rack-mount node. To achieve a more balanced system than its predecessor, we treat the microprocessors on the VSDs as a pool, parceling out tasks to them in a way very similar to how we place pages onto HDDs in a Hitachi Dynamic Provisioning (HDP) pool.
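The pooling analogy can be sketched in a few lines of Python. This is a toy illustration of least-loaded task placement across a pool of processors, in the spirit of wide-striping pages across an HDP pool; the `ProcessorPool` class and the unit task costs are invented for the example and are not Hitachi’s actual scheduler.

```python
import heapq

class ProcessorPool:
    """Toy model: parcel tasks out to the least-loaded processor,
    analogous to spreading pages across drives in an HDP pool."""

    def __init__(self, num_procs):
        # Min-heap of (current_load, processor_id): least loaded pops first.
        self.heap = [(0, pid) for pid in range(num_procs)]
        heapq.heapify(self.heap)

    def assign(self, task_cost):
        # Take the least-loaded processor, charge it for the task, return it.
        load, pid = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + task_cost, pid))
        return pid

pool = ProcessorPool(4)
placements = [pool.assign(1) for _ in range(8)]
# Equal-cost tasks spread evenly: [0, 1, 2, 3, 0, 1, 2, 3]
```

The point of the analogy is simply that no single processor becomes a hot spot, just as no single drive in an HDP pool does.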
VSDs, FEDs, BEDs, and cache modules are interconnected by the HiStar network in a fully switched infrastructure, so users can add resources to a chassis and scale up as their requirements dictate. When workload demands continue to increase, a second chassis can join the first, realizing a second dimension of scaling; then, as the workload grows further, the second chassis can itself be upgraded with resources to scale up and meet user need. To interconnect the two systems, we did not need to employ a new exotic networking technology or rely on what I referred to as the dead man walking; instead we mapped HiStar on top of standard PCIe, resulting in a high-speed, low-latency interconnect between the two systems. All of this together has resulted in some pretty stunning proof points. Here are four of them:
- Performance that is 2.4x higher than the USP-V(M)
- Increasing the number of drives by 78% over the USP-V(M) yet only increasing the footprint by one additional rack unit
- Reducing power consumption by 40% per terabyte
- Provisioning that is 50% faster (more on this last point in a minute)
“Scotty, we need you on the bridge!”
All of this power and capability without easy-to-use command and control defeats the purpose of the VSP. So let’s get back to the point on the 50% improvement in provisioning efficiency.
Well, not only have we reengineered the hardware platform, but we’ve also made dramatic improvements to bare-metal management on the VSP, to the element manager user experience and array-native CLI, and finally to the Hitachi Command Suite (HCS). The native element manager package, Storage Navigator, now includes a completely rewritten and re-architected CLI that embeds the functionality of the former RAID Manager CCI interface along with the core provisioning functions. The GUI has been remade with all of the bells and whistles you would expect and is completely congruent with the Hitachi Command Suite framework.
This is important because it maps to a key usability principle: consistency. More specifically, we don’t want users coming from HCS Device Manager to suffer a jarring shock when they need to make deep, detailed configuration changes on a specific VSP system. Since systems and element management are more than just a stunning GUI, we’ve also paid attention to other key fundamentals, such as separation of the management and data planes. For example, when a Hitachi Dynamic Tiering (HDT) policy is engaged (HDT is page-level tiering that automates moving pages of data to the most appropriate storage media, simplifying tiering and optimizing both cost and performance), and the service processor happens to be down for an upgrade, page performance metrics will continue to be collected and page movement tasks will continue to run.
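To make the page-level tiering idea concrete, here is a minimal sketch of one tiering pass: rank pages by observed I/O counts and assign the hottest pages to the fastest tier until it fills. The function name, the sample counts, and the tier capacities are all hypothetical illustrations, not the actual HDT algorithm.

```python
def plan_page_moves(page_io_counts, tier_capacities):
    """Given per-page I/O counts and ordered tier capacities
    (fastest tier first), return a page -> tier assignment in which
    the hottest pages land on the fastest tier."""
    # Rank pages from hottest to coldest by observed I/O.
    ranked = sorted(page_io_counts, key=page_io_counts.get, reverse=True)
    assignment, i = {}, 0
    for tier, capacity in enumerate(tier_capacities):
        # Fill each tier, fastest first, with the next-hottest pages.
        for page in ranked[i:i + capacity]:
            assignment[page] = tier
        i += capacity
    return assignment

moves = plan_page_moves({"p1": 900, "p2": 40, "p3": 500, "p4": 5},
                        tier_capacities=[1, 2, 1])
# p1 -> tier 0, p3 and p2 -> tier 1, p4 -> tier 2
```

The separation-of-planes point in the text is exactly that a pass like this keeps running on the array itself even while the management station is offline.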
However, we wanted to do more. Enter the completely re-architected HCS Device Manager. I won’t list the features, as you can get them from our website and from your local account manager. Instead, there are a few points that I want to cover.
For those of you who regularly read the Techno-Muse blog or have met me in person, you will know that I’m working toward a specialization in product design and usability. This reflects Hitachi and Hitachi Data Systems overall: we are really focused on user experience to improve efficiency and management scale for our customers. We’ve recently earned some design accolades, but I truly think Hitachi has moved the ball forward with HCS as a result of deliberate planning, usability testing, competitive benchmarking, and many feedback cycles with users. Maybe the telltale sign is how easy it is to add a volume via HCS. If you can remember the mantra “Which host? How many? What size?”, that is all of the information you will need to provision storage in HCS.
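The mantra maps naturally onto a provisioning call with exactly three required inputs, everything else defaulted. A minimal sketch under that assumption; the `provision` function and its parameters are hypothetical, not the actual HCS interface.

```python
def provision(host, how_many, size_gb, pool="HDP-0"):
    """Toy provisioning call mirroring the 'Which host? How many?
    What size?' mantra: only three answers are required, and the
    backing pool (a hypothetical default here) is chosen for you."""
    volumes = []
    for n in range(how_many):
        volumes.append({"name": f"{host}-vol{n}",
                        "size_gb": size_gb,
                        "pool": pool,
                        "host": host})
    return volumes

vols = provision("dbserver01", how_many=2, size_gb=100)
# Two 100 GB volumes mapped to dbserver01, pool chosen by default
```

The design point is that every parameter beyond those three answers is something the management software, not the administrator, should decide.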
As Hu noted in his blog, another important point is that we wanted our existing USP-V(M), AMS, USP, and other customers to be able to enjoy this level of usability efficiency as well. As a result, I’m pleased to say that HCS is backward compatible, with some obvious restrictions; for example, you cannot manage HDT on an AMS200 because the underlying hardware engine is not there to manage.
“I can barely hold her together!”
I’m sure there will be a lot of comparative analysis in the blogosphere, via word of mouth, and through matrices that compare “us to them.” Okay, I’m not going to go there. What I am going to say is that, from the original planning of this system and from my years tracking hybrid computing in the IT industry, I feel confident that our hardware, microcode, and management software are cutting edge.
I know that the likes of IBM, Microsoft Research, Cray, NVIDIA, AMD, and even Intel are heading down the hybrid computing yellow brick road. In some form, all are making it possible for multiple processing elements to work in concert and to scale in more than one dimension. I also know that within the systems management market there is a rethink of this class of software tied to stack verticalization. In both of these areas, I’m proud to say that Hitachi is, and will continue to be, pushing the boundaries of what is possible and imagining what is next.
Twitter Q&A, TODAY, 10:30am PT