The Storage Olympics Gets Magical
by Bob Madaio on Apr 1, 2013
To those in the storage world who rejoice in being in-the-know about the ever-shifting technology and vendor landscape in front of them, Gartner Magic Quadrants are major events in the “Vendor Olympics” that our industry can often devolve into. Now, by combining multiple disparate storage-related Magic Quadrants into one review of General-Purpose Disk Arrays (made publicly available by HDS for you, here), it seems Gartner has created the decathlon of the storage Vendor Olympics.
Midrange, High-End, NAS, Monolithic? Yup, the gang’s all here.
And while you might be a fan or a detractor of Gartner’s methodologies, it does measure two vectors of actual importance to storage customers: a vendor’s ability to execute and its completeness of vision. (Or, in my own plain English – “Can they do what they say?” and “Can they correctly anticipate customer and market needs?”) While my perspective comes squarely on the vendor side, those do seem like pretty appropriate areas to focus on.
Given the new, broader focus of this Magic Quadrant and significant industry chatter that can follow any new Gartner commentary, it seemed relevant to add some thoughts and perspective about it.
First and foremost, the overall positioning of the vendors “feels” about right. There are the expected “Leaders” (with Hitachi/HDS among them) that have built, bought or partnered their way to the top of the pack. While individual positions could be argued, I doubt there were that many surprises in the “Leaders” quadrant.
While it’s best not to fixate on the exact position of every “dot,” attention there is almost inevitable. I could argue that the unique and growing collaboration between HDS and our parent company, Hitachi, Ltd., gives us an edge in the world of information clouds and the coming machine-to-machine big data of tomorrow… but overall, being positioned as a clear leader represents us pretty well.
Note that the two competitors who were positioned ahead of us are storage pure-plays – vendors who cannot offer converged solutions based on their own compute and storage technology. HDS, on the other hand, provides our Unified Compute Platform family with both our own Hitachi-developed servers as well as those from our partner Cisco. So, having Gartner judge our storage offerings as ahead of all the server vendors and within striking distance of the pure plays is an enviable position to be in.
I’ll steer clear of calling out specific commentary about the competition, which would likely devolve into a rather unproductive event in the Vendor Olympics. Instead, I’ll offer some quick thoughts regarding Gartner’s commentary about Hitachi. You’ll note that structurally, after a brief overview of each vendor, Gartner calls out “Strengths” and “Cautions” for all vendors, which in our case seemed to neatly align around three of our product families of Hitachi Virtual Storage Platform (VSP), Hitachi Unified Storage (HUS) and Hitachi NAS Platform (HNAS). (Our entry-enterprise, unified storage system Hitachi Unified Storage VM was too new to be included in this Magic Quadrant.)
Relative to our high-end VSP, Gartner notes how it is “distinguished” due to performance and capacity scalability, proven data protection and replication capabilities, and its “widely used” virtualization function. Sounds about right. It “Cautions” that VSP will be due for a refresh “…within the next six to 12 months…”, apparently drawing on historical industry norms of high-end storage platforms getting refreshed every 3.5 years or so.
I’m not going to be breaking any news about our future high-end roadmap here, but I’m also not that surprised that Gartner’s only question is about the future, and not what customers are buying today. Our continued high-end growth and recent 5-category sweep of Storage Magazine’s 2013 Quality Awards demonstrate that our users are quite pleased with the Hitachi Data Systems technology they are offered today.
In fact, there’s been a tremendous response to how we’ve extended the value of VSP by introducing Flash Acceleration software (press release, here) and our unique Hitachi Accelerated Flash storage hardware (press release, here). Those announcements were significant, as we not only introduced a unique and specially engineered flash storage option for improved cost, density and durability, but we also fundamentally upgraded our system code to maximize performance when deployed with flash storage capacity. I wonder how many of the other Magic Quadrant leaders have done that?
As for what comes next for VSP and high-end storage from HDS, I think it’s fair to say that our continued hardware and software excellence will only be expanded in terms of the performance, scalability and functionality our customers have come to expect. We’ll continue to deepen and expand upon our leading storage virtualization capabilities in ways that will provide the efficient, flexible and always-on storage pools demanded by next-generation data centers. I’d love to say more, but you’ll need to wait a bit longer for any sneak peeks.
Switching gears, what I love most about Gartner’s commentary about the HUS platform is that it focuses on a fundamental strength – the HUS symmetric active/active controller architecture. This architecture isn’t commonplace in the industry, as the document clearly highlights. This means that the balanced and scalable performance we offer cannot simply be matched by adding some processor megahertz to a lesser architecture. When customers realize an HUS system can automatically load-balance over its block storage controllers and remove the headaches of manual LUN reassignment, conversations quickly turn away from specs and toward how we solve their challenges.
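The idea behind that symmetric active/active design can be illustrated with a toy sketch. This is a hypothetical model for illustration only, not the actual HUS controller logic: because any controller can serve any LUN, the system can shift LUN ownership toward the least-loaded controller on its own, with no administrator manually reassigning paths.

```python
# Toy illustration of symmetric active/active LUN load balancing.
# Hypothetical model only -- not the actual HUS firmware.

class Controller:
    def __init__(self, name):
        self.name = name
        self.luns = {}            # lun_id -> observed IOPS load

    @property
    def load(self):
        return sum(self.luns.values())

def rebalance(controllers):
    """Shift LUNs from the busiest controller to the least-busy one
    until no single move would narrow the gap. In a symmetric design
    any controller can own any LUN, so no manual path changes occur."""
    while True:
        busiest = max(controllers, key=lambda c: c.load)
        idlest = min(controllers, key=lambda c: c.load)
        if (not busiest.luns
                or busiest.load - idlest.load <= max(busiest.luns.values())):
            break
        # Move the hottest LUN on the busiest controller.
        lun, iops = max(busiest.luns.items(), key=lambda kv: kv[1])
        del busiest.luns[lun]
        idlest.luns[lun] = iops

a, b = Controller("ctl0"), Controller("ctl1")
a.luns = {"lun1": 500, "lun2": 300, "lun3": 200}
b.luns = {"lun4": 100}
rebalance([a, b])
print(a.load, b.load)   # loads end up roughly even across controllers
```

In a lesser (asymmetric) architecture, the equivalent of `rebalance` is an administrator reassigning LUN ownership by hand, which is exactly the headache the post describes.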
This is also why Gartner’s “Cautions” for the HUS are tough to address, because they focus on how our file and block processing is run by discrete components. This is true, as within our HUS systems our active/active symmetric block storage controllers work with our FPGA-based file modules (which directly correlate to our popular HNAS gateways) to provide access to a common storage pool. And in reality, separate file and block processing is more often the rule than the exception in unified storage systems today.
Customer interest in unified storage has centered less on controller integration specifics and more on being able to provision from a consolidated, well-utilized pool of storage and to manage file and block functions from a single toolset. For many file and block management functions, we’ve already delivered that unified experience within Hitachi Command Suite, answering a large part of the unified storage promise.
So as long as our discrete components continue to deliver leadership capabilities like scalable performance with automatic load balancing, 128TB volume sizes, 8PB single namespaces, policy-based tiering and replication, application and hypervisor integration, and screaming performance (including this new Storage Performance Council result), I’m just not sure how large an issue this really is for most customers.
Lastly, Gartner talks about Hitachi NAS Platform (HNAS). (Note: HNAS technology is also the basis of our file storage capability within our HUS 100 family and HUS VM – so the commentary applies to both.) The “Strengths” point to familiar Hitachi attributes of performance and scalability, while describing how HNAS can be a fit for big data environments and the consolidation “of multiple NAS filers.” The “Cautions” call out a lack of deduplication that “inhibits HNAS competitiveness” in certain applications.
HDS agrees that deduplication is an important requirement for today’s efficiency-focused IT customers. In fact, HDS has been shipping a new version of our HNAS software to customers for more than two months, with leadership-level primary storage deduplication at the core of its new capabilities.
No, this dedupe is not beta. It’s not a controlled release. It’s generally available with real customers and in real production deployments. Our experience with those customers is confirming what we internally expected: we have a winner on our hands.
HNAS deduplication removes many of the normal compromises of primary storage deduplication by providing all the expected efficiency improvements without sacrificing file sharing performance and scalability. We accomplish this by leveraging our high-performance FPGA-based hardware architecture and running deduplication as an automated process that does not interfere with file sharing workloads. The result is primary storage deduplication with less administration, auto-throttling intelligence and up to 90% storage reclamation.
While our dedupe capability might not have been shipping before Gartner’s cut-off date for the Magic Quadrant, it’s out in the market, available now and, if I may say so, pretty exciting. You can expect a blog post soon from my colleague Hu Yoshida expanding on the technical details of our dedupe engine.
So while I may have joined in the Vendor Olympics that sometimes surround the publishing of a new Magic Quadrant, I’ll say this… it does feel nice for those qualities to be recognized and for us to stand on that storage industry medal podium.
And rest assured, we’ll be paying close attention to what our customers need today and where they are headed so we can keep developing the best solutions for tomorrow’s data centers, because we don’t plan on stepping down off that podium any time soon.
Looks like a gold medal winner to me!