Hitachi – Redefining the Server
by Michael Hay on Apr 19, 2010
Today we announced our unified compute platform, which includes Hitachi servers and storage as well as other critical infrastructure, e.g. VM, SAN and TCP/IP networking. Most significant is the innovative management software that orchestrates all components within our Scale Units — you can see several components of Hitachi unified compute at MMS. (Note: for Hitachi, a complete system from VM through storage is something we are calling a Scale Unit. I’ll be talking about some of the concerns in the industry related to this concept as a major point of this post.) If you are at MMS I highly suggest that you check out the demonstration.

From the earlier internal demonstrations I have witnessed some pretty novel and innovative capabilities within our orchestration layer. Specifically, I’m thinking of the speed and scale of VM deployment on our Scale Units. Innovation like high-scale VM deployment is only possible by restricting the variance in configuration options so that we can take maximum advantage of all components within our platform. In short, this is the basis of the vertical integration trend in the industry, but we are a little different from the most vocal in the space: IBM and Oracle. You can see it in the announcement text, where we discuss the fact that both Hyper-V and VMware are going to be supported on the platform. There is more to it, though: we will allow customers to deploy their choice of application stack on top of our system. So unlike Exadata from Oracle, or similar efforts that IBM is undertaking, we are not offering the complete lock-in scenario that customers are currently concerned about. That customer concern about vertical integration is something I want to address in this post.
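To make the point about restricting configuration variance a little more concrete, here is a purely illustrative sketch — all names are hypothetical, and this is not the actual orchestration software’s API. The idea is that when deployments are constrained to a small catalog of pre-validated templates, an orchestrator can clone VMs in bulk instead of custom-building and validating each one:

```python
# Illustrative sketch only: how limiting configuration variance enables
# high-scale VM deployment. All names here are hypothetical, not the
# actual Hitachi orchestration API.

VALIDATED_TEMPLATES = {
    "small":  {"vcpus": 2, "ram_gb": 8,  "hypervisor": "Hyper-V"},
    "medium": {"vcpus": 4, "ram_gb": 16, "hypervisor": "VMware"},
    "large":  {"vcpus": 8, "ram_gb": 32, "hypervisor": "VMware"},
}

def deploy_vms(template_name, count):
    """Clone many VMs from one pre-validated template.

    Because every VM in the batch is identical, the orchestrator can
    pre-stage images and configure machines in bulk, rather than
    validating an arbitrary configuration per machine.
    """
    if template_name not in VALIDATED_TEMPLATES:
        raise ValueError(f"unsupported configuration: {template_name}")
    spec = VALIDATED_TEMPLATES[template_name]
    return [{"name": f"vm-{i:04d}", **spec} for i in range(count)]

fleet = deploy_vms("medium", 50)
print(len(fleet), fleet[0]["hypervisor"])  # 50 VMware
```

The design choice is the interesting part: the `ValueError` path is where the variance is refused, and that refusal is what buys the deployment speed.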
Legendary tales come from the era when the personal computer was first conceived. Altair, Apple, CP/M and others are all names and concepts from the history of computing that are now famous. In that era a single person or a small team of hobbyists could easily understand the complete architecture of a computer and tinker with it. As the industry learned and expanded, the pace of innovation accelerated, and concepts like Moore’s Law were theorized and proven. Accelerated platform development has resulted in systems that today require many thousands of people across many companies to realize. To cope with such systems, individuals and teams alike must retarget their knowledge and tinkering skills. Let me provide a concrete example. I remember a story from a former neighbor of mine, when I lived in New Orleans, about hand wrapping an electric motor to get more performance. His target was an early electric remote-controlled car he used to race against his friends. He found that by doing this really low-level tinkering he could get more bang for his buck and win races. Today if you want an electric motor, you simply head over to your local Radio Shack and pick one up. This illustrates that something that was not optimized in the past has progressed to the point where it is more than just “good enough”, and as a result hand wrapping electric motors is just not done anymore. In the same way, the personal computer architecture that was once ruled by hobbyists and tinkerers has increased in both complexity and capability such that “hand wrapping” a system is no longer required. People have retargeted their tinkering to activities like over-clocking and buying aftermarket equipment to increase the performance of their platform. (Yes, I’m sure the folks who jailbreak the iPhone or hack the PS3 so they can run Linux may have a different opinion, but this is largely a spectacle.)
For most people at most companies who have to care about really large-scale IT systems, even this kind of tinkering is just not done. Let me explain why.
These days extremely complex tasks have become both possible and easy for end users to perform. For instance, consider the case of online banking systems. As a user, your front-end web application feels like a single coherent system with fairly simple tasks, like getting your account balance. The truth is that on the back end there might be tens of systems the front end must connect to so that the information on your screen can be presented to you. The application stacks that make online banking possible are wildly complex. Larger companies have realized that making these super-complex systems and tasks very simple for their users makes or saves money, and is therefore where the value is. Getting these huge applications deployed requires spending more time on the value and less time on the deep infrastructure. It also requires a great deal of computational power and scale. Unfortunately, because IT organizations have been focused on building systems from discrete components — note that in their case the discrete components were complete servers, switches, and storage systems, not PCIe cards, processors, memory, and storage — they have been suffering from operations and maintenance costs that are spiraling out of control. Specifically, and it depends on your analyst firm of choice here, the actual operating costs are around 70% of the total cost of ownership of a system, whereas the capital acquisition costs contribute the remaining 30%. While I cannot find it in an easily reproducible way, IDC’s server tracker shows that the costs of O&M, including environmentals, continue to grow, easily outpacing and towering over the one-time purchase costs of the equipment. It is these O&M costs that are both attenuating the speed of deployment of new value-added services in large companies and at the same time forcing all of the IT vendors to begin moving in the direction of vertical integration.
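The 70/30 split is easy to sanity-check with a little arithmetic. A minimal sketch, using hypothetical dollar figures (the exact ratio varies by analyst firm):

```python
# Hypothetical TCO sketch: one-time capital cost vs. recurring O&M.
# The ~70/30 operating-vs-capital split cited above implies that over a
# typical equipment lifetime, O&M dwarfs the purchase price.

def tco_split(capex, annual_opex, years):
    """Return (opex_share, capex_share) of total cost of ownership."""
    total = capex + annual_opex * years
    return annual_opex * years / total, capex / total

# Example: $300k of equipment, $140k/year to run it, 5-year lifetime.
opex_share, capex_share = tco_split(capex=300_000, annual_opex=140_000, years=5)
print(f"O&M: {opex_share:.0%}, capital: {capex_share:.0%}")  # O&M: 70%, capital: 30%
```

Run the numbers either direction: even modest annual operating costs overtake the acquisition price within a few years, which is why vendors are attacking O&M rather than hardware price.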
However, because customers have gotten used to what I call the old definition of the server, they are now worried that the new capabilities they are requesting from their IT vendors will result in vendor lock-in.
To me the old definition of the server is the set of components you would usually place in a standard 19-inch rack connected to the core infrastructure. With vertical integration in full effect, however, the new definition of the server is the 19-inch rack with all of the components built in, including some portion of the infrastructure. Customers are starting to embrace this fact and are beginning to purchase completely populated 19-inch racks, but they are concerned about their vendor putting the screws to them on price and service. Some of this is to be expected, because there is a change afoot in the industry and there are always skeptics of the process. These skeptics are partially there to govern the process of change so that everything which should be considered about the change is considered. To these skeptics, I think it is time that we engage and figure out how to resolve your concerns related to vendor lock-in. Let’s work together to figure out where the target of vendor lock-in needs to be directed. I think that what we’re announcing today is, in some sense, a “have your cake and eat it too” scenario. What I mean is that we are open to different VM technologies and the OSes that run in them, as well as your own applications that you want to run on the platform. Our goal is really to help with the out-of-control O&M costs by ensuring the quality of the newly defined server, and to let you pick what you want to run on the platform.
The demonstration of the Orchestration software starts today. Below is a screenshot of the Orchestration software layer so you can get an idea of the UI.