Turning Concept Into Reality
by Hu Yoshida on Sep 13, 2011
There has been quite a buzz stemming from some new directions presented at VMworld by VMware, specifically around virtual volumes. Since I was not able to attend due to customer commitments in Asia, I have asked Michael Heffernan to guest again on this blog to put in perspective some of the discussion around virtual volumes.
With VMworld 2011 behind us, there has been a deluge of blogs focused on a new future concept that VMware has created: a completely different approach to how a virtual disk is mounted on storage. VMware calls these new concepts I/O Demultiplexers, capacity pools and VM volumes, and has proposed that all storage vendors build a prototype of this concept using a new API.
BUT (and it is a big BUT) this is much more than just an API. It requires a radical change to the design, introducing the concept of a storage container and modifying the control path, which raises many considerations for a storage vendor. The process by which virtual volumes are created and managed, and by which hosts access the VM metadata residing on these new vVols, will be completely new.
Now, in doing this, many other considerations must be taken into account, such as queue depth, transport protocol, cache utilization, thin provisioning, and perhaps I/O profiling and the types of media to be used (SSD, for example).
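To make the conceptual shift concrete, here is a minimal, purely illustrative sketch of the model described above: a storage container (capacity pool) exported by the array, holding one virtual volume per VM object, rather than a single LUN formatted with a shared file system. All names here (`StorageContainer`, `VirtualVolume`, `create_vvol`) are hypothetical and are not the actual VMware API.

```python
# Hypothetical model of the vVol concept; names are illustrative, not VMware's API.
from dataclasses import dataclass, field

@dataclass
class VirtualVolume:
    """One array-managed volume per VM object (config, data disk, swap, ...)."""
    vm_name: str
    role: str       # e.g. "config", "data", "swap"
    size_gb: int

@dataclass
class StorageContainer:
    """A capacity pool exported by the array; conceptually replaces a LUN + shared file system."""
    name: str
    capacity_gb: int
    volumes: list = field(default_factory=list)

    def used_gb(self) -> int:
        return sum(v.size_gb for v in self.volumes)

    def create_vvol(self, vm_name: str, role: str, size_gb: int) -> VirtualVolume:
        # The array, not the hypervisor's file system, tracks each VM's volumes.
        if self.used_gb() + size_gb > self.capacity_gb:
            raise ValueError("container out of capacity")
        vvol = VirtualVolume(vm_name, role, size_gb)
        self.volumes.append(vvol)
        return vvol

# Example: two per-VM volumes carved out of one container.
pool = StorageContainer("gold-pool", capacity_gb=1000)
pool.create_vvol("web01", "config", 1)
pool.create_vvol("web01", "data", 40)
print(pool.used_gb())  # 41
```

The point of the sketch is only that the unit of management moves from the datastore down to the individual VM volume, which is what makes per-VM array features (replication, tiering, thin provisioning) possible.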
Hitachi is a member of this VMware API Program for I/O Demux (vVOL), and has been since its beginning. We have invested, and will continue to invest, in R&D with VMware engineering on this project and future projects. However, at Hitachi we do not treat R&D lightly. Prototyping is done with design verification, a key phase of our product development lifecycle, in which we also conduct thorough tests to prove aspects of the design or concept, in this case the one VMware proposed. Through this process we identified many items needing extra attention when using this API, specifically around removing single points of failure and minimizing software-based drivers or services running in a host. Our VSP storage array can currently support up to 64,000 volumes comfortably, which would equate to 64,000 VMs in an I/O Demux implementation. We therefore want to ensure that when we build something like I/O Demux, we can fully leverage true back-end performance, coupled with storage virtualization, HDP/HDT technologies, disk technologies and replication, without placing any data at risk of corruption or loss.
One of our core values is to provide data integrity and data protection. In this design we will use key features of the VSP’s internal switch matrix and custom ASIC, building intelligence into the microcode. We will also ensure that any prototype we build has a robust and solid design, especially for scalability and performance. It is not our practice to demo publicly before a product is ready for GA and completely certified.
So, in the case of VMworld 2011, we chose not to participate in session #VSP3205. Once we have gone through extensive R&D, explored all potential QA scenarios, and built several prototypes, we will select the best possible configuration and design for our customers.
Additionally, we will ensure with VMware directly that we certify all components and use the combination of the ESX API framework and microcode enhancements inside the storage array.
Having been a customer of Hitachi for many years and now working on this side of the fence with our teams in Japan, I can honestly say that we definitely know what we are doing, and I have a lot more appreciation of our QA processes. Hitachi treats customer data as priority number one.
I do commend VMware engineering and product management – especially VMware storage product manager Vijay Ramachandran and VMware principal engineer Satyam Vaghani – for taking this leap forward, as it will revolutionize how we will design and provision VMs to storage. This will require a lot of effort from both VMware and storage partners to ensure we collaboratively build something that is sellable, certified and supported.
There will be many unanswered questions as we continue this joint effort, and once we spend more time in the R&D labs I will probably have a lot more to say. One question I can already anticipate: what happens to NFS (a conventional file system) and VMFS (VMware’s own clustered shared-disk file system) for mounting VMs? Do we run these file systems in parallel, or do they go away completely? And then the debate begins… NFS vs. VMFS vs. I/O Demux…