Integration is one thing…doing it intelligently is another

by Hu Yoshida on Feb 9, 2011

By Michael Heffernan

Yesterday, we announced VAAI certification of the Hitachi Virtual Storage Platform, both for the product itself and when it is used with external storage. As stated in yesterday's accompanying blog post, we are the only storage vendor to support all three of the VAAI "primitives" on a virtualized storage platform. We think it's a big deal, but what does this really mean?

First and foremost, it means that the Hitachi Virtual Storage Platform (VSP), acting as a storage virtualization device ("SVD"), supports 100+ external storage devices, all of which inherit the three VAAI primitives when virtualized behind the VSP. These APIs are available in vSphere 4.1 and enable ESX hosts to offload storage processing to the storage system.

There are three specific primitives:

  • Full Copy, which enables the storage array to make full copies of data within the array without having the ESX server read and write the data. This speeds up VM cloning and Storage vMotion, since the copy runs inside the storage system.
  • Block Zeroing, which enables storage arrays to zero out a large number of blocks to speed up provisioning of new VMs.
  • Hardware Assisted Locking, which provides an alternative means of protecting the metadata of VMFS cluster file systems, thereby improving the scalability of large ESX server farms sharing a datastore. This is the primary benefit for customers. In versions prior to vSphere 4.1, the ESX kernel managed the VMFS clustered file system and locked the entire LUN (using SCSI reserve) whenever virtual machines were being accessed. The modification to the ESX kernel, in conjunction with the matching modification to the array microcode, offloads this locking to the storage system as an atomic operation on individual blocks (see the sketch after this list).
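
To make the Hardware Assisted Locking point concrete, here is a minimal Python sketch of the difference between a whole-LUN reserve and an atomic test-and-set on a single lock record. It illustrates the locking semantics only; real hosts issue SCSI-2 RESERVE/RELEASE or the ATS compare-and-write command, and the class and block layout below are invented for the example:

```python
import threading

class ToyLun:
    """Toy model of a shared LUN holding VMFS lock records.

    Illustrative only: real hosts use SCSI-2 RESERVE/RELEASE or the VAAI
    compare-and-write (ATS) command, not Python locks.
    """

    def __init__(self, nblocks):
        self.blocks = [b"free"] * nblocks
        self.lun_lock = threading.Lock()   # stands in for a whole-LUN reservation
        self.block_locks = [threading.Lock() for _ in range(nblocks)]

    def reserve_and_update(self, blk, old, new):
        """Pre-vSphere 4.1 style: reserve the ENTIRE LUN to change one lock
        record, stalling every other host that shares the datastore."""
        with self.lun_lock:
            if self.blocks[blk] != old:
                return False
            self.blocks[blk] = new
            return True

    def atomic_test_and_set(self, blk, old, new):
        """VAAI ATS style: the array compares and writes a single lock record
        atomically, so hosts working on other records proceed in parallel."""
        with self.block_locks[blk]:
            if self.blocks[blk] != old:
                return False
            self.blocks[blk] = new
            return True

lun = ToyLun(nblocks=64)
# Host A grabs lock record 3; host B grabbing record 7 is NOT blocked by ATS,
# but would have queued behind a whole-LUN reserve.
assert lun.atomic_test_and_set(3, b"free", b"host-A")
assert lun.atomic_test_and_set(7, b"free", b"host-B")
```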

Now let’s digest this some more and put some things in perspective…

Hitachi has been giving customers the ability to externalize storage, through the USP/NSC, USP V/VM and now VSP arrays, for the past eight years. Customers have gained massive benefits using this very mature technology to virtualize storage (especially for VMware), not to mention the ability to migrate data and rescue old storage arrays from performance problems when customers overloaded them with large numbers of VMs. The fundamental point is that all of these externalized storage arrays can now leverage the fundamental benefits of Hitachi's technology. VAAI is no different.

VAAI opens the door for Hitachi to introduce the key hardware features for which we are known, and which our customers have been using for years on highly available, high-performance mainframe and UNIX systems, to the ESX host. I suppose you could say it's like matchmaking. "Hello VAAI, let me introduce you to HDP." A perfect match…

Hitachi Dynamic Provisioning (HDP), when used in conjunction with VAAI, provides a unique combination that the entire VMware infrastructure can take advantage of. HDP by itself gives a VMFS volume (a shared-disk clustered file system) the foundation to support very heavy I/O workloads, with wide striping and pools for ease of provisioning. Introducing the VAAI Hardware Assisted Locking primitive to HDP eliminates contention from SCSI reserves and lets a VMFS datastore of up to 2TB carry a heavy I/O load. Additionally, introducing the VAAI Block Zeroing primitive to HDP creates hardware thin-provisioned VMDKs with storage reclamation. All of this translates into true integration within the hardware.
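
To illustrate why Block Zeroing pairs so naturally with thin provisioning, here is a hedged Python sketch of a dynamic-provisioning pool. The page size, alignment assumptions, and allocation policy are simplifications invented for the example, not HDP internals:

```python
PAGE = 1 << 20  # illustrative page size only, not HDP's actual page size

class ThinPool:
    """Simplified dynamic-provisioning pool: a physical page backs a virtual
    page only after a real write lands on it (page-aligned ranges assumed)."""

    def __init__(self):
        self.pages = {}  # virtual page number -> bytearray backing store

    def write(self, offset, data):
        """Ordinary write: allocate a backing page on demand."""
        vpage, off = divmod(offset, PAGE)
        page = self.pages.setdefault(vpage, bytearray(PAGE))
        page[off:off + len(data)] = data

    def write_same_zero(self, offset, length):
        """Block Zeroing (a WRITE SAME of zeros): rather than physically
        writing zeros, the pool drops the mapping; unmapped pages read back
        as zeros, so zeroed space consumes no capacity (reclamation)."""
        for vpage in range(offset // PAGE, (offset + length - 1) // PAGE + 1):
            self.pages.pop(vpage, None)

    def read(self, offset, length):
        vpage, off = divmod(offset, PAGE)
        page = self.pages.get(vpage)
        return bytes(page[off:off + length]) if page else b"\x00" * length

pool = ThinPool()
pool.write(0, b"data")               # consumes one physical page
pool.write_same_zero(0, 100 * PAGE)  # "zeroes" 100 pages, allocates nothing
assert pool.read(0, 4) == b"\x00" * 4 and not pool.pages
```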

So what does this all mean to the VMware admin? Quite simply, massive benefits and massive simplification: the hardware does the work, so the admin has far less to think about.

VMware admins want simplicity, high performance, and large VMFS volumes (datastores), without having to manage software-based thin provisioning. VMware admins want to use vMotion whenever they want, without having to watch the subsystem's performance or install load-balancing software on the ESX host. VMware admins want to create clones, take snapshots, and delete VMs without worrying about performance or disk space.

So let's summarize one activity a VMware admin must perform: creating a new virtual machine disk (VMDK). The admin faces a decision. Which format do I choose: "thin", "zeroedthick" or "eagerzeroedthick"? In other words, do I thin provision in software or in hardware? I won't go into detail on each of these, but only one format really matters: "eagerzeroedthick". Why? It has no warm-up penalty, because it pre-allocates the VMDK and writes zeros to the physical disk at creation time, which guarantees VM performance. The fundamental drawback is that it consumes valuable disk space for the entire size of the VM.
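
For readers unfamiliar with the three formats, this small Python model (symbolic only, not ESX code) captures the trade-off between creation-time space and first-write penalty:

```python
class VmdkModel:
    """Symbolic model of the three VMDK formats (an illustration, not ESX code)."""

    def __init__(self, fmt, blocks):
        assert fmt in ("thin", "zeroedthick", "eagerzeroedthick")
        self.fmt = fmt
        self.allocated = 0 if fmt == "thin" else blocks  # space used at creation
        self.prezeroed = (fmt == "eagerzeroedthick")     # zeroed at creation

    def first_write(self):
        """What a guest pays the first time it writes to a block."""
        if self.prezeroed:
            return "just write (no penalty: blocks were zeroed at creation)"
        if self.fmt == "thin":
            return "allocate + zero the block, then write (warm-up penalty)"
        return "zero the block on demand, then write (warm-up penalty)"

for fmt in ("thin", "zeroedthick", "eagerzeroedthick"):
    v = VmdkModel(fmt, blocks=1000)
    print(f"{fmt:17} allocated={v.allocated:5}  first write: {v.first_write()}")
```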

Leveraging the intelligence built into the hardware (VAAI + HDP) enables an "eagerzeroedthick" VM to be dynamically allocated and ready for business without waiting for its disks to be formatted. This can save hours of delay and many processor cycles. It also eliminates the overhead VMware incurs doing thin provisioning in software. VMware admins can now let the VSP, with all of its external storage devices, use HDP + VAAI to thin provision in hardware, and save their host cycles for other work.
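
A rough back-of-the-envelope sketch shows where the savings come from; the disk size and host I/O size below are assumptions chosen for the arithmetic, not measurements:

```python
GB = 1 << 30
VMDK_SIZE = 40 * GB      # a 40 GB eagerzeroedthick disk (illustrative)
HOST_IO_SIZE = 1 << 20   # assume the host zeroes in 1 MB writes (assumption)

def host_side_zeroing(size):
    """Without Block Zeroing: the ESX host itself writes every zero, pushing
    the full disk size through HBA, fabric, and array front-end ports."""
    return size // HOST_IO_SIZE          # number of host write I/Os

def offloaded_zeroing(extents=1):
    """With Block Zeroing: the host issues one offloaded zeroing command per
    extent and the array (or HDP, as sketched above) handles it internally."""
    return extents

print(host_side_zeroing(VMDK_SIZE))  # 40960 host writes over the wire
print(offloaded_zeroing())           # 1 offloaded command
```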

And to top it all off, add Hitachi Dynamic Tiering to HDP and automate granular, page-based data movement within the VMFS volume for the highest efficiency and throughput for each virtual machine.
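
As an illustration of page-based tiering, here is a minimal Python sketch of a promote-the-hottest-pages policy. The policy, page granularity, and class names are invented for the example and are not Hitachi Dynamic Tiering's actual algorithm:

```python
from collections import Counter

class TieringPool:
    """Toy model of page-based dynamic tiering (illustrative policy only)."""

    def __init__(self, fast_capacity_pages):
        self.fast_capacity = fast_capacity_pages
        self.hits = Counter()   # page number -> access count this cycle
        self.fast_tier = set()  # pages currently on the fast (e.g. SSD) tier

    def access(self, page):
        self.hits[page] += 1

    def rebalance(self):
        """Periodically promote the hottest pages to the fast tier and demote
        the rest, so each VM's busiest data rides the fastest media."""
        self.fast_tier = {p for p, _ in self.hits.most_common(self.fast_capacity)}
        self.hits.clear()       # start a fresh monitoring cycle

pool = TieringPool(fast_capacity_pages=2)
for page in [1, 1, 1, 2, 2, 3]:
    pool.access(page)
pool.rebalance()
print(pool.fast_tier)  # {1, 2}: the hottest pages now sit on the fast tier
```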

With this announcement, HDS has the fundamental formula to support a truly scalable data center infrastructure for virtual servers, with the ability to protect customers' existing storage assets.

– Michael Heffernan, global product manager, Server Virtualization, Hitachi Data Systems

Michael Heffernan is the global product manager for Server Virtualization at Hitachi Data Systems. In this position, Michael ensures that the core technologies of Hitachi storage and servers are aligned with hypervisors in order to create integrated technology solutions that benefit Hitachi customers. Prior to this role, Michael worked in Asia Pacific as director of Service Delivery at Hitachi Data Systems, where he leveraged the company's delivery methodology and storage service tools to deliver a superior service experience for customers.


Comments (2)

Chris M Evans on 09 Feb 2011 at 10:58 am

Hu/Heff

I have a few thoughts based on your posts over the last couple of days. Firstly, I see Hardware Assisted Locking (Atomic Test & Set) as the most useful feature for improving performance, followed by Block Zeroing (WRITE SAME), with Full Copy (EXTENDED COPY) the least useful. The reason I say that is that I don't believe VM cloning and vMotion are activities that go on constantly in a production system.

From reading the SCSI protocol specification, I believe that Extended Copy should work cross-array, i.e. allow a LUN to be moved to another storage device. What's your view on that? I think cross-array functionality would be much more useful.

One other thought. The expectation with all of these commands is that they will improve performance. However, what restrictions are there on the array (or ESXi) to prevent many commands being issued at the same time? Surely if an administrator throws in a request to replicate a VM 1,000 times, that will have a negative impact on array performance, and if the array is shared, it could affect other workloads. Thoughts?

Chris

[...] VMware provides many benefits in server consolidation, performance, scalability, availability, and ease of use. However, this puts greater demand on storage systems, since standalone host servers that once had their own LUNs now share a VMFS file system (datastore), which requires a single LUN to provide a solid, high-performance foundation for the consolidated VMs. The VMs must share their connections to the datastore with other virtual machines that act independently, with different peak times and different access patterns. When you throw in additional activities like cloning and vMotion, you can see that a single shared LUN is susceptible to workload imbalances, which are difficult to manage. ESX 4.1 provides VAAI to relieve many of the bottlenecks around formatting VMDK disks, cloning, and SCSI reserves, as described in a previous post by Michael Heffernan. [...]
