
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Data Center Transformation Part 2: Server Transformation

by Hu Yoshida on Jul 13, 2010

This is the second post in my series on data center transformation. In my first post, I offered several warning signs that it is time to take action and transform your data center to be agile, sustainable, and business-oriented. We believe that achieving a meaningful level of responsiveness to changing business and economic conditions requires a fundamental transformation of the data center, one that starts by designing flexibility and efficiency into the architecture.

In this post, I want to examine one of the major transforming trends in the data center: the movement to virtual servers, amplified by the power of multi-core server technologies.

Virtual servers have solved the problem of server proliferation, in which power-hungry servers were configured for peak demand but idled at 10% utilization for most of the day. They have also improved business agility by enabling data centers to spin up new servers on demand, load-balance workloads, and perform site recovery across a pool of server resources. Today, there are more virtual servers deployed than physical servers.

Consolidating application servers used to be a difficult task, since applications had hooks into the underlying operating systems, and these hooks had to be undone in order to consolidate the applications. VMware solved that by encapsulating each application together with its operating system and stacking them as virtual machines on a single host server. The operating system and the application storage are virtualized onto a VMDK, or virtual machine disk, which is a file within a VMFS, or virtual machine file system. VMFS is a clustered file system that enables the attachment of virtual server clusters and supports vMotion, the movement of virtual machines across physical servers in the cluster. These virtual server clusters can drive hundreds of virtual machines through the VMFS. That means the storage arrays that service the VMFS must be able to scale to hundreds of times the workload they would normally see when attached to standalone servers.
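To make the encapsulation concrete, here is a minimal sketch in Python of the relationship described above: each virtual machine's operating system and application disks are just VMDK files inside a shared VMFS datastore, which is why any host in the cluster can run, or vMotion, any VM. The class, path, and host names are illustrative stand-ins, not VMware's actual data structures.

```python
# A minimal sketch (not VMware's implementation) of VM encapsulation:
# each VM's OS and application disks live as VMDK files inside a shared,
# clustered VMFS datastore, so any ESX host in the cluster can run the VM.
from dataclasses import dataclass, field

@dataclass
class VMDK:
    path: str          # e.g. "/vmfs/volumes/datastore1/web01/web01.vmdk"
    size_gb: int

@dataclass
class VirtualMachine:
    name: str
    disks: list        # the VMDK files that encapsulate OS and application

@dataclass
class VMFSDatastore:
    name: str
    vms: dict = field(default_factory=dict)   # shared by every host in the cluster

    def register(self, vm: VirtualMachine):
        self.vms[vm.name] = vm

    def vmotion(self, vm_name: str, src_host: str, dst_host: str):
        # Because the VMDKs sit on shared storage, moving a running VM
        # between hosts moves only CPU/memory state, never the disks.
        print(f"{vm_name}: {src_host} -> {dst_host} (disks stay on {self.name})")

datastore = VMFSDatastore("datastore1")
datastore.register(VirtualMachine("web01", [VMDK("/vmfs/volumes/datastore1/web01/web01.vmdk", 40)]))
datastore.vmotion("web01", "esx-host-a", "esx-host-b")
```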

Why storage arrays need to scale up with virtual servers

Hitachi Data Systems provides storage systems that can scale up to meet the increasing I/O demands of virtual servers. These include the USP V/VM with its global cache; the AMS 2000 with its active/active controllers; and High Performance NAS (HNAS) with its hardware-based NAS engine, which front-ends an AMS 2000 or USP V/VM.

While virtual servers are a major step forward in the transformation of the data center, they cannot do everything themselves. Some of the work, especially I/O, needs to be offloaded to the storage arrays in order for virtual servers to scale beyond their current limitations and increase their ROI. VMware is aware of this and has provided vStorage APIs for Array Integration (VAAI). On Tuesday, we announced the first of our storage array support for vSphere 4.1 with our AMS 2000, where we use these APIs to support the following (sketched in code after the list):
- Hardware-assisted Locking: Enables more efficient locking, at the sector level rather than the LUN level, between ESX hosts that share a VMFS volume
- Full Copy: Enables the storage array to make full copies of data within the array, without the ESX server having to read and write the data
- Block Zeroing: Enables the storage array to zero out a large number of blocks, speeding the deployment of large-scale VMs
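Here is a conceptual sketch of what these three primitives ask the array to do, written in Python rather than SCSI. The names (ArraySketch, atomic_test_and_set, full_copy, block_zero) are illustrative stand-ins, not Hitachi's or VMware's actual interfaces; the point is that in each case the heavy lifting happens inside the array, not on the ESX host.

```python
# An illustrative model of array-side offload, not a real VAAI implementation.
class ArraySketch:
    def __init__(self, num_blocks):
        self.blocks = bytearray(num_blocks)   # pretend each byte is a disk block
        self.locked = set()                   # sector-level locks (hardware-assisted locking)

    def atomic_test_and_set(self, sector):
        """Lock a single sector; the rest of the LUN stays available."""
        if sector in self.locked:
            return False
        self.locked.add(sector)
        return True

    def full_copy(self, src, dst, length):
        """Copy blocks inside the array; the ESX host moves no data."""
        self.blocks[dst:dst+length] = self.blocks[src:src+length]

    def block_zero(self, start, length):
        """Zero a large extent in one command, e.g. when provisioning a new VMDK."""
        self.blocks[start:start+length] = bytes(length)

array = ArraySketch(1024)
assert array.atomic_test_and_set(42)          # lock one sector, not the whole LUN
array.full_copy(src=0, dst=512, length=128)   # array-internal copy
array.block_zero(start=512, length=128)       # array-internal zeroing
```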

Why servers and storage must cooperate in the transformation of the data center

Hardware-assisted locking is a good example of how sharing workload between server and storage can facilitate the transformation of the data center. Without this feature, ESX hosts have to use a SCSI reserve to write to a shared VMFS volume. This locks the entire volume and impacts functions like creating virtual machines, creating templates, powering on virtual machines, growing files for snapshots, allocating space for thin virtual disks, and vMotion. By using an Atomic Test and Set (ATS) command, the storage array can lock at the sector level and leave the rest of the LUN available for access by other ESX hosts. This could improve performance by 4 times or enable 25% more virtual machine I/Os. It also means more scale-up workload for the storage array.
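A toy comparison makes the difference clear. Under the assumed workload below (eight hosts, each touching a different sector of the same VMFS volume), a whole-LUN SCSI reserve lets only one host proceed at a time, while sector-level ATS lets all eight proceed in parallel. The numbers are illustrative, not a benchmark.

```python
# Toy model: how many of N concurrent, non-overlapping metadata operations
# can proceed under each locking scheme.
def lun_reserve(hosts):
    # SCSI reserve: the first host locks the entire volume; all others wait.
    return 1 if hosts else 0

def atomic_test_and_set(hosts):
    # ATS: each host locks only the sector it needs, so operations on
    # non-overlapping sectors proceed in parallel.
    locked = set()
    granted = 0
    for host, sector in hosts:
        if sector not in locked:
            locked.add(sector)
            granted += 1
    return granted

hosts = [(f"esx{i}", i) for i in range(8)]   # 8 hosts, 8 distinct sectors
print("SCSI reserve:", lun_reserve(hosts), "of 8 hosts proceed")          # 1 of 8
print("ATS:         ", atomic_test_and_set(hosts), "of 8 hosts proceed")  # 8 of 8
```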

Are there other examples where sharing the workload is important?

Another example of where we need better cooperation is in the area of content data, which is estimated to be growing at over 121% per year. Here you would expect Enterprise Content Management (ECM) systems to be in high demand, but in fact ECM systems see less than 4% of enterprise data today. This is mainly because ECM solutions do not scale. They try to do everything within their own proprietary stack, including ingestion, indexing, storage, refresh, retention policies, lifecycle management, protection, retrieval, dissemination, and so on. As a result, very few ECM solutions can scale beyond tens of terabytes, when they need to be scaling to petabytes, and eventually exabytes.

The only way to solve this problem is to offload some of the storage and management functions to intelligent storage systems so that ECM can concentrate on the content and scale beyond its current limitations. The interface between ECM and storage must be based on open protocols, not proprietary APIs that limit the interface to a few chosen vendors. One popular content storage system uses a hash of the content as an address into the system, which locks the content to that vendor's system. Hitachi Data Systems' content storage platform, HCP, provides the ability to ingest content across standard protocols and store multiple modalities of data (files, documents, email, PACS images, etc.) in a common repository with safe multi-tenancy. If ECM vendors could provide open APIs like VMware does, and offload more of the workload to storage systems, we could go a long way toward addressing the explosion in content data.
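To illustrate the lock-in problem, here is a simplified sketch contrasting the two addressing models. Both "stores" are plain Python dicts standing in for real systems; the hash-based scheme is a generic content-addressing example, not any specific vendor's implementation.

```python
# Content-addressed storage vs. open-protocol naming, in miniature.
import hashlib

proprietary_store = {}   # content-addressed: the key is a vendor-computed hash
open_store = {}          # open-protocol: the key is an ordinary path/name

def ingest_content_addressed(content: bytes) -> str:
    # The address IS the hash; retrieving the content later requires the
    # vendor's addressing scheme, tying the data to that vendor's system.
    address = hashlib.sha256(content).hexdigest()
    proprietary_store[address] = content
    return address

def ingest_open(path: str, content: bytes) -> str:
    # A stable, human-readable name that any standard protocol client
    # (HTTP, NFS, CIFS) could resolve, independent of one vendor's API.
    open_store[path] = content
    return path

doc = b"radiology report #1234"
print(ingest_content_addressed(doc))                   # opaque 64-char hash
print(ingest_open("/tenantA/pacs/report-1234", doc))   # portable, readable name
```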

Hopeful signs of better cooperation

While VMware is owned by a storage company, they are to be commended for opening up their APIs to other storage vendors. This cooperation between application, system, and storage vendors in sharing the workload is required for data center transformation. We are beginning to see more of this cooperation from vendors like VMware, Microsoft, Symantec, and Oracle. There is also good progress being made through SNIA and ANSI T10. Vendors are coming to the realization that no one can do everything themselves. Data center transformation will take all of us working together.


Comments (7)

Sim Alam on 13 Jul 2010 at 10:01 pm

Hi Hu,

It is good to see HDS being early to announce tighter integration with VMware. Are there any time frames for when the VAAI features will be available for use with AMS2000 and USP V/VM?

Cheers,
Sim

Brian on 19 Jul 2010 at 10:16 am

Hu, I really enjoyed this article. I think EMC’s unified storage systems would be a great fit for your team. EMC guarantees you’ll use 20% less raw unified storage capacity than you do with the competition, which increases your overall storage capacity and reduces your costs. EMC also has a number of great new and upcoming features that give their storage systems a leg up on the competition. Have a look at this paper, and I think you’ll agree that EMC’s unified storage systems are going to be your best option. http://bit.ly/ao57rm
Brian, EMC Social Outreach Team


Hubert Yoshida on 23 Jul 2010 at 8:13 am

Hi Sim, thanks for the response, and yes, we have been working hard with VMware to make this integration happen, as we see this as a fantastic technology enhancement that will really benefit our joint customers. We will have support for all our arrays, the first being the AMS2000 family, which is now supported at GA, followed by our enterprise arrays in the near future.

Hubert Yoshida on 23 Jul 2010 at 8:46 am

Hello Brian, glad you liked this post. I was not able to follow your link, and I am not sure what EMC means by “Unified Storage.” It appears to be listed under Celerra file storage. I think we have a better solution in our Zero Page Reclaim offering, which will study the customer’s current utilization and guarantee an amount of capacity that can be recovered based on their specific situation. Since we can use this with our USP V/VM storage virtualization controller, it can support external storage from other vendors, and it supports our HNAS file servers as well.
http://www.hds.com/services/assess-and-consult/storage-reclamation/

