
Hu Yoshida's Blog - Vice President | Chief Technology Officer


In the Storage Array or in VMFS?

by Hu Yoshida on Mar 26, 2013

VMware provides a rich menu of storage services that can enhance commodity storage systems. However, VMware can devote more cycles to the application if it offloads the cycles used for storage services to intelligent storage systems. VMware recognizes this and provides VAAI to offload services like SCSI reservation, disk formatting, and disk-to-disk data movement to storage arrays that support those APIs. Some storage services can be performed either by VMware or by an intelligent storage system, and a debate arises as to where they should be done.

A question that comes up quite often in VMware environments is: Where is the best place to do thin provisioning? Do I expect the array to provide this or do I use the thin VMDK format?

Here is a response from Francois Zimmermann, chief technologist for Virtualized Infrastructure Solutions, Hitachi Data Systems EMEA.

The short answer is: always manage thin provisioning in the storage array, not in VMFS. This is because thin provisioning introduces a requirement for effective monitoring: if I run out of capacity, then I have a VM outage. This monitoring overhead is proportional to the number of elements you have to manage. If I thin provision at the VMFS level, then I have to monitor hundreds of individual file systems for capacity utilization. If I thin provision using a Hitachi Dynamic Provisioning or Hitachi Dynamic Tiering pool, then I only need to manage and replenish a few large pools across my entire environment (up to 5PB per pool). It is much easier to replenish storage in a global pool than to extend hundreds of individual VMFS devices.
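As a toy illustration of this scaling argument (written for this post, with all names and figures invented), the sketch below contrasts watching hundreds of thin VMFS datastores with watching a couple of array pools:

```python
# Toy illustration: the monitoring burden scales with the number of
# thin-provisioned objects. Names and figures are invented.

def over_threshold(objects, threshold=0.80):
    """Return the names of objects whose utilization crossed the threshold."""
    return [name for name, used_tb, capacity_tb in objects
            if used_tb / capacity_tb >= threshold]

# Thin at the VMFS level: hundreds of small datastores, each a separate
# object to alert on and extend.
datastores = [("datastore-%03d" % i, 1.7, 2.0) for i in range(200)]

# Thin in the array: the same capacity as a couple of large HDP pools.
pools = [("hdp-pool-1", 170.0, 200.0), ("hdp-pool-2", 170.0, 200.0)]

print(len(over_threshold(datastores)))  # 200 objects to triage and extend
print(len(over_threshold(pools)))       # 2 pools to replenish
```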

The good news is that it is really easy to offload thin provisioning to Hitachi Data Systems arrays thanks to the rock-solid VAAI implementation. When I want to offload all thin provisioning processing to the storage array, I just use the “Thick Provisioned Eager Zeroed” virtual disk format. VMware uses VAAI to offload block zeroing and zero page reclaim to the storage array, which ensures that the VMDK starts thin and stays thin. The VMDK looks “fat” to VMware, but under the covers the array only allocates pages as required. The TP-STUN VAAI primitive also allows the array to surface out-of-space alarms to VMware so that the hypervisor can take action.
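For those who script this, the “Thick Provisioned Eager Zeroed” format corresponds to two flags on the virtual disk backing in the vSphere API: thinProvisioned=False and eagerlyScrub=True. Below is a minimal pyVmomi sketch of the disk spec; the connection, VM lookup and ReconfigVM_Task call are omitted, and the capacity and device wiring are illustrative.

```python
# Minimal sketch of an eager-zeroed thick disk spec with pyVmomi.
from pyVmomi import vim

backing = vim.vm.device.VirtualDiskFlatVer2BackingInfo()
backing.diskMode = "persistent"
backing.thinProvisioned = False   # "fat" as far as VMware is concerned...
backing.eagerlyScrub = True       # ...but zeroing is offloaded via VAAI, so
                                  # a thin-capable array allocates on demand

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.capacityInKB = 100 * 1024 * 1024  # 100 GB, illustrative

spec = vim.vm.device.VirtualDeviceSpec()
spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
spec.device = disk
# spec would then be passed to ReconfigVM_Task on the target VM.
```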

This raises a rather interesting point about what we mean by the Software-Defined Data Center. Our job is not to provide dumb infrastructure; our job is to provide smart, scalable infrastructure services that can be controlled from the customer’s cloud management platform via open APIs. A number of storage services are more efficient if you run the processing as close as possible to the data.

Thin provisioning in the array using Hitachi Dynamic Provisioning is a great example of this. When I configure a dynamic provisioning pool I divide the environment into two management domains:

• Tactical tasks are performed “in front of the pool.” A tactical task is anything that I would get out of bed at 3:00 a.m. to do (think of provisioning or expanding LUNs, for example). The pool manages page distribution across physical resources and hides the complexity of the physical environment. Our job is to expose open APIs for all tactical tasks to the cloud management platform so that resources can be consumed on demand.

• Strategic tasks are performed “behind the pool.” Strategic tasks are executed on monthly or quarterly cycles and include things like capacity planning and infrastructure optimization. Our job is to provide a rich toolset that includes infrastructure analytics and modeling in order to enable infrastructure architects to make more effective decisions at this layer.

The idea of hiding complexity and providing a comprehensive automation framework for tactical tasks also underpins our approach to building Hitachi Unified Compute Platform Director. In this orchestration framework we extend these principles to all elements of the stack: hypervisor, compute, network and storage... but perhaps that is a better subject for a separate blog.


Comments (4)


Cris on 31 Mar 2013 at 9:49 pm

Hu, while I agree with Francois’ logic, the argument isn’t this simple in the real world.

In many environments there is a greater level of expertise in the virtualisation layer than in the storage layer. Many admins (particularly in smaller environments) are operationally focused on VMware, and the storage environment isn’t considered daily. This matters because it leads into a point about monitoring. Monitoring is key to this discussion. When it comes down to it, regardless of the layer used or whether we are talking about proactive or reactive monitoring, the user needs to be aware that a threshold is being reached and that corrective action needs to be taken. While both systems offer monitoring, and both do a good job of it, there is a dependency on the user understanding what they are looking at. vSphere tends to be much more aggressive about alerting users to a potential problem, and it is also much more intuitive: it offers a basic traffic-light colour system that most people can relate to, and it reminds the user constantly (through the alerts tab visible on every screen). Storage alerts are less intuitive and appear less frequently when interacting with the system.

In addition, there is something to be said about resource boundaries and what happens with each method when a full condition is reached. A single datastore filling up is bad for the VMs it is serving (STUN has of course helped), but this doesn’t compare to a pool-full condition on the array, which affects all VMs residing on any LUN the pool houses.

There is still the point you made about the number of objects to manage, which is relevant, but VMware understands that admins can’t always manage storage correctly, and constructs (features) such as datastore clusters have been designed to help solve this problem. With datastore clusters, capacity is managed at the datastore-cluster object level rather than at the individual datastores; this helps get the number of managed objects down in large environments that are correctly designed.

The approach we generally use to determine where thin provisioning should be done depends on the level of monitoring as well as the operational skills in the environment. As a minimum we use a simple rule:

Level of monitoring and skill = Provisioning used

Truth Table:

VMware (High) and Storage (High) = Thin provision both (Thin/Thin)
VMware (High) and Storage (Low) = Thin provision in VMware (Thin/Thick)
VMware (Low) and Storage (High) = Thin provision on the array (Thick/Thin)
VMware (Low) and Storage (Low) = Thin provision neither (Thick/Thick)

The first situation is only common in larger environments where the resources have a mature understanding of storage. The next two are the most common, with VMware-over-storage skills being more frequent. A lot of storage managed-services customers fall into this category: the array(s) are black boxes to them. Sure, they might do regular tasks (create, expand and provision LUNs), but they don’t understand the detail and they pay for someone else to manage it. We rarely find the last situation, and when we do, we focus on educating the customer rather than going for a complete Thick/Thick solution.
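For what it’s worth, the rule of thumb encodes neatly; below is the truth table as a small Python function (a hypothetical encoding written for illustration, not any product’s logic):

```python
# Hypothetical encoding of the rule of thumb above.

def provisioning(vmware_skill_high: bool, storage_skill_high: bool) -> str:
    """Return the VMDK/array combination from the truth table."""
    if vmware_skill_high and storage_skill_high:
        return "Thin/Thin"    # thin provision at both layers
    if vmware_skill_high:
        return "Thin/Thick"   # thin in VMware only
    if storage_skill_high:
        return "Thick/Thin"   # thin on the array only
    return "Thick/Thick"      # thin provision neither

assert provisioning(False, True) == "Thick/Thin"
```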

My two cents.

Hu Yoshida on 05 Apr 2013 at 9:27 am

Hello Cris, thanks for the feedback.

I think you will get some debate from our readers about the claim of a greater level of expertise in the virtualization layer than in the storage layer without some clarification. The two are focused on different parts of the solution. If the storage administrator does his job well, the virtualization administrator should not have to worry about storage and can concentrate his attention on the virtual machines and their applications. Out-of-space reporting does not require any special skills or software. Whenever the storage array crosses a hard or soft threshold, it will fire off alarms or alerts to any standard management framework, including a storage administrator’s smartphone in the middle of the night. It is a whole lot easier for him or her to monitor and replenish a single pool than to sift through the alarms from individual VMFS volumes.

Attached is my colleague Francois Zimmermann’s response to the rest of your comments.

“I can’t really see a use case where you would ever disable thin provisioning and present thick LUNs from Hitachi storage arrays to VMware. My logic is as follows: when I create VMFS LUNs, I typically make them fairly large in order to keep the number of devices manageable. If I do this with thick LUNs, then I create islands of stranded capacity that are constrained within individual VMware clusters. If I do this with thin LUNs, then this capacity remains globally available. Here are a few comparisons:
• 20% free space in a 100TB pool of global capacity is much more useful than 20% free space spread unevenly over 50 x 2TB VMFS LUNs.
• With all pooling technologies, a small number of larger pools always results in higher utilization levels than a large number of smaller pools. A large number of smaller pools will always be filled unevenly.
• Pooling unused capacity globally will always lower the risk of service disruption due to out-of-space conditions: it is fairly likely, for example, that a few applications on a shared VMFS might throw additional extents due to an unforeseen event and grow by 100GB, but far less likely that applications will suddenly grow to consume 20TB of free capacity.
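As a toy simulation of that last point (all figures and the growth distribution are invented for illustration, not drawn from any real estate), the same 20% of free capacity absorbs random growth far better as one pool than as 50 small islands:

```python
import random

# Toy simulation: 100 TB at 80% utilization, carved either into 50 small
# LUNs or one global pool, hit by one random growth event per LUN.
random.seed(1)

TOTAL_TB, LUN_COUNT, UTILIZATION = 100.0, 50, 0.80
free_per_lun = TOTAL_TB / LUN_COUNT * (1 - UTILIZATION)   # 0.4 TB each
pool_free = TOTAL_TB * (1 - UTILIZATION)                  # 20 TB global

growth = [random.uniform(0.0, 0.6) for _ in range(LUN_COUNT)]  # TB per LUN

luns_out_of_space = sum(1 for g in growth if g >= free_per_lun)
pool_out_of_space = sum(growth) >= pool_free

print(luns_out_of_space)   # typically a dozen or more stranded LUNs
print(pool_out_of_space)   # False: the global pool absorbs the same growth
```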

We also have some great controls to ensure that you can define policies per pool on the storage array to make this safe. For example:
• You can specify an over-subscription limit that controls how far the pool can be over-committed (the default is 140%, but for VMware you will increase this as your estate grows).
• You can specify an allocation threshold that will propagate alerts and prevent additional allocations when breached (the default is 80%).
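In pseudocode, the two controls amount to a simple gate on every new allocation; the sketch below mirrors that logic (field names and sample values are illustrative, not an HDS API):

```python
# Hypothetical policy check mirroring the two pool controls above.

def can_allocate(pool, request_tb,
                 oversub_limit=1.40,     # subscribed:physical, 140% default
                 alloc_threshold=0.80):  # alert and stop at 80% allocated
    subscribed = (pool["subscribed_tb"] + request_tb) / pool["physical_tb"]
    allocated = pool["allocated_tb"] / pool["physical_tb"]
    if subscribed > oversub_limit:
        return False, "over-subscription limit exceeded"
    if allocated >= alloc_threshold:
        return False, "allocation threshold breached; replenish the pool"
    return True, "ok"

pool = {"physical_tb": 200.0, "subscribed_tb": 250.0, "allocated_tb": 150.0}
print(can_allocate(pool, 40.0))  # (False, 'over-subscription limit exceeded')
```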

You make an interesting second point about doing thin provisioning at both levels. I can see certain use cases, like VDI sparse disks in VMware View, where this may be a good fit, but in most use cases it will not improve utilization levels and will just introduce additional complexity and increase the risk of out-of-space conditions.”

Regards

Francois Zimmermann

Cris on 08 Apr 2013 at 4:41 am

Hu,

It’s not that I disagree with anything that you and Francois have said.

You are right, there will be some debate about expertise from your readers, but all your readers (like myself) are storage resources.

As VMware keeps telling us, the geo I operate in (APAC) is one of the largest adopters of virtualisation in the world. No matter which organisational category you look at (customer, partner or vendor) or even which market (segment or employment :-) ), the need for virtualisation skills heavily outweighs storage skills. Your statement “If the storage administrator does his job well, the virtualization administrator should not have to worry about storage and can concentrate his attention on the virtual machines and their applications” is somewhat orthodox.
There are many customers in the geo I operate in (some of them 2nd and 3rd generation HDS enterprise customers (USP/NSC55->USPV/VM->HUSVM/VSP)) that only manage from the virtualisation layer upwards.

Some of these customers, and many high-end SMEs (high-end modular customers), only really have virtualisation resources with storage skills and not the other way around. As I said in my first comment, operationally they work with VMware every day; storage is something that happens every once in a while, and their comfort zone is inside VMware, not the array, and this is really what it comes down to. Extremely large sites (those with multiple enterprise arrays at multiple sites) tend to have storage administrators as you’ve described, or managed services (by partners or HDS directly), but that’s just not the norm.

So again, I agree with you and Francois: the numbers add up, the logic is sound, the monitoring is all good, but we are dealing with people in less-than-perfect environments.

People will operate at the level they are comfortable with, not the one dictated by best practice or any form of logic. As consultants we need to adapt to this behaviour.

As for thin-on-thin: I’m happy Francois is talking about ‘valid use cases’ (this is a good analytical way of thinking about environments), but again, not everyone thinks this way, and many of your competitors deploy thin-on-thin by default. You would be amazed (and shocked) by some of the oversubscription ratios I’ve seen.

Hu Yoshida on 12 Apr 2013 at 9:39 am

Cris, thanks for your comments and engagement. We agree that most people will operate at the level that they are comfortable with. However, that does not lead to innovation. As infrastructure vendors, we believe it is our responsibility to provide prescriptive best practices that our users can apply to realize the full benefits of our technology in combination with other infrastructure technologies.
