Hu Yoshida's Blog - Vice President | Chief Technology Officer

Data Center Transformation Part 3: Storage Transformation

by Hu Yoshida on Jul 19, 2010

This is the third part in my series on data center transformations. My last post was on server transformation and the impact of virtual servers on the data center. In this post I will address the impact of storage transformation on the data center.

Data is at the Core of the Data Center
Data is at the core of the data center, and any effort to transform the data center must involve the movement, provisioning, access, and protection of data, all of which are provided by storage systems. Unlike processing power and network bandwidth, which can be consumed like electricity or water, storage capacity is stateful, so any transformation of the data center requires transforming that storage capacity with minimum disruption to the applications and the rest of the infrastructure. This transformation becomes increasingly difficult as more data is generated and more applications depend on access to that data. Many applications are also intertwined through the data and coordinated through scripting or large consistency groups, which further complicates things. Storage could be the biggest inhibitor to data center transformation unless it is addressed through the type of storage virtualization that can meet the requirements for transformation. I had the pleasure of discussing this last week with TMCnet’s Rich Tehrani.

Requirements for Storage Virtualization
The first requirement is ease of implementation: it should be as easy to implement as virtual servers in a hypervisor. Storage virtualization solutions that require an extra layer of management to remap physical volumes to virtual volumes, maintain a mapping table that is both a single point of failure and a vendor lock-in, violate security by cracking or proxying data packets, and require three layers of zoning in the heart of an already complex Storage Area Network add more complexity than they are worth. An enterprise storage controller can simply connect external storage through standard FC interfaces, discover the LUNs or volumes that already exist on the external storage, and present them through its global cache to the host server as though they were native storage, with all the existing enterprise functionality and performance of the storage controller. De-virtualizing external storage should be as simple as virtualizing it, so there is no vendor lock-in. Because the LUN is not remapped and the data is written back to the external LUN, the state of the data remains with the external storage; you can de-virtualize simply by disconnecting the storage and connecting it to a host server, where the LUN can be discovered and mounted.
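The no-remapping point above is what makes de-virtualization trivial. A minimal sketch in Python, assuming a toy model of a controller (this is illustrative only, not Hitachi's implementation; all class and method names are hypothetical):

```python
# Illustrative model: pass-through virtualization presents external LUNs
# 1:1 with no remapping table, so de-virtualization is just a disconnect.

class ExternalLun:
    """A LUN on an external array; its data always stays on that array."""
    def __init__(self, wwn, blocks):
        self.wwn = wwn
        self.data = {b: b"\x00" for b in range(blocks)}

class VirtualizationController:
    """Front-ends external LUNs through its cache -- no address remapping."""
    def __init__(self):
        self.presented = {}  # discovered LUNs, keyed by WWN

    def virtualize(self, lun):
        # Discover the LUN and present it as-is to the host.
        self.presented[lun.wwn] = lun

    def write(self, wwn, block, payload):
        # Writes are destaged back to the external LUN, so the state
        # of the data remains with the external storage.
        self.presented[wwn].data[block] = payload

    def devirtualize(self, wwn):
        # Nothing was remapped, so disconnecting loses no state: the LUN
        # can be discovered and mounted directly by a host afterwards.
        return self.presented.pop(wwn)

lun = ExternalLun("50:06:0e:80:aa:bb:cc:dd", blocks=8)
ctl = VirtualizationController()
ctl.virtualize(lun)
ctl.write(lun.wwn, 0, b"app data")
released = ctl.devirtualize(lun.wwn)
```

A remapping appliance, by contrast, would store the block layout in its own mapping table, and removing the appliance would strand the data.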
The second requirement is that the storage virtualization controller should be able to enhance whatever storage it virtualizes with tier 1 functionality: tier 1 storage, global cache, replication, load balancing, dynamic tiered storage, dynamic provisioning, and so on. An appliance with limited connectivity, limited cache, limited processing power, and no tier 1 storage cannot enhance the storage it virtualizes, beyond doing some limited copies and moves between external storage systems.
A third requirement is scalability: the ability to scale up dynamically to meet the consolidation demands of increasingly powerful virtual server clusters, the ability to scale out dynamically across a pool of shared virtual storage resources instead of a cluster of standalone storage silos, and the ability to extend that scale-up and scale-out capability to externally attached storage.
A fourth requirement is security: partitioning for safe multi-tenancy, separate address spaces for virtual ports that share the same physical ports, separation of control data from user data for remote maintenance, FC CHAP for end-to-end authentication, and encryption for data at rest. These are features that must be architected into the product, not added as an external afterthought.
A fifth requirement is transparency: the ability for applications to see into the virtual infrastructure, see the health of the underlying physical components that support their virtual storage, monitor their service level objectives, and track their usage trends. This requires an integrated set of software tools that can gather the data from the infrastructure, correlate it to an application or server, and present it through an easy-to-understand dashboard with drill-downs and report generation. Hitachi Data Systems provides this through the Hitachi Storage Command Portal.
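The transparency requirement boils down to gather, correlate, present. A hypothetical sketch of that pipeline (the names and metrics are invented for illustration; this is not the Hitachi Storage Command Portal API):

```python
# Gather component health, correlate each component to the applications
# that depend on it, and roll the result up into a dashboard view.

# component -> utilization metric, as gathered from the infrastructure
metrics = {"array-01": 35, "array-02": 92, "pool-A": 60}

# application -> the physical components behind its virtual storage
dependencies = {
    "billing": ["array-01", "pool-A"],
    "webshop": ["array-02", "pool-A"],
}

def dashboard(threshold=80):
    """Per-application status with drill-down to the hot components."""
    view = {}
    for app, parts in dependencies.items():
        hot = [c for c in parts if metrics[c] >= threshold]
        view[app] = ("DEGRADED", hot) if hot else ("OK", [])
    return view

status = dashboard()
```

The correlation step is the hard part in practice: without it, an application owner sees only a virtual volume and has no way to know which physical component is hurting their service level objective.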
In previous posts I listed a requirement for the management of storage virtualization to be independent of application, server, and network management. What I meant is that storage virtualization needs to be done where the information about storage is available: in the storage controller, which is the target to the host initiators (no need to proxy or crack FC packets) and where the information about cache slots and track tables for the data storage resides. I am modifying that somewhat because of the need for storage, servers, applications, and networks to work together. For instance, storage systems need to furnish providers to Microsoft VSS to provide sync points for snapshot copies, SRM adapters for VMware site recovery, reclamation providers for the Symantec write-same command, and so on. So while storage virtualization is best done in the storage controller, independent of other elements in the infrastructure, it should have application and OS awareness to support interfaces that enable better coordination between the storage and the rest of the infrastructure.

How Can Storage Virtualization Help Transform the Data Center?
Storage virtualization separates the application and server view of data from the physical storage infrastructure, so that we can change and transform the physical storage infrastructure without disruption to the application. The first thing it can do is transform your legacy storage infrastructure without the need to rip and replace. Once your FC storage is attached to the USP V or VM storage virtualization controllers and your applications are redirected to the virtual ports on the USP V/VM, your applications will be able to access their existing volumes through the high performance global cache of this virtualization engine and leverage new capabilities: high speed distance replication for business continuity, dynamic tiering for lifecycle management, wide striping and the latest high speed media for increased random performance, thin provisioning to recover the waste of over-allocation, VSS providers for synchronized snapshots, and SRM adapters for VMware site recovery or disaster recovery testing.
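Of the capabilities above, thin provisioning is the one most directly aimed at the waste of over-allocation. A minimal sketch of the idea, assuming a simple page-mapped model (page counts and class names are illustrative, not a product interface):

```python
# Thin provisioning: a volume advertises a large virtual capacity but
# consumes physical pages from a shared pool only on first write.

class ThinPool:
    """Shared pool of physical pages backing many thin volumes."""
    def __init__(self, physical_pages):
        self.free = physical_pages

    def take_page(self):
        if self.free == 0:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.free -= 1

class ThinVolume:
    """Advertises virtual_pages but maps physical pages lazily."""
    def __init__(self, pool, virtual_pages):
        self.pool = pool
        self.virtual_pages = virtual_pages
        self.mapped = set()  # page numbers that have physical backing

    def write(self, page_no):
        if page_no not in self.mapped:  # allocate on first write only
            self.pool.take_page()
            self.mapped.add(page_no)

pool = ThinPool(physical_pages=100)
vol = ThinVolume(pool, virtual_pages=1000)  # 10x over-subscribed
for page in (0, 1, 2, 0):                   # rewrites cost nothing extra
    vol.write(page)
```

After these writes only three pages are consumed, however large the advertised capacity: the over-allocated but never-written space costs nothing physical.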
Most virtualized storage systems will see an increase in performance just by sitting behind the large global cache of the USP V/VM, but if you need more performance you can wide-stripe your volumes, move them onto the tier 1 storage in the USP V/VM, or do both. If changing configurations on your existing storage is disruptive due to static .BIN file changes, as on EMC Symmetrix storage, set the configuration once and let the USP V/VM dynamically manage the configuration changes from then on. If you still have several years of useful life in your existing storage but the warranty is about to expire, use that storage as tier 3, where the expense of tier 1 maintenance is not required, and convert to time and materials. If it makes more sense to replace the older storage with greener, more cost-effective storage, you can do the migration without stopping the application.
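The aging-storage-as-tier-3 idea is a special case of dynamic tiered storage: placement follows access frequency, so hot data earns fast media and cold data ages down to the cheaper, externally attached tier. A toy placement policy (thresholds and tier names are made up for illustration):

```python
# Dynamic tiering sketch: assign each page a tier from its recent
# access rate; a rebalance pass migrates pages whose rate has changed.

def place(io_per_hour):
    """Pick a target tier for a page from its recent access rate."""
    if io_per_hour >= 100:
        return "tier1-internal"
    if io_per_hour >= 10:
        return "tier2-internal"
    return "tier3-external"       # aging arrays behind the controller

def rebalance(page_stats):
    """Map each page (page -> access rate) to its target tier."""
    return {page: place(rate) for page, rate in page_stats.items()}

plan = rebalance({"db-index": 500, "db-table": 40, "old-logs": 1})
```

Because the virtualization layer owns the mapping, the migrations the plan implies can run without the application seeing anything but its unchanged virtual volume.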
If you are converting servers or applications as part of this transformation, you can create non-disruptive clones of the data for conversion, extract/transform/load, or development test on lower cost tiers of storage, or dynamically spin up new allocations of virtual storage to support virtual servers. Not only can storage virtualization protect your applications and servers from changes in the physical storage infrastructure, it can also enable your applications and servers to change and grow dynamically.
To conclude, the Hitachi Data Systems approach to data center transformation with storage virtualization allows customers to consolidate resources, technologies, and applications, and to reduce the complexity of years of ‘bolting on’. If the requirements discussed above are met by design, you are building in future flexibility, which also greatly helps the bottom line.


