Hu Yoshida's Blog - Vice President | Chief Technology Officer


Top Ten Trends for 2011

by Hu Yoshida on Nov 30, 2010

Each year, I like to take a look at the storage industry and share my thoughts on the key trends to watch in the coming year. So, as 2010 comes to a close, here are my thoughts on the top trends for 2011 around the transformation of the data center.

Storage Virtualization and Dynamic Provisioning acceptance will accelerate as they become the foundation for cloud and for dynamic, high availability data centers. Storage virtualization, the virtualization of external storage arrays, will provide the ability to non-disruptively migrate from one array to another and eliminate the costly downtime required to refresh storage systems. Dynamic Provisioning enables storage to be provisioned in a matter of minutes, simplifying performance tuning with automatic wide striping, and enabling on-demand capacity for an agile storage infrastructure.
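To make the mechanics concrete, here is a minimal sketch in Python of how thin, wide-striped provisioning behaves. This is purely illustrative (the class names and page size are invented, not Hitachi code): the pool presents virtual capacity immediately and maps small pages onto physical disks only when data is first written, spreading those pages round-robin across every disk in the pool.

    # Hypothetical illustration of dynamic (thin) provisioning with wide striping.
    # Names and the page granularity are invented for this sketch.

    PAGE_SIZE_MB = 42  # arbitrary page granularity for the example

    class Pool:
        def __init__(self, num_disks):
            self.num_disks = num_disks        # pages are striped across all of these
            self.next_page = 0                # simple round-robin allocator

        def allocate_page(self):
            disk = self.next_page % self.num_disks
            self.next_page += 1
            return disk                       # real capacity is consumed only now

    class ThinVolume:
        def __init__(self, pool, virtual_gb):
            self.pool = pool
            self.virtual_gb = virtual_gb      # provisioned in minutes, no physical claim yet
            self.page_map = {}                # virtual page index -> physical disk

        def write(self, offset_mb, data):
            page = offset_mb // PAGE_SIZE_MB
            if page not in self.page_map:     # first write to this page: allocate it
                self.page_map[page] = self.pool.allocate_page()
            # ... write data to the mapped page ...

    pool = Pool(num_disks=64)
    vol = ThinVolume(pool, virtual_gb=2048)   # appears as 2 TB to the host immediately
    vol.write(0, b"x")                        # only now does the pool give up a page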

Closer integration of server and storage virtualization will be required to increase the adoption of data center virtualization. Server virtualization has matured beyond the cost reduction phase of consolidating print, file, test, and development servers and is now poised to support tier 1 application servers. Moving forward, supporting tier 1 applications will require integration with enterprise storage virtualization arrays that can offload software I/O bottlenecks such as SCSI reserves, and that can scale to meet the high availability and QoS demands of enterprise tier 1 applications.

Virtual tiering will be adopted for data life cycle management. Currently, virtual tiering can assign a volume to a pool of storage containing multiple tiers with different performance and cost characteristics, and it has the intelligence to move parts of that volume between tiers based on access counts. The user does not need to classify a volume and assign it to a tier of storage, nor move the volume up and down the tiers based on activity. Virtual tiering, or Dynamic Tiering, does this automatically, without classifying the volume or moving the entire volume from tier to tier.
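The mechanism can be sketched in a few lines of Python. This is a hypothetical illustration only (the tier names, thresholds, and page counts are invented): the array counts accesses per page and periodically promotes the hottest pages to the fastest tier and demotes the coldest, with no volume-level classification by the user.

    # Hypothetical sketch of page-level dynamic tiering driven by access counts.
    from collections import Counter

    access_counts = Counter()                      # page id -> I/O count this cycle
    page_tier = {p: "SATA" for p in range(1000)}   # all pages start on the cheapest tier

    def record_io(page):
        access_counts[page] += 1

    def rebalance(hot_threshold=100, warm_threshold=10):
        """Move pages between tiers based on how busy they were this cycle."""
        for page in page_tier:
            count = access_counts[page]
            if count >= hot_threshold:
                page_tier[page] = "SSD"            # promote the busiest pages
            elif count >= warm_threshold:
                page_tier[page] = "SAS"
            else:
                page_tier[page] = "SATA"           # demote idle pages to cheap capacity
        access_counts.clear()                      # start a fresh measurement cycle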

The time is right for SSD acceptance for higher performance and lower cost in a virtual tiered configuration. Since 80% or more of a volume is usually not active, only a small amount of SSDs need to be in Tier 1 to serve the active parts of a volume while the majority of the volume can reside on lower cost SAS or SATA drives. A multi-tier storage pool that contains a small amount of SSD offset with a large amount of lower cost SAS and SATA drives could cost less than a single pool of SAS drives with the same total capacity and provide 4 to 5 times the IOPs.
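As a rough worked example, here is the arithmetic in Python. Every figure below (prices, per-TB IOPS, tier split) is an illustrative assumption, not a vendor number; the point is only that a small SSD tier plus cheap capacity drives can beat an all-SAS pool on both cost and IOPS.

    # Illustrative back-of-the-envelope comparison; every figure is an assumption.
    POOL_TB = 100

    # Option A: a single pool of 15K SAS drives
    sas_cost_per_tb, sas_iops_per_tb = 2000, 600
    cost_a = POOL_TB * sas_cost_per_tb                    # $200,000
    iops_a = POOL_TB * sas_iops_per_tb                    # 60,000 IOPS

    # Option B: 10% SSD for the active data, 90% SATA for the rest
    ssd_cost_per_tb, ssd_iops_per_tb = 8000, 20000
    sata_cost_per_tb, sata_iops_per_tb = 800, 300
    cost_b = 0.1 * POOL_TB * ssd_cost_per_tb + 0.9 * POOL_TB * sata_cost_per_tb   # $152,000
    iops_b = 0.1 * POOL_TB * ssd_iops_per_tb + 0.9 * POOL_TB * sata_iops_per_tb   # 227,000 IOPS

    print(cost_b < cost_a, round(iops_b / iops_a, 1))     # True, ~3.8x the IOPS for less money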

Serial Attached SCSI (SAS) will be adopted for increased availability and performance in enterprise storage systems. Unlike Fibre Channel (FC) loops, which are used to support FC drives on older storage systems, SAS is a point-to-point protocol. FC loops require each drive on the loop to arbitrate for access to the loop, which causes contention. If a faster drive, like an SSD, is connected to the loop, it could drown out the loop so that the other drives cannot get access. Since SAS drives are 6 Gbps and most FC loops are 4 Gbps, SAS has a performance advantage with its faster speed and point-to-point access. Since SAS is point-to-point, it is also easier to identify a drive failure, as opposed to FC loops, which require querying each disk on the loop until the bad drive is found. SAS is also compatible with SATA. The only difference has to do with the ports: SAS is dual ported while SATA is single ported. In Hitachi storage arrays, SAS expanders are used as switches for the point-to-point connection. While IBM uses SAS drives in their DS8800, they connect SAS drives through FC to their controllers. The drive vendors are quickly converting to SAS for lower cost, performance, and reliability.

Small Form Factor (SFF) drives will become prevalent for their power and cooling efficiencies. SFFs are 2.5 inch drives, which consume about 6 to 8 watts of power, as compared to Large Form Factor (LFF) 3.5 inch drives, which consume about 12 to 15 watts. This yields a dramatic reduction in power and cooling, with an additional saving of floor space. Several vendors package 24 SFF disks in a drawer that is 2 U high and 33.5 inches wide. Hitachi changed the packaging on the AMS and the Virtual Storage Platform (VSP) so that the packaging is even denser. Instead of a drawer with all the drives mounted in the front, the AMS has a dense drawer with 48 drives that is 3 U high and 24 inches wide. The drawer pulls out for servicing with all 48 drives spinning. On the VSP, we have a disk module with 80 x 3.5 inch drives or 128 x 2.5 inch drives that is 13 U high and 24 inches wide. The disks are serviced from the front or from the back.
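A quick calculation shows where the drive-level savings come from. The per-drive wattages are the ranges quoted above (taken at their midpoints); the 48-drive count is just an illustrative spindle count for the comparison.

    # Rough power comparison for the same spindle count; wattages from the ranges above.
    drives = 48
    lff_watts = 13.5      # midpoint of the 12-15 W range for 3.5" LFF drives
    sff_watts = 7.0       # midpoint of the 6-8 W range for 2.5" SFF drives

    print(drives * lff_watts)            # 648 W for LFF
    print(drives * sff_watts)            # 336 W for SFF
    print(1 - sff_watts / lff_watts)     # ~48% less drive power, before cooling savings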

Cloud will be accepted as a valid infrastructure model. Although some hype will still be associated with "cloud," there will be enough proof points to validate the concept. On-ramps to the cloud will facilitate acceptance, as will management tools and orchestration layers that provide the end-to-end transparency needed to ensure service level objectives and chargeback.

Convergence in the data center will begin to take off. The convergence of server, storage and network infrastructure will make it simpler and faster to deploy applications. The use of server, hypervisor, storage, and network virtualization will be key to providing an open platform to ensure investment protection and customer choice.

Applications will require increased transparency into a storage virtualization or cloud infrastructure. Without this transparency, application users will not be able to tell whether their service level objectives are being met, determine chargeback, plan their utilization, or assess the health of their infrastructure. Management software should provide a business unit or application dashboard in which an SLO is defined and persisted across configuration changes. The dashboard should show the status of the SLO; the actual allocation in terms of disks, RAID types, and storage ports; the health of the array groups and host links; and utilization of the allocated capacity over a selectable time frame.
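As a minimal sketch of what such a dashboard would persist per application, here is a hypothetical Python record. The field names are invented for illustration and do not correspond to any HDS product API; they simply mirror the items listed above and survive configuration changes because the record is keyed to the application, not to a particular array configuration.

    # Hypothetical per-application SLO record for a dashboard; names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class ApplicationSLO:
        application: str
        response_time_ms: float                   # the service level objective itself
        allocated_capacity_tb: float
        raid_types: list = field(default_factory=list)       # e.g. ["RAID-10", "RAID-6"]
        storage_ports: list = field(default_factory=list)
        array_group_health: str = "unknown"       # health of the backing array groups
        host_link_health: str = "unknown"         # health of the host links
        utilization_history: list = field(default_factory=list)  # (timestamp, used_tb) samples

        def slo_met(self, measured_ms: float) -> bool:
            return measured_ms <= self.response_time_ms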

Remote managed services will be provided to offload the lower level monitoring, alerting, reporting, and management tasks that keep IT operations from moving to new technologies. For the past 10 years, the mandate for IT has been to do more with less, and operations staffs are overworked just maintaining more of the same. In order to transform the data center, the IT staff must find the time to train, plan, and execute. A group of IT experts operating out of a Service Operations Center using remote management tools can leverage their skills across multiple installations at a very reasonable cost and drive higher and quicker returns on asset investments.

What do you think of these trends? Are there others you see emerging?


Comments (9)

Vinay Babu on 01 Dec 2010 at 4:22 am

We might also look at replacing native RAID with new schemes, since these RAID levels have been in enterprise storage for more than 10 years. It's time to go ahead and innovate so that the overheads and constraints can be addressed.

MRR on 01 Dec 2010 at 9:08 am

Nice view of the storage industry, I do agree.

Hu Yoshida on 02 Dec 2010 at 7:47 am

What do you propose as one of the new RAIDs?

Vinay Babu on 02 Dec 2010 at 8:19 am

The new RAID technologies used by XIV, 3PAR, etc. seem promising, but they have their own disadvantages, like rebuild times and data inconsistency during multiple drive failures.
We need a RAID technology that allows adding disks to the group dynamically, since current RAID types are restricted to a limited number of disks. We now have more powerful virtualized environments with multi-core processors, and much of that processing power is underutilized, not only on native OS hardware but also on hypervisor-based hardware. We should offload some storage tasks to those clustered CPU cycles so that optimum use of the storage array can be delivered.

Ramesh Mudumba on 08 Dec 2010 at 4:31 am

I feel these are absolutely the true trends that will surface as we step into 2011 and firm up in due course. I also feel there is space for emerging trends, and exceptions from the industry, on data security in an SLO-assisted cloud storage infrastructure that supports classified I/O workloads.

David Sacks on 21 Jan 2011 at 1:04 pm

Hu,

(Disclosure: I am an IBM employee.) The section in your blog about Serial Attached SCSI (SAS) briefly mentions that the IBM DS8800 “connect[s] SAS drives through FC to [its] controllers”. You might want to be aware of the following:

o Like previous DS8000 models, the DS8800 uses internal switches to connect to drives, providing point-to-point access to individual drives, avoiding arbitration overhead, and facilitating identification of a failed drive.

o In the case of the DS8800, the switches connect FC-AL backbone paths to SAS drives.

o These FC-AL/SAS switches connect SAS drives to the system’s back-end device adapters (aka controllers or directors) via 8Gb/s FC backbone paths, providing nominally 33% faster shared path speed than systems using shared 6Gb/s SAS paths.

This information is publicly documented in “IBM System Storage DS8800: Architecture and Implementation”, IBM redbook SG24-8886. (See http://www.ibm.com/redbooks)

Hu Yoshida on 21 Jan 2011 at 5:09 pm

Thank you for your comment, David. FC-AL and switched point-to-point seem contradictory. I will check your reference.

Ripunjaya Rawat on 22 Jan 2011 at 7:43 pm

Dynamic Provisioning looks very exciting; however, I have a few doubts.

It will simply move the data across the tiers (among SSD, SATA, SAS) based on the I/O load, without knowing the data structure.

This is similar to the difference between a snapshot backup and a traditional backup of a database: a snapshot does not understand the data structure, which may be a challenge when restoring in some cases.

Please suggest!
Ripunjaya Rawat
CSC India

Hu Yoshida on 02 Feb 2011 at 2:02 pm

Hello Ripunjaya,

Dynamic provisioning is about provisioning a volume dynamically: the volume is allocated with virtual capacity, and real capacity is provisioned only when data is actually written. Dynamic provisioning also automatically stripes the data across the width of a pool of disks to provide the benefit of wide-stripe performance across many more arms than a traditional RAID group. It is not a snapshot or copy. It does not understand the data structure on the volume, which is no different from any other volume that you back up and restore.
