Additional requirements for storage virtualization: multi-tenancy, transparency, and scalability
by Hu Yoshida on May 10, 2010
SNIA defined storage virtualization in 2001 and focused on two important requirements. First was the abstraction of storage functions to enable application and network independent management of storage and data. Second was the application of virtualization to add new capabilities to lower level storage resources.
From the SNIA Dictionary
1. [Storage System] The act of abstracting, hiding, or isolating the internal function of a storage (sub) system or service from applications, compute servers or general network resources for the purpose of enabling application and network independent management of storage or data.
2. [Storage System] The application of virtualization to storage services or devices for the purpose of aggregating, hiding complexity or adding new capabilities to lower level storage resources.
There are three other requirements that Hitachi believes are important for a viable storage virtualization solution: safe multi-tenancy, transparency, and scalability.
Safe multi-tenancy enables multiple applications to share a common pool of virtualized storage resources without the danger of exposing their data to unauthorized access or having their service level objectives degraded by someone else's bad behavior. This becomes increasingly important as virtualization is extended into the cloud.
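To make the idea concrete, here is a minimal sketch of safe multi-tenancy in a shared pool. The class, method names, and quota model are all hypothetical illustrations, not a Hitachi interface: each tenant gets a guaranteed quota (so one tenant's growth cannot erode another's service level objective) and a private volume namespace (so one tenant can never address another's data).

```python
# Hypothetical sketch: a shared storage pool with per-tenant isolation.
# All names and the quota model are illustrative, not a real product API.

class SharedStoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        # tenant_id -> {"quota_gb": ..., "used_gb": ..., "volumes": {...}}
        self.tenants = {}

    def add_tenant(self, tenant_id, quota_gb):
        # Quotas are hard guarantees: the pool refuses a tenant it
        # cannot back with physical capacity, so one tenant's growth
        # cannot starve another's service level objective.
        committed = sum(t["quota_gb"] for t in self.tenants.values())
        if quota_gb > self.capacity_gb - committed:
            raise ValueError("pool cannot guarantee this quota")
        self.tenants[tenant_id] = {"quota_gb": quota_gb, "used_gb": 0, "volumes": {}}

    def allocate(self, tenant_id, volume, size_gb):
        t = self.tenants[tenant_id]
        if t["used_gb"] + size_gb > t["quota_gb"]:
            raise PermissionError("quota exceeded")
        t["volumes"][volume] = size_gb
        t["used_gb"] += size_gb

    def read(self, tenant_id, volume):
        # Access is scoped to the requesting tenant's own namespace:
        # tenant A can never resolve, let alone read, tenant B's volumes.
        return self.tenants[tenant_id]["volumes"][volume]
```

In this toy model, an allocation beyond quota raises `PermissionError` and a cross-tenant read fails because the other tenant's volumes are simply not visible in the caller's namespace, which is the essence of "safe" sharing.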
Transparency enables applications to see into the virtualized storage space: the status of their service level objectives, the physical storage actually allocated to their requests, the current health of their storage resources, and the trend line of their storage usage. This transparency assures the application that it is receiving the right level of service, makes it aware of any problems with the physical infrastructure, and lets it track its storage utilization for future capacity planning.
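The transparency report described above can be sketched as a small data structure. The field names and the report shape are assumptions for illustration only; the trend line is a simple least-squares slope over recent usage samples, which is one plausible way to surface growth for capacity planning.

```python
# Hypothetical sketch of a transparency report: what an application might
# query about its slice of virtualized storage. Field names are illustrative.

def usage_trend(samples_gb):
    """Least-squares slope (GB per sample interval) over usage samples."""
    n = len(samples_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gb) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_gb))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def transparency_report(allocated_gb, used_samples_gb, healthy):
    return {
        "allocated_gb": allocated_gb,      # physical capacity backing the virtual volume
        "used_gb": used_samples_gb[-1],    # current consumption
        "health": "ok" if healthy else "degraded",
        # Growth rate feeds future capacity planning.
        "growth_gb_per_interval": usage_trend(used_samples_gb),
    }
```

For example, a tenant whose usage samples are 100, 120, 140, 160 GB would see a growth rate of 20 GB per interval and could plan capacity accordingly.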
Scalability is becoming a much greater requirement as we see servers scaling up with multi-core processor technology, filling up with multiple virtual machines, and increasing network bandwidth with 8 Gb/s FC and 10 Gb/s FCoE. Storage will have to scale up as well as scale out even as it scales in depth with storage virtualization. In addition to the external pressures from server workloads, storage systems will have to scale internally to support additional functions like dynamic provisioning, tiering, migration, and distance replication. The only way to address the scale-up and scale-out demands of storage virtualization is to provide a large, tightly coupled multiprocessor as the engine for storage virtualization.
In summary, storage virtualization will need to support the following requirements to address current and future storage and application demands.
1. Application- and network-independent management of storage infrastructure
2. Enhancement of existing storage assets with the latest enterprise storage functions
3. Safe multi-tenancy to leverage shared storage resources across multiple applications
4. Transparency to provide applications with the ability to track their service level objectives
5. Scalability to meet increasing peak demands
In future blog posts, I will show how Hitachi is addressing these requirements.
I really would like to hear more about this topic; I have been working with the XP StorageWorks for a while now and it has proved to be one of the best mega carriers in the market today, but I really got lost with the transparency requirement for storage…. Virtualization is a requirement and it means abstracting the storage resources from the application level; isn't storage transparency the complete opposite?
Using the word transparency to describe an application's ability to track its SLAs is a bit misleading, but application-level integration with storage systems is a must and has become a derivative…. Having a block-minded storage system that can't sense application throttling needs is just not enough these days.