The economic impact of 3D scaling
by David Merrill on Sep 27, 2010
During the last three weeks, I have worked in Shanghai, New York and London. Everywhere, it seems, I hear a common theme from our customers that has something to do with the “transformation of the data center.” The theme has a slightly different twist depending on the customer, but there is a common thread in what this new vision contains:
- Flexibility with applications, licensing and provisioning rules
- An interest in paying for software on a per-use subscription rather than buying licenses outright
- Flexible infrastructure — servers and storage where again capacities are billed based on use
- More blurring between servers and storage, and even the elimination of the network between the two (see PCI Express)
- Less focus on viewing and measuring storage capacity, and more on I/O and application performance
- Scale-up and scale-down demands to meet peaks and valleys in workload
- More automation in the placement, construction and lifecycle protection of data
I recommend a read of Hu Yoshida’s 10-part blog series on Data Center Transformation for more details and insight.
Some have labeled this transformation as “cloud,” and there is some merit in that, with local private and remote/shared cloud offerings. I continue to spend a good amount of time understanding and developing the economics of first-generation storage and compute cloud architectures (Hadoop, Azure), and can see different inflection points with regard to scale, cost, flexibility, protection and management. This new data center transformation cannot sacrifice one dimension for another.
For many, a new/transformed data center will demand scalability of capacity. It will also demand performance within servers/apps, storage and networks, while maintaining high availability, simplifying management and delivering lower TCO.
This week, HDS announced a new line of storage along a continuum of virtualization, high availability and higher performance that will be a key element (not the only element) in the transformed data center. The Hitachi Virtual Storage Platform (VSP) offers high-density, high-performance storage with massive frame consolidation potential and lower overall environmental requirements, along with the continued features of thin provisioned volumes and externally virtualized heterogeneous arrays. The steps toward a transformed infrastructure are noted below:
1. This architecture can accommodate consolidation – at very high performance. Years ago, we loved to push newer, faster, bigger boxes and would justify them based on the ability to consolidate other boxes. Footprint reclamation is needed now more than ever. Not only can capacity and data storage be consolidated, but the front-end performance is ideal for larger server consolidation and helping with VM sprawl. This new system is really optimized for local private (enterprise-class) cloud architecture.
2. A single platform for different workloads and connections. Part of the reason that we see server and array sprawl is the need to provision storage on different arrays for mainframe, iSCSI, NAS, lower-tier SATA, etc.
3. Extending the virtualization play. If I count right, this is HDS’ fourth-generation virtualization system, and the benefits of storage virtualization are winning more converts among analysts and customers all the time.
4. Precision in tiering. With sub-LUN striping, there is a new precision in tiering that will change the way disks and LUNs are presented. No longer is your storage pool allocated to a single type of disk; these new pools can include drives of various types (SSD, SAS, SATA). The resulting economics are impressive on a price-per-tier level (at performance/capacity points).
5. Environmentals. The press release notes that power, cooling, floor space and weight are all reduced with the VSP. For my customers in Asia, where the data center is on the 30th floor of a high rise, this new solution will bring a lot of carbon emissions relief to the GHG budget owners, as well as those with offices on the 29th floor.
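The price-per-tier economics in point 4 boil down to a capacity-weighted average. Here is a minimal sketch of that arithmetic; the tier names, capacity splits and per-GB prices are made-up assumptions for illustration, not HDS figures or VSP pricing.

```python
# Hypothetical illustration of mixed-tier pool economics.
# All prices and capacity fractions are invented assumptions.

# (tier name, fraction of pool capacity, assumed price per GB in USD)
tiers = [
    ("SSD", 0.05, 20.00),   # small, hot slice of the pool
    ("SAS", 0.25, 3.00),    # medium-performance middle
    ("SATA", 0.70, 0.80),   # bulk, low-cost capacity
]

def blended_price_per_gb(tiers):
    """Capacity-weighted average price of a mixed-tier pool."""
    return sum(fraction * price for _, fraction, price in tiers)

all_sas = 3.00  # assumed price if the whole pool sat on a single SAS tier
blended = blended_price_per_gb(tiers)
print(f"Blended pool: ${blended:.2f}/GB vs. all-SAS: ${all_sas:.2f}/GB")
```

With these invented numbers, a pool that keeps only its hot 5% on SSD comes out cheaper per GB than a uniform mid-tier pool, which is the intuition behind sub-LUN tiering: pay for SSD performance only where the I/O actually lands.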
From the USP-V and USP-V(M) point of view, the VSP is evolutionary. Customers that already are running with virtual, thin and tiered architectures will see moderate TCO improvements (based on environmentals, migration, power and cooling benefits). But when compared to older, static, thick and monolithic architectures — it will continue (along with its predecessor) to demonstrate revolutionary economics.
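As a rough sketch of the environmental side of that TCO comparison, the yearly power-and-cooling cost of a frame can be approximated from its electrical load, a cooling overhead multiplier (PUE) and the local electricity tariff. Every figure below is an illustrative assumption, not a measured VSP or USP-V number.

```python
# Hypothetical back-of-envelope for environmental TCO savings.
# Wattage, PUE and tariff values are illustrative assumptions only.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(load_kw, pue, price_per_kwh):
    """Yearly electricity cost, including cooling overhead via the PUE multiplier."""
    return load_kw * pue * HOURS_PER_YEAR * price_per_kwh

# Assumed scenario: several legacy frames consolidated into fewer, denser ones.
legacy = annual_power_cost(load_kw=40.0, pue=2.0, price_per_kwh=0.12)
consolidated = annual_power_cost(load_kw=24.0, pue=2.0, price_per_kwh=0.12)
print(f"Assumed yearly power/cooling saving: ${legacy - consolidated:,.0f}")
```

The same formula converts directly into a carbon estimate if you multiply the saved kWh by a grid emissions factor, which is why a denser frame matters to the GHG budget owners mentioned above.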
In my next blog entry, I will work through the math of the new sub-LUN tiering feature within the VSP, and how it delivers a lower cost of tiers while including high-, medium- and low-performance segments within the LUN. It is pretty fascinating how the cost of tiering will be turned on its head.