Forrester says you don’t need a SAN
by Hu Yoshida on Dec 16, 2008
Andy Reichman of Forrester published a report on December 4 entitled "Do you really need a SAN anymore?"
Andy makes the case that the SAN's promise of increased utilization, simplicity, performance, and availability has not been realized. According to Andy, the reality of SAN is cost, complexity, SAN islands, and incompatibility. The solution he offers is application-centric storage: "To regain control of storage and get better results, application vendors are starting to subsume storage functionality into the application itself, giving IT buyers the option to spend less on commodity storage and get their high-value features from the application."
While I do not totally agree with Andy that SANs are not needed, his premise that SANs have not lived up to their promise has some support. Studies of actual utilization of SAN storage show 20-30% utilization, and other studies show that SANs are the third leading cause of application failures. However, SANs are a valuable connectivity and consolidation technology for storage and played an important part in the economic recovery after the dot-com bust in the early part of this decade. Prior to the adoption of SANs, data centers were bursting at the seams with an avalanche of 1 and 2 TB storage frames that were direct-attached to servers over fat 50- or 68-pin SCSI cables, with limited distance and limited performance of about 20 MB/s. The introduction of Fibre Channel (FC) protocols and FC switches provided the connectivity to consolidate a large number of direct-attached storage frames into a smaller number of 18 or 20 TB FC storage frames. Suddenly performance improved with Gb/s transfer speeds, and data center floor space was no longer a problem, since the number of storage frames was reduced to a fraction of what it had been. With fewer storage frames, storage administrators could manage 5-6 times more capacity than they did with direct-attached storage.
So why didn’t the utilization of storage increase? While SANs enabled servers to be networked to larger storage frames, SANs did not provide data mobility between the storage frames. In order to move data from one storage frame to another, or even within a storage frame from one type of disk group to another, an application had to read from the source and write to the target. While servers could connect to any storage in the SAN, the storage was still an island within the SAN.
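The cost of that host-mediated read-and-write movement can be made concrete with a rough back-of-the-envelope calculation. The figures below are illustrative assumptions, not numbers from the report:

```python
# Rough cost of moving data between storage frames when the host must
# read from one frame and write the data back out to the other.
# All figures are illustrative assumptions.

def host_copy_hours(data_tb, link_mb_per_s):
    """Hours to read data_tb from one frame and write it to another
    over a link sustaining link_mb_per_s megabytes per second."""
    data_mb = data_tb * 1_000_000      # decimal TB -> MB
    transferred_mb = 2 * data_mb       # every byte crosses the host twice
    return transferred_mb / link_mb_per_s / 3600

# Migrating 10 TB over a 1 Gb/s FC link (~100 MB/s usable):
print(f"{host_copy_hours(10, 100):.1f} hours")  # prints: 55.6 hours
```

Even before considering the CPU cycles consumed on the host, a simple frame-to-frame migration ties up the application server for days, which is why data tended to stay where it was first placed.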
“SAN-based storage Virtualization adds no value”
SAN-based storage virtualization does not help the utilization problem, since copies and moves of data across the SAN must still be done by reading and writing the data from one storage frame to another. If you are going to read and write data to copy or move it, that could just as easily be done in the application as in a SAN-based appliance. SAN-based virtualization appliances also create a bottleneck in the SAN and increase complexity with the need to map physical extents into virtual extents and to add SAN zones separating the server and storage sides of the virtualization appliance. Andy dismisses SAN-based storage virtualization as adding no value to the SAN.
Recently I had a conversation with Andy to explain the difference between storage controller-based virtualization and SAN-based virtualization. The former does not depend on the SAN to virtualize storage. It can use existing LUNs on external storage systems without the need to remap them, and since it resides outside of the SAN it can virtualize direct-attached storage, even mainframe storage, as well as SAN-attached storage. This approach can increase utilization because it enables movement of data across external storage frames by redirecting the cache images of LUNs, so data can be copied, moved, replicated, and migrated without disruption to the application. It also enables dynamic and thin provisioning of external storage, which eliminates allocated-but-unused capacity and simplifies the provisioning of LUNs.
The Hitachi USP V/VM controller-based storage virtualization can be used with application-centric storage servers and commodity storage behind it. With 224 FC storage ports, it does not require a SAN for connectivity in many cases. The USP V/VM can enhance application-centric storage servers by providing a common pool of storage for consolidation of storage resources. This pool can be partitioned dynamically for safe multi-tenancy and QoS. Since applications have peak and off-peak requirements for storage resources, policies can be set to optimize utilization through dynamic partitioning and dynamic provisioning. The problem of allocated-but-unused space does not go away with application-centric servers, and provisioning storage will still take hours of allocating and formatting; Dynamic Provisioning solves this. And instead of each server burning cycles replicating data across slow IP networks, the USP V/VM can provide data mobility to multiple tiers of storage at FC speeds.
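The general thin-provisioning idea can be sketched in a few lines. This is a generic illustration of the technique, not the internals of Hitachi Dynamic Provisioning; the page size and class names are invented for the sketch:

```python
# Minimal sketch of thin provisioning: a LUN advertises its full virtual
# size to the host, but physical pages come out of a shared pool only on
# first write. Generic illustration -- not Hitachi Dynamic Provisioning.

class ThinPool:
    """Shared pool of physical pages backing many thin LUNs."""
    def __init__(self, physical_pages):
        self.free_pages = physical_pages

    def allocate_page(self):
        if self.free_pages == 0:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.free_pages -= 1

class ThinLun:
    def __init__(self, pool, virtual_size_pages):
        self.pool = pool
        self.virtual_size_pages = virtual_size_pages  # size the host sees
        self.mapped = set()                           # pages actually backed

    def write(self, page_index):
        if page_index >= self.virtual_size_pages:
            raise IndexError("write beyond LUN size")
        if page_index not in self.mapped:
            self.pool.allocate_page()  # physical capacity consumed only now
            self.mapped.add(page_index)

pool = ThinPool(physical_pages=100)
lun = ThinLun(pool, virtual_size_pages=1000)  # 10x over-provisioned LUN
lun.write(0)
lun.write(7)
print(len(lun.mapped), pool.free_pages)  # prints: 2 98
```

The point of the sketch is that the allocated-but-unused gap disappears: the host believes it owns 1000 pages, but only the two pages it has actually written consume physical capacity from the shared pool.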
While application-centric storage servers can subsume many storage functions, there are still many functions that can be done more efficiently in a consolidated enterprise storage system, where they do not impact the performance cycles, memory, or management functions of the application. There are also some functions that can only be done at the storage controller level, such as replicating a time-consistent group of volumes that services multiple application servers.
It is always tempting to go for lower-cost commodity storage to reduce CAPEX, but commodity storage lacks the high-value functionality and availability that can reduce OPEX, which is the more significant part of the TCO equation. Is it better to multiply that OPEX cost across a number of application-centric storage servers, or to consolidate it onto an enterprise storage system?
So do you really need a SAN anymore?
In some cases the answer may be no, as long as you have the speed and connectivity to support the application's needs. However, you will still need storage virtualization to reduce OPEX, and the USP V/VM makes it possible to have storage virtualization without a SAN. In the future, Data Center Ethernet (DCE) may replace the SAN, so a storage virtualization approach that can support DAS or any type of network is your safest storage investment.
The other question embedded in this report is whether you can cover all your application storage needs with application-centric storage servers providing the high-value functions, getting by with commodity storage underneath. The answer here is no, unless you have a high-value storage virtualization controller in front of the commodity storage.
I will be providing more reasons for this answer in my next post.
Comments (5)
An additional factor in enterprise data centers is that while SAN storage provided consolidation and a chance to better utilize data center storage, storage administrators didn't have the operational discipline to manage the storage any differently than they did DAS. Over-provisioning storage to applications because of poor planning by the System Administrator and the Storage Administrator (many times the same person) creates large areas of unused storage that can't be utilized by other systems. Thin Provisioning takes care of this today, but the storage systems that provide that level of virtualization can be counted on one hand.
Very interesting and good points. Another issue that the above comment points to is the difference between SAN technology and data management practices. While I have not read the report, it looks like Andy may have mixed data management process issues (which impacts SANs, DAS, AND NAS) with SAN technology issues.
I am also interested in how the report talks about “utilization.” To me, utilization is how often data is referenced or used. “Allocation” is really the measure one should look at: how efficiently you use the storage you buy. Often business processes cause storage inefficiencies and lead to allocation issues more than the type of storage you deploy does.
Your focus on OPEX is spot on – and tools like virtualization and provisioning are critical in reducing OPEX. But data management “people processes” should be evaluated just as stringently as storage technology.
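The allocation-versus-utilization distinction the commenter draws can be put into numbers. The capacities below are hypothetical, chosen only so the result lines up with the 20-30% utilization figures cited in the post:

```python
# Allocation efficiency vs. utilization, with hypothetical figures.
purchased_tb = 100.0  # raw capacity bought
allocated_tb = 80.0   # capacity carved into LUNs and handed to servers
written_tb   = 25.0   # capacity actually holding data

allocation_rate  = allocated_tb / purchased_tb  # share of purchase assigned out
utilization_rate = written_tb / purchased_tb    # share actually holding data

print(f"allocated: {allocation_rate:.0%}, utilized: {utilization_rate:.0%}")
# prints: allocated: 80%, utilized: 25%
```

An array can look healthy by the allocation measure (80% spoken for) while sitting squarely in the 20-30% utilization range, because the gap between what servers were given and what they wrote is stranded.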
[...] Hu Yoshida discusses the Forrester report in a recent blog (Forrester says you don’t need a SAN) analyzing the challenges of traditional SAN approaches – and there is a real discussion to be had on the subject but – in my view – the Forrester report doesn’t offer a realistic or practical approach to solving the very real cost and complexity of data center storage. [...]
I think Andy is clueless about SAN technology. My company actually paid Andy to come in and assist with a storage strategy, and after a couple of months of nothing, his report stated that our company should have a minimum of two or three storage vendors. Who would propose such a strategy?
Those of us who understand server virtualization and thin provisioning (two areas that Andy is obviously not familiar with) realize that as physical servers consolidate onto virtualized servers (a ratio of 33:1 in our shop), utilization is going to become more efficient.
Hu, I don’t want to use your blog to crack on Andy too badly, but remember Andy and his side-kick Stephanie used to work for EMC when their sales tactic was SAN-in-a-CAN.
I get sick when I read research reports like this from people who haven't managed a storage environment in over 10 years and have no idea what those of us who do are up against on a daily basis. The true disservice is when our purchasing people read this garbage and begin to re-question our credibility and everything else we have worked hard to establish.
[...] The title itself has generated some industry buzz for obvious reasons, and several blog posts from SAN providers. Check out Chuck Hollis’ (EMC), Hu Yoshida’s (HDS), Tony Asaro’s, and Chris Evans’. [...]