Why scale and intelligence are important to your data center
by Hu Yoshida on Mar 30, 2011
Next week, I will be traveling to Switzerland for a series of customer events and visits. I am starting in beautiful Geneva, within sight of the majestic Mont Blanc.
Today’s announcement by Cisco will give me the opportunity to talk with customers about our partnership with Cisco for addressing the increasing need for scalability and intelligence in the data center.
Here is the theme behind Cisco’s announcement today:
“Evolutionary Fabric, Revolutionary Scale: Achieving Dynamic, Cloud-Ready Data Centers”: delivering architectural flexibility for any application, in any location, in a secure manner, on platforms that are converged, scalable, and intelligent.
This is a theme that Hitachi Data Systems supports wholeheartedly, since it is aligned with our strategy for the transformation of data centers into information centers. This type of transformation cannot be done by vendors acting in isolation. Cisco and Hitachi Data Systems are working together to create a unified fabric for the data center that focuses on scale and intelligence, and with today’s announcement Cisco shows it is continuing to execute on that vision.
When we talk about scalability, we need to go beyond the ability to non-disruptively add capacity and connectivity to a storage system. Scalability now requires storage systems to offload application, server, and network bottlenecks through the use of APIs, like VMware’s VAAI, without impacting the performance of the storage system.
Instead of serving merely as containers of data, storage systems must now become storage computers with a global pool of processors, separate from the port processors that handle front-end and back-end I/O processing. This is the architecture that was introduced with the Hitachi Virtual Storage Platform (VSP). Scalability must also extend to cover existing assets through virtualization, so that storage systems that do not themselves support functions like replication and VAAI can gain them through the VSP on the front end.
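To make the offload idea concrete, here is a minimal sketch in the spirit of VAAI’s copy-offload (XCOPY) primitive. The class and method names are illustrative, not a real array API: the point is that a host-driven copy pushes every block across the SAN twice, while an offloaded copy sends only a command and lets the array move the data internally.

```python
# Hypothetical sketch contrasting a host-based copy with an
# array-offloaded copy (XCOPY-style). Names are illustrative only.

class Array:
    """A storage array that can copy blocks internally."""
    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 512)

    def write(self, lba, data):
        self.blocks[lba] = data

    def xcopy(self, src_lba, dst_lba, count):
        # Offloaded path: data moves inside the array; the host
        # sends only the command, not the payload.
        for i in range(count):
            self.blocks[dst_lba + i] = self.read(src_lba + i)

def host_copy(array, src, dst, count):
    """Non-offloaded path: every block crosses the SAN twice."""
    bytes_on_wire = 0
    for i in range(count):
        data = array.read(src + i)   # block travels array -> host
        array.write(dst + i, data)   # block travels host -> array
        bytes_on_wire += 2 * len(data)
    return bytes_on_wire

array = Array()
for i in range(100):
    array.write(i, bytes([i % 256]) * 512)

wire = host_copy(array, src=0, dst=1000, count=100)
array.xcopy(src_lba=0, dst_lba=2000, count=100)  # near-zero payload on the wire
print(wire)  # 102400 bytes crossed the SAN for the host-based copy
```

The copies end up identical either way; the difference is where the work happens, which is exactly why offloading these operations relieves server and network bottlenecks without burdening the hosts.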
Intelligence must also extend beyond the storage system. It is no longer enough for the storage system to report its performance and status to a storage administrator. Applications must be able to see beyond the virtual volume or file that is presented to them. They need to see into the actual infrastructure that supports it, so that they can be assured of their service level objectives, manage their chargeback, and plan for future requirements. That type of intelligence is provided by our Hitachi Command Suite of management tools. Convergence and virtualization are good for the data center, but intelligence must be provided to deliver safe multi-tenancy and transparency for the application user.
That is the theme behind the Cisco announcement, and the common architectural design of our storage systems and management platforms. What are your thoughts on today’s news? How does it affect you?
Comments (3)
Good post. With all the noise on this announcement, I find the post refreshing and provocative. I would add a little more meat on the bone. Overall a very enjoyable “what it means” post…
[...] in what he calls the battle of the partner blogs. My post in support of the Cisco launch was about scale and intelligence and he put it into a standalone category. While he thought it was a great read on thought [...]
I frequently read your blogs, which I find very interesting. I also had the chance to attend your presentation in Geneva.
I wanted to ask about automated tiering as it is implemented in the VSP; it seems to me that this notion is very close to the well-known caching principle.
As I understand dynamic tiering, it automatically promotes or demotes pages according to their I/O count. Caching promotes chunks of data to fast memory to improve access time.
I am seeing here a convergence toward a “cheap” kind of cache based on SSD instead of RAM. What do you think?
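The distinction the comment raises can be sketched in a few lines. This is a simplified model under stated assumptions (names and the promotion threshold are invented for illustration, not the VSP’s actual algorithm): a tiering engine *relocates* a page to the fast tier once its I/O count crosses a threshold, so exactly one copy exists, whereas a cache keeps a *duplicate* of hot data in fast memory alongside the backing copy.

```python
# Minimal sketch of tiering vs. caching, under simplifying
# assumptions. Page movement by I/O count is the tiering idea;
# duplication of hot data is the caching idea.

from collections import Counter

class TieredPool:
    def __init__(self, promote_threshold=3):
        self.tier = {}            # page -> "SSD" or "SAS"
        self.io_counts = Counter()
        self.threshold = promote_threshold

    def access(self, page):
        self.tier.setdefault(page, "SAS")
        self.io_counts[page] += 1
        # Promotion relocates the page: only one copy ever exists.
        if self.io_counts[page] >= self.threshold:
            self.tier[page] = "SSD"
        return self.tier[page]

class Cache:
    def __init__(self):
        self.fast = set()         # copies of hot pages; backing copy remains

    def access(self, page):
        hit = page in self.fast
        self.fast.add(page)       # caching duplicates data into fast memory
        return hit

pool = TieredPool()
for _ in range(3):
    tier = pool.access("page-42")
print(tier)  # "SSD": the page was promoted on its third access
```

Viewed this way, an SSD tier does behave like a slower, larger, cheaper layer in the memory hierarchy, but because pages move rather than being copied, capacity is not consumed twice and placement decisions can be made over longer observation windows than a RAM cache typically uses.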