Top Ten IT Trends for 2014: Trends 3 and 4

by Hu Yoshida on Nov 26, 2013

Trend 3: Greater Adoption of Private Cloud

My selection for trend 3 is the greater adoption of private cloud for increased business agility and control.

Cloud is becoming a more accepted service model. In a recent survey of large accounts, nearly 10 percent of workloads are now being run in the cloud. Software as a Service (SaaS) seems to be the more popular use of public cloud, for back office applications like email, HR, CRM, and archive or backup. Infrastructure as a Service (IaaS) in the public cloud is often used for elasticity, to offload temporary demands like test and development or seasonal peak workloads. However, the use of public cloud for primary business applications is still regarded as high risk due to concerns over security, privacy, quality of service, outages, and the high cost of processing applications and accessing data across the pipes connected to the public cloud. While infrastructure costs may be very low for initial storage, they can escalate very quickly due to the charges incurred by frequent access to this offsite data.

The recent demise of public cloud provider Nirvanix shook confidence in the public cloud when it went bankrupt and announced that its customers had 15 days to recover all their data! This brought back memories of the dot-com bust, when the capital costs of the services the dot-coms provided could not be recovered fast enough because each user they signed up wanted their own infrastructure. Tools like virtualization and thin provisioning were not available at that time to let them leverage the infrastructure across multiple users and spread the capital cost.
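To put the cost point in concrete terms, here is a back-of-the-envelope sketch. All of the per-GB rates below are hypothetical, not any particular provider's price list; the point is simply that access and egress charges on frequently used offsite data can outgrow the headline storage price.

```python
# Back-of-the-envelope illustration (hypothetical rates, not any specific
# provider's pricing) of how access charges can dwarf the headline cost
# of parking data in a public cloud.

STORAGE_PER_GB_MONTH = 0.09     # $/GB-month, assumed
EGRESS_PER_GB        = 0.12     # $/GB transferred out, assumed
REQUESTS_PER_10K     = 0.01     # $ per 10,000 read requests, assumed

data_gb         = 50_000        # 50 TB of "cheap" offsite data
reads_per_month = 2_000_000     # frequent access from on-premises apps
egress_gb       = 50_000        # the whole data set pulled back monthly

storage_cost = data_gb * STORAGE_PER_GB_MONTH
access_cost  = (reads_per_month / 10_000) * REQUESTS_PER_10K \
               + egress_gb * EGRESS_PER_GB

print(f"storage: ${storage_cost:,.0f}/month")   # storage: $4,500/month
print(f"access:  ${access_cost:,.0f}/month")    # access:  $6,002/month
```

Under these assumed rates, a workload that reads its data back across the pipe each month pays more for access than for the storage itself.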

For many reasons like this, customers are looking to host a private cloud on their own site, behind their firewalls, and under their control, for their primary applications. New technologies such as virtualization, converged solutions, and the Software Defined Data Center, along with new business models like managed services, are making the implementation and operation of a cloud provisioning model much simpler, more efficient, and more affordable. Hitachi has partnered with VMware to realize their vision for the Software Defined Data Center. The Hitachi Unified Compute Platform (UCP) with Command Director is integrated with VMware's vSphere to provide the whole infrastructure (server, storage, LAN, and SAN), provisioned and automated through vCenter. Hitachi can also include a portal for self-provisioning and charge-back to complete the cloud picture. This can be delivered and set up with power and network connections in a couple of days, and from then on, provisioning of virtual machines, complete with storage, LAN, and SAN, can be done in a few hours. Additional storage, servers, and switches can be added non-disruptively.
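For a flavor of what vCenter-driven provisioning looks like in practice, here is a minimal sketch using the open-source pyVmomi SDK. To be clear, this is generic vSphere automation, not the UCP or Command Director integration itself, and every name in it (vCenter host, credentials, template, datastore, pool) is a placeholder.

```python
# Minimal sketch of vCenter-driven VM provisioning with the open-source
# pyVmomi SDK. Generic vSphere automation only -- NOT the UCP / Command
# Director API. All names below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the vCenter inventory for the first object of vimtype named name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()

template  = find_by_name(vim.VirtualMachine, "rhel-template")
datastore = find_by_name(vim.Datastore, "hnas-datastore-01")
pool      = find_by_name(vim.ResourcePool, "Resources")

# The clone spec ties the new VM to a resource pool and a datastore --
# i.e., compute plus the SAN/NAS-backed storage behind it.
relocate = vim.vm.RelocateSpec(pool=pool, datastore=datastore)
spec     = vim.vm.CloneSpec(location=relocate, powerOn=True)
task     = template.Clone(folder=template.parent, name="app-vm-01", spec=spec)

Disconnect(si)
```

A self-service portal would sit in front of calls like these, recording who provisioned what for charge-back.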

See what Wayne Green, VMware product manager, has to say in his recent blog post about the integration of the UCP and vSphere.

“What makes Hitachi’s approach impressive is that they have chosen to aggregate operational data from each of the converged platform components. Having this single source of data greatly simplifies the complexity of the integration, but more importantly it helps drive a user interface design that truly reflects the converged nature of the platform. This approach to integration hits right at the core of the expected benefit of converged infrastructure.”

While a private cloud may not offer the elasticity of a large public cloud provider, which can spin resources up and down to meet peak demands without the procurement delays associated with private clouds, you know that the cloud is secure behind your firewalls and under your direct control. You have the automation tools and agility to provision your resources as your business dictates. Your direct connection to the private cloud may also offset some of the costs of connecting to a public cloud, and integrating applications with non-cloud applications is much easier. You can still use the public cloud for back office applications, as you do today. For instance, you might back up or archive some portion of your private cloud to the public cloud, as sketched below. The difference is that now you have the tools to match the cloud to your business needs.
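As one illustration of that hybrid pattern, the sketch below pushes cold files from a private-cloud NAS share out to public object storage. It assumes Amazon S3 via the boto3 SDK purely as an example target, not an HDS-specific mechanism; the share path, bucket name, and age threshold are all invented.

```python
# Hedged sketch: archive cold files from a private-cloud NAS share to
# public object storage (here S3 via boto3, as one possible target).
# Share path, bucket, and threshold are placeholders.
import os
import time
import boto3

COLD_AGE_DAYS = 180
share = "/mnt/private-nas/archive"
s3 = boto3.client("s3")

cutoff = time.time() - COLD_AGE_DAYS * 86400
for root, _dirs, files in os.walk(share):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:        # untouched for ~6 months
            key = os.path.relpath(path, share)
            s3.upload_file(path, "example-archive-bucket", key)
            # A real tiering engine would leave a stub behind and verify
            # the upload before reclaiming the primary-tier space.
```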

A private cloud implementation lays to rest the mystery and complexity of cloud security and risk, while providing the benefits of consolidation, agility, automation, self-service, and charge-back in a ready-to-use package.

Trend 4: Big Data Explosion Drives PB File Capacities

Trend 4 is driven by the explosion of unstructured data moving over the network. This will require network file systems to scale to petabytes, deliver much higher IOPS, and increase efficiency through deduplication and dynamic tiering.

Analysts like IDC are predicting that storage attached through NAS will exceed all other protocols by 2015.

NAS deployment is also being driven by server virtualization, since NAS is perceived to be easier to deploy and manage than SAN storage. NAS facilitates the management, backup, and restore of VMs, and can operate on individual VMs rather than on LUNs containing multiple VMs. Where SAN storage used to have a performance advantage over NAS, new enhancements like 10 Gbps Ethernet and enterprise flash have closed the gap. However, traditional file servers and traditional file systems are running out of gas when it comes to servicing the random I/O driven by virtual servers as they continue to scale up the number of VMs on multicore servers.

In order to meet these increasing demands, NAS storage capacities must be able to scale to PBs, with file sizes in the hundreds of TBs, and still provide hundreds of thousands of IOPS with less than 1 ms response time. New enterprise flash technologies like Hitachi's 1.6 and 3.2 TB Flash Module Drives have helped to increase the performance of Hitachi's HNAS.

A recent SPECsfs2008_nfs.v3 test produced 298,648 Ops/Sec (Overall Response Time = 0.59 msec) with a two-node Hitachi HNAS 4100 and 32 x 1.6 TB FMDs.
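A quick sanity check on those two published numbers, using Little's Law (N = X × R): throughput multiplied by response time gives the average number of NFS operations in flight across the cluster.

```python
# Little's Law (N = X * R) applied to the published SPECsfs2008 figures:
# throughput times response time = average operations in flight.
throughput = 298_648        # Ops/Sec (published result)
response_s = 0.59 / 1000    # Overall Response Time, 0.59 msec in seconds

in_flight = throughput * response_s
print(f"~{in_flight:.0f} concurrent ops")   # ~176 concurrent ops
```

In other words, the two-node system sustained roughly 176 outstanding operations while keeping sub-millisecond response times.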

In addition to order-of-magnitude increases in capacity and performance, NAS systems must support greater efficiencies through primary dedupe and through intelligent tiering that moves inactive files to lower-cost disk media, tape, a content platform, or the cloud without any impact on performance. NAS storage must also support the storage APIs and interfaces of hypervisors, databases, clouds, and other applications for greater efficiency, availability, and data protection, and provide encryption for data at rest, including data on flash modules.
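To make "primary dedupe" concrete, here is a toy sketch of the bookkeeping involved: split data into chunks, hash each chunk, and store each unique chunk exactly once. Production systems use variable-size chunking and keep the hash index in fast media; the fixed 4 KB chunks and sample data below are invented for illustration.

```python
# Toy illustration of primary (inline) deduplication: store each unique
# chunk once, keyed by its SHA-256 digest, and rebuild files from a
# per-file "recipe" of digests.
import hashlib

CHUNK = 4096                      # fixed 4 KB chunks for simplicity
store = {}                        # digest -> chunk bytes (written once)

def dedupe_write(data: bytes) -> list[str]:
    """Split data into chunks, store new ones, return the file's recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)      # duplicate chunks cost nothing
        recipe.append(digest)
    return recipe

def read_back(recipe: list[str]) -> bytes:
    return b"".join(store[d] for d in recipe)

# Two VM images sharing a common OS base mostly collapse into one copy.
base = b"A" * 8192
r1 = dedupe_write(base + b"app-one")
r2 = dedupe_write(base + b"app-two")
assert read_back(r1).startswith(base)
print(f"{len(store)} unique chunks for {len(r1) + len(r2)} logical chunks")
# -> 3 unique chunks for 6 logical chunks
```

The same accounting explains why dedupe pays off so well in virtual server environments, where many VM images share a common base.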

NAS virtualization is also important for NAS migrations and technology refresh. It is almost impossible to migrate or refresh a NAS server that has a very large, active directory tree: you may not be able to move the files faster than they are being created. If you can virtualize the old NAS server behind a new NAS server, you can service the creation of new files while you migrate the old files in the background, as sketched below.
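Here is a minimal sketch of that pattern, assuming both namespaces are visible as local mount points (the paths are placeholders): new writes land on the new server immediately, reads fall back to the old server for anything not yet moved, and a background pass drains the old namespace.

```python
# Sketch of the "virtualize the old NAS behind the new one" migration
# pattern: new files go to the new server, reads fall back to the old
# one, and a background job drains the old namespace. Mount points are
# placeholders.
import os
import shutil
import threading

OLD = "/mnt/old-nas"
NEW = "/mnt/new-nas"

def write(relpath: str, data: bytes) -> None:
    """All new file creation goes straight to the new server."""
    dest = os.path.join(NEW, relpath)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(data)

def read(relpath: str) -> bytes:
    """Serve from the new server; fall back to the not-yet-migrated old one."""
    for base in (NEW, OLD):
        try:
            with open(os.path.join(base, relpath), "rb") as f:
                return f.read()
        except FileNotFoundError:
            continue
    raise FileNotFoundError(relpath)

def background_migrate() -> None:
    """Drain the old namespace while the front end keeps serving I/O."""
    for root, _dirs, files in os.walk(OLD):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, OLD)
            dest = os.path.join(NEW, rel)
            if not os.path.exists(dest):        # don't clobber newer writes
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(src, dest)

# In a real front end this thread runs for the life of the service.
threading.Thread(target=background_migrate, daemon=True).start()
```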

Conversion to new NAS storage technologies that support Big Data scalability and virtual server requirements, with higher capacity, lower cost, higher performance, and greater efficiency, will be in demand in 2014.

See the full list of my top ten trends for 2014 here.
