HCP Announcement with an Archiving Angle
by Ken Wood on May 2, 2012
Last week’s announcement dovetails perfectly with my current series on data archiving, although the Hitachi Content Platform (HCP) team may not appreciate my angle, given all their accomplishments with the HCP product that go above and beyond just archiving. Be that as it may, I can’t ignore a feature of the new release that is important to me. I have a passion for efficiency, especially power and environmental efficiency. Overall, the new release of the Hitachi Content Platform strengthens its dominant foothold in the object store and cloud arena, with a richer set of enhancements and features designed to provide a world-class platform for managing the massive scale-out requirements of today’s explosive data growth.
A highlighted list of these new and enhanced features includes:
Improved Operational Efficiency to Lower Costs
- Lower costs for large scale unstructured data storage and reduce overall energy consumption with HCP support of spin-down disk in Hitachi Unified Storage (HUS)
- Eliminate downtime with nondisruptive, online hardware & software upgrades
- Proactively address any bottlenecks or hardware issues before they impact SLAs with improved component and performance monitoring and email alerting
Greater Scalability and Reliability
- Improve economies of scale, reduce costs and maximize utilization with support for thousands of tenants and tens of thousands of namespaces per system
- Ensure service availability with advanced replication and failover capabilities
- Meet vaulting requirements with support for tape-based copies of objects
Robust Security and Control from Edge and Core
- Reduce risk and control access to content with new object access control lists
- Support corporate security policies with active directory integration
- Identify sets of related objects for information, action and automation with custom metadata search across system and custom metadata
HCP is already one of the densest data storage platforms in the industry, scaling easily and seamlessly from a few terabytes up to tens of petabytes in a single system. In fact, it’s this scalability and multi-tenant support that has transformed HCP into the premiere object storage platform for the massive scale-out of unstructured data management in the industry. HCP embodies our content cloud approach, allowing organizations to store and manage billions of data objects while providing intelligence layers and policies that help index and search the data independently of the application that created it, expand and scale to match or exceed the unstructured data growth rate organizations are experiencing, and protect data in the most cost-efficient manner of your choosing.
While I’m listing the individual features and enhancements of the new release of HCP as isolated capabilities, the overall result is an unstructured data storage platform that is flexible and agile, that scales to meet any demand, and that provides upper-layer capabilities embedded directly in the platform. This combination of advanced feature sets and reliability, without the traditional management complexity, is unique in the industry.
Actually, the flexibility of HCP may cause a slight retro-effect. HCP and its predecessor, HCAP (Hitachi Content Archiving Platform), have always been the platform of choice for compliance-based archiving. That is, for customers that HAVE TO archive data because of the compliance laws and regulations of their respective industries. HCP can be configured to erase and/or digitally shred data when it is legally time to do so, and to keep data safe and immutable in the meantime. Many customers use HCP for long-term data archiving, but many more use it for shorter-term archiving driven by regulatory requirements and retention timeframes.
So, this is a series of articles on archiving data, specifically long-term data archiving. Now, when configured with the new HUS platform of storage products, HCP is more cost effective and environmentally friendly, supporting disk drive spin-down for power savings over the life of the data. Data in HCP can be stored on an HUS storage platform that spins its disk drives down, consuming less power and generating less heat, thereby saving on datacenter cooling power as well. For data that needs to be stored for a long period of time, this new feature can have a significant positive impact on the Total Cost of Ownership (TCO) over the lifetime of the data, where power and cooling add up to a significant operational cost over time.
At one point in time, Massive Array of Idle Disks (MAID), a term coined by Copan Systems, captured the imagination of many, but the technology failed to fulfill its promise. Since HCP maintains and stores metadata and data differently, the requirement to continually reference storage for even the slightest metadata lookup is negated. So while metadata can be searched, queried, and referenced actively in HCP, the actual data does not have to be active or accessed. In an active archive or data repository used for research or as a library of information, searching through indexes, metadata queries, and custom metadata searches will make up the majority of the activity. The actual retrieval of data based on search results will most likely be the last task performed.
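The separation described above can be illustrated with a minimal sketch. This is not HCP’s actual implementation; the class and method names are hypothetical. The point is simply that when the metadata index lives on always-on media, searches never touch the disks holding object data, so those disks can stay spun down until an actual retrieval occurs:

```python
# Hypothetical sketch: a metadata index kept separate from object data.
# Searches hit only the index; only retrievals touch the "spun-down" disks.

class ArchiveSketch:
    def __init__(self):
        self.metadata = {}    # object id -> metadata dict (always-on index)
        self.data = {}        # object id -> bytes (stands in for spun-down disk)
        self.disk_spinups = 0 # counts how often the data disks must wake up

    def ingest(self, obj_id, payload, **meta):
        self.metadata[obj_id] = meta
        self.data[obj_id] = payload

    def search(self, **criteria):
        # Query only the metadata index; no data-disk access needed.
        return [oid for oid, m in self.metadata.items()
                if all(m.get(k) == v for k, v in criteria.items())]

    def retrieve(self, obj_id):
        # Only an actual retrieval forces the data disks to spin up.
        self.disk_spinups += 1
        return self.data[obj_id]

archive = ArchiveSketch()
archive.ingest("doc-1", b"report", year=2011, dept="finance")
archive.ingest("doc-2", b"memo", year=2012, dept="legal")

hits = archive.search(dept="finance")  # metadata-only: no spin-up
payload = archive.retrieve(hits[0])    # first data access: one spin-up
print(hits, archive.disk_spinups)
```

In a research library workload where queries vastly outnumber retrievals, the counter in this toy model would stay near zero almost all the time, which is exactly the access pattern that makes spin-down pay off.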
HCP lays down a critical foundation for future ways of designing cost efficient data systems for large active data archives, research libraries and other long-term data repositories. While HCP lists an impressive number of significant enhancements to simplify the complex task of managing massive-scale data repositories, disk drive spin-down support is my personal favorite. With environmental impact and energy costs on the forefront of many people’s minds these days, this exciting feature allows the current explosion in global data growth to also have a positive global effect.