Technical Memes in 2014

It is getting to be that time of the year again: time to make predictions, at varying levels of risk, about what is going to happen next year. For 2014, my colleague Hu and I are splitting the task of creating a list of technical trends and themes. Our message will span a couple of cross-linked blog posts, with Hu aiming at things we should see come to pass soon, while I focus on memes that are likely to take hold.

With that said, let’s dig in!

1. EMERGING EXA-SCALE ERA

I’ve talked about this before in my Financial Forum series; in that post I used the Square Kilometer Array to motivate the kinds of sweeping changes that will be required to achieve an exa-scale architecture. Since then, there have been bets against such an architecture emerging by 2020, as well as several active groups and organizations deliberately planning for such a platform. I’m pretty sure that in 2014 we will see heightened debate on the possibility of such a platform arriving on, before, or after 2020. So my tangible prediction is that the keyword “exa-scale” will become hotly debated in 2014.

2. THE BI-MODAL ARCHITECTURE

Let’s face it: as fast as our LANs, MANs and WANs are, they are still roughly an order of magnitude slower than the internal fabrics and buses of the storage and compute platforms. Compute and storage fabrics/buses are measured in gigabytes per second, while networks are measured in gigabits per second. What to do? What we’re starting to see emerge is richer storage control and low-latency access within the server. Today this acts as a cache, but tomorrow, who knows… I referenced this in the Bi-modal IT Architectures discussion on the Global Account and Global Systems Integrator Customer Community. For completeness, I’ve pulled in the diagram from that discussion to illustrate that a key driving force behind the change is the shift in software architectures. The diagram suggests a kind of symbiotic relationship between an evolving software stack and the hardware stack. My expectation for 2014 is that we will see one or more systems implementing this kind of architecture, though the name may differ (I will endeavor to cite references to them throughout the year).
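To make that gap concrete, here is a quick back-of-envelope calculation; the payload size and throughput figures are illustrative assumptions, not measurements of any particular product.

```python
# Rough, idealized comparison of moving a working set over an internal bus
# versus a fast LAN. All figures are illustrative assumptions.

def transfer_seconds(payload_bytes: float, throughput_bytes_per_sec: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and latency."""
    return payload_bytes / throughput_bytes_per_sec

PAYLOAD = 1 * 10**12                 # assume a 1 TB working set

internal_bus = 8 * 10**9             # assume ~8 GB/s (a PCIe-class bus/fabric)
lan_10gbe = 10 * 10**9 / 8           # a 10 Gb/s LAN is only ~1.25 GB/s

print(f"internal bus: {transfer_seconds(PAYLOAD, internal_bus):6.0f} s")
print(f"10 GbE LAN  : {transfer_seconds(PAYLOAD, lan_10gbe):6.0f} s")
# The gap is why low-latency storage inside the server (today mostly a cache)
# keeps hot data next to compute instead of pulling it across the network.
```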

3. PROCESS, ANALYZE AND DECIDE AT THE SOURCE

The Hadoop crowd has half the story right. The other half is that to support the Internet of Things, where “the data center is everywhere” (thank you Sara Gardner for this quote) and low-bandwidth, unreliable WAN pipes are the norm, moving the data from the edge or device to the core isn’t really feasible. Further, many EDW and Hadoop platforms in practice move data significantly over the network today. For example, I’ve talked to several customers who pull data out of an RDBMS, process it on Hadoop, and push it back to another RDBMS to connect to BI tools. This seems to violate one of the basic tenets of the Hadoop movement: bring the application to the data. Therefore it is necessary to augment data sources with intelligent software platforms capable of making decisions in real time, analyzing with low latency, and winnowing/cleansing the data streams that are potentially moved back to the core. Note that in some cases the movement back to the core happens by acquiring a “black box” and literally shipping data on removable media to a central depot for uploading. This suggests that a sparse, curated information model may be more relevant for general analysis and processing than raw data. I digress. For 2014, I predict we will begin to see platforms emerge that start to solve this problem, along with an increased level of discussion and discourse in the tech markets. We have been calling this “smart ingestion” because it assumes that instead of shipping dumb raw data there is some massaging of the data, where the user gains benefit from both the “massage” and the outcome.
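As a sketch of what “smart ingestion” could look like at an edge node, the snippet below cleanses a raw stream, makes a real-time decision locally, and forwards only a sparse summary toward the core. The thresholds, batch size, field names and placeholder action are hypothetical, not part of any shipping product.

```python
# A minimal "smart ingestion" sketch: decide locally, keep only a winnowed,
# cleansed summary for the core. All names and thresholds are assumptions.

from statistics import mean

ALERT_THRESHOLD = 80.0   # assumed local decision point
BATCH_SIZE = 60          # e.g., one summary per 60 raw readings

def cleanse(readings):
    """Drop obviously bad samples before they ever leave the device."""
    return [r for r in readings if r is not None and 0.0 <= r <= 150.0]

def decide_locally(readings):
    """Real-time decision at the source; no round trip to the core."""
    if readings and max(readings) > ALERT_THRESHOLD:
        print("local actuation: throttling device")  # hypothetical action

def summarize(readings):
    """Sparse, curated record that is cheap to ship over an unreliable WAN."""
    return {"count": len(readings), "mean": mean(readings),
            "min": min(readings), "max": max(readings)}

def ingest(raw_stream):
    batch = []
    for sample in raw_stream:
        batch.append(sample)
        if len(batch) == BATCH_SIZE:
            clean = cleanse(batch)
            decide_locally(clean)
            if clean:
                yield summarize(clean)   # only the curated summary moves on
            batch = []

# Hypothetical usage: for summary in ingest(sensor_stream): forward_to_core(summary)
```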

4. THE PROGRAMMABLE INFRASTRUCTURE

Wait. Did we just cross some Star Trek-like time barrier and go back to the era of the systems programmer? Is the DevOps practitioner really a revisitation of a past era where the mainframe ruled the world? Likely not, but in the spirit of everything old being new again, perhaps there is a sliver of truth here. To me, a key center point of the Software Defined Data Center (SDDC) is programmatic control of at least compute, network and storage. In effect, what application developers are really asking for is the ability to allocate these elements and more, directly from their applications, to meet their upstream customer requirements. Today the leading movement in the area of the Software Defined Data Center is the OpenStack initiative and the community that surrounds it. We’re definitely far from giving application developers complete control of the IT infrastructure, but I think we are surely on that trajectory.

A key aspect behind programmatic control is a reduction in the complexities and choices that application developers can select from, and a fundamental reality that almost everything will be containerized in virtual infrastructure of some kind. By giving these things up, DevOps-proficient developers will be able to quickly commission, decommission and control the necessary ICT elements. In fact, I know of at least one customer whose application development team realized exactly this. The application team was being very prescriptive to the IT organization while at the same time authoring much of its next-generation application stack on a public cloud. At some point, two things occurred: the cloud service could not meet their requirements, and engineering realized they had traded complete flexibility for speed to market, and they liked it. The result was that the IT organization used OpenStack to build a private cloud so it could host engineering’s new application. This is a great “happily ever after” moment and, I think, hints at things to come. My prediction here is that we will begin to see OpenStack-friendly private cloud infrastructures for sale within the coming year. Since this is my most direct prediction, I’m keeping my fingers crossed.
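As a minimal illustration of that programmatic control, the sketch below commissions and then decommissions a server through the OpenStack SDK (openstacksdk). The cloud entry, image, flavor, network and server names are hypothetical; substitute whatever your private cloud actually exposes.

```python
# Minimal sketch: an application team allocating compute directly from code.
# The named cloud, image, flavor and network below are assumptions.

import openstack

# Credentials/endpoints come from a clouds.yaml entry named "eng-private-cloud".
conn = openstack.connect(cloud="eng-private-cloud")

image = conn.compute.find_image("ubuntu-server")       # assumed image name
flavor = conn.compute.find_flavor("m1.medium")         # assumed flavor name
network = conn.network.find_network("app-tier-net")    # assumed network name

# Commission a server for the next-generation application stack...
server = conn.compute.create_server(
    name="nextgen-app-node-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")

# ...and decommission it just as quickly when the work is done.
conn.compute.delete_server(server)
```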

5. MEDIA WITH GUARANTEED RELIABILITY

As we’ve talked to customers contemplating exa-scale systems, we’ve found they are reconsidering everything in the stack, including the media. For a subset of these users, tape and disk won’t cut it, and they are in fact looking towards optical media, of all things. Their constraints and thinking around power consumption, floor space and, of course, extreme media durability, coupled with specific requirements to guarantee data preservation without loss, in some cases for 50 years or more, mean that existing approaches won’t do. As it turns out, there could be a perfect storm for optical, notably with the maturation of standards, media density roadmaps, customer need and emerging capacity in the supply chain; I argue that for specific markets optical is poised to make a comeback. Therefore both HDS and Hitachi are opening the dialogue through activities like the 2013 IEEE Mass Storage Conference and Ken Wood‘s post, US Library of Congress Update: Designing Storage Architectures for Digital Collections. We aren’t the only ones paying attention. Companies like M-Disc, for example, are pushing forward the idea of really long-term media. They articulate this argument well on their website:

“M-Discs utilize a proprietary rock-like inorganic material that engraves data instead of using reflectivity on organic materials that will eventually break apart and decompose with time. Furthermore, did you know that M-Disc technology is already being adopted worldwide by all major drive manufacturers, and that the M-Disc Blu-ray is read/write compatible with all current Blu-ray RW drives? While it is important to note that gold has a lower reflectivity than silver, even silver discs are still made of organic materials that may begin to lose data after only 2 years. See: Archival DVD Comparison Summary.”
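To give a feel for the kind of back-of-envelope sizing these customers are doing, here is a rough sketch; every figure in it is a placeholder assumption rather than a specification of any optical product.

```python
# Rough sizing for a long-term optical archive. Every figure is a placeholder
# assumption for illustration, not a product specification.

ARCHIVE_BYTES = 10 * 10**15        # assume 10 PB to preserve
DISC_CAPACITY = 100 * 10**9        # assume 100 GB per optical disc
COPIES = 2                         # assumed replication for preservation
DISCS_PER_LIBRARY = 75_000         # assumed slots per library frame
WATTS_PER_LIBRARY_IDLE = 200       # assumed electronics draw; discs at rest need no power

discs = COPIES * -(-ARCHIVE_BYTES // DISC_CAPACITY)   # ceiling division
libraries = -(-discs // DISCS_PER_LIBRARY)

print(f"discs needed    : {discs:,}")
print(f"library frames  : {libraries}")
print(f"idle power (kW) : {libraries * WATTS_PER_LIBRARY_IDLE / 1000:.1f}")
```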

As to a prediction… I think in this case we’ll begin to see the re-invention of the optical industry starting in 2014, with the focus moving from the consumer towards the enterprise. It wouldn’t surprise me if we even see the careful introduction of an offering or two.


Michael Hay

Data Center Advisors
