
Hu Yoshida's Blog - Vice President | Chief Technology Officer


Data Replication: When Push comes to Pull

by Hu Yoshida on Jan 17, 2006

A recent comment on Drunkendata by an end user, Snig, points out that because of the "inter-twangle" of applications, his biggest problem in application recovery is that he cannot recover all of his "inter-twangled" applications to the same point in time (or very close to it). That forces him to spend a lot of time, once the data has been recovered, getting the data on each system synchronized again.

He looked at different methods of recovery including message shipping, DB log shipping, host-based disk mirroring, and disk subsystem-based mirroring. In the end he came to the conclusion that disk subsystem mirroring was the best solution as long as you have all the disk for your applications on one subsystem (for time consistency). He also warned that vendors have different definitions of consistency groups, use different replication techniques, and their s**t is expensive.

Virtualization can address some of Snig’s caveats on disk subsystem mirroring. All the application disks no longer have to reside on one subsystem for time consistency. A virtualization engine in front can provide consistency across multiple heterogeneous disk systems. It also provides one definition of a consistency group and one replication technique across these heterogeneous disk systems.
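
To make that idea concrete, here is a rough Python sketch, with invented class and method names rather than anything from our actual implementation, of a single consistency-group definition that spans volumes sitting on different back-end arrays:

```python
# Illustrative sketch only -- all class and method names are invented, not taken
# from any product. The point is that a virtualization layer in front of the
# arrays can hold ONE consistency-group definition and stamp one consistent
# point in time across volumes that live on different back-end subsystems.
from dataclasses import dataclass
import time


@dataclass
class Volume:
    array: str   # which back-end disk subsystem the volume lives on
    lun: str     # volume identifier on that subsystem


class ConsistencyGroup:
    """One consistency-group definition spanning heterogeneous arrays."""

    def __init__(self, name: str):
        self.name = name
        self.volumes: list[Volume] = []
        self.markers: list[float] = []   # consistent point-in-time markers

    def add(self, volume: Volume) -> None:
        # Volumes from different vendors' arrays can join the same group,
        # because consistency is enforced above them, not inside each array.
        self.volumes.append(volume)

    def mark_consistent_point(self) -> float:
        # Conceptually: hold write ordering for an instant and stamp every
        # member volume with the same marker, regardless of back-end vendor.
        marker = time.time()
        self.markers.append(marker)
        return marker


group = ConsistencyGroup("billing-and-payroll")
group.add(Volume(array="vendor-A-subsystem", lun="00:1A"))
group.add(Volume(array="vendor-B-subsystem", lun="07:3C"))
print("consistent point taken at", group.mark_consistent_point())
```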

In terms of this s**t being too expensive, we addressed this with a replication approach that changes disk replication from the traditional push model to a less expensive pull model. Expense comes in the form of the resource utilization of the sending and receiving disk systems, the connection between them, and the management needed to support the replication. In the traditional push model, data is pushed by the primary disk subsystem to a secondary disk subsystem in defined segments. The primary site continues to hold the data in cache until the secondary site sends back an acknowledgment that the data was received, validated, and written to non-volatile storage. In the meantime the primary site's cache is filling up with other writes, and its ability to process other applications is impacted. If the response is delayed, or the pipe between the disk subsystems is too small to handle a peak burst or is interrupted in any way, data backs up in the primary site's cache and punctures cache, which then requires a period of management effort to suspend and recover.
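
To see why the push model gets expensive, here is a rough sketch, with a toy cache limit and invented names rather than the way any real subsystem manages its cache, of the primary holding every replicated write until it is acknowledged:

```python
# Illustrative sketch only -- the queue size, segment handling, and "suspend"
# behaviour are toy inventions. In the push model the primary must keep each
# replicated write in cache until the secondary acknowledges it, so a slow or
# broken link backs data up in cache until replication has to be suspended.
from collections import deque


class PushPrimary:
    def __init__(self, cache_slots: int):
        self.cache = deque()           # replicated writes waiting for an ack
        self.cache_slots = cache_slots
        self.suspended = False

    def write(self, segment: bytes) -> None:
        if self.suspended:
            raise RuntimeError("replication suspended; manual recovery needed")
        if len(self.cache) >= self.cache_slots:
            # Cache is "punctured": the pipe or the secondary could not keep up.
            self.suspended = True
            raise RuntimeError("cache full; replication suspended")
        self.cache.append(segment)     # held in cache until the secondary acks

    def on_ack(self) -> None:
        # The secondary confirmed receipt, validation, and the write to
        # non-volatile storage, so the primary can finally release the segment.
        if self.cache:
            self.cache.popleft()


primary = PushPrimary(cache_slots=2)
primary.write(b"segment-1")
primary.write(b"segment-2")
primary.on_ack()   # acks arriving on time keep the cache from filling up
```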

We changed this push model to a pull model, so that the secondary site does the work of data replication while the primary site can concentrate its resources on the application. Instead of writing replication data to expensive cache memory, the primary site writes replication data to low-cost disk with a time-stamped journal. The journal is sent to the secondary site, which then pulls the data based on the journal. Temporary delays no longer cause the primary site's cache to back up; the data just offloads to disk and waits for the secondary site to catch up. The pipe can now be sized for the norm rather than the peak, reducing transmission costs. This is asynchronous replication, so there is the possibility of data loss if the primary site is destroyed. However, the journal can recover to a consistent point in time and identify the data loss. The traditional push model is still required for no-data-loss, synchronous replication. Synchronous push and asynchronous pull can be run simultaneously to support no-data-loss replication to a bunker site and asynchronous replication to an out-of-region site.
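
Here is the same kind of sketch for the pull model, again with invented names and data structures rather than Universal Replicator internals: the primary only appends to a time-stamped journal on disk, and the secondary pulls and applies entries at its own pace:

```python
# Illustrative sketch only. The primary just appends each replicated write to a
# time-stamped, sequenced journal on low-cost disk; the secondary does the
# replication work by pulling entries at its own pace, and everything it has
# applied so far is a consistent, time-identified image.
import time


class JournalPrimary:
    def __init__(self):
        self.journal = []      # entries of (timestamp, sequence, payload)
        self._seq = 0

    def write(self, payload: bytes) -> None:
        # A slow link no longer backs up cache: the entry just waits on disk.
        self._seq += 1
        self.journal.append((time.time(), self._seq, payload))


class JournalSecondary:
    def __init__(self, primary: JournalPrimary):
        self.primary = primary
        self.applied_seq = 0   # everything at or below this is consistent

    def pull(self, batch: int = 100) -> None:
        # Pull journal entries in sequence order and apply them; a temporary
        # delay just means the secondary catches up on the next pull.
        for stamp, seq, payload in self.primary.journal:
            if seq <= self.applied_seq:
                continue
            if batch == 0:
                break
            self.apply(stamp, seq, payload)
            self.applied_seq = seq
            batch -= 1

    def apply(self, stamp: float, seq: int, payload: bytes) -> None:
        pass   # writing to the secondary volumes is omitted in this sketch


primary = JournalPrimary()
secondary = JournalSecondary(primary)
primary.write(b"block-1")
primary.write(b"block-2")
secondary.pull()
print("secondary is consistent up to sequence", secondary.applied_seq)
```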

When I was a young father, my first task of the day was to feed my baby daughter before I went to work. I would "push" a spoonful of food to her and wait for her to consume it before I "pushed" another spoonful. I was the busy adult anxious to get to work and do great things, but at that moment I was gated by the performance of my 8-month-old. When she became older, I could place the food on the table, tell her the food was ready, and she would "pull" the food and feed herself while I went on to my day job. Now that she is an adult and lives across town, I give her metadata in the form of a recipe and she can recreate the meal from her local pantry. If this analogy holds true, perhaps that will be the future of data replication.


Comments (7)

Snig on 23 Jan 2006 at 7:24 am

I guess I should be a little more eloquent when describing software costs from now on. I could just say poo instead of (insert expletive here). ;-)

Well overall it was a good marketing synopsis of some of the advantages that HDS is bringing to the table for asynchronous replication.

Some of the things that I take issue with are:

1. “time stamped journal” – From what I understand you are only using time stamps for the mainframe based data being replicated. You know, the old IBM XRC. The open systems data is still held to an IOD method, but not using time stamps. Of course, I could be wrong so please clarify this for us.

2. “definition of a consistency group” – This is one of my biggest headaches from the disk array vendors. Everyone’s definition is different, and none of these consistency group concepts is perfect. My biggest issue with HDS’s implementation of consistency groups is that I cannot have mainframe data and open systems data in the same consistency group. Even though I have applications residing on multiple platforms (mainframe, unix, and windows) that all share data with each other, I can’t recover all of their data to the same point in time. Now HDS has told us “you don’t need to worry about mainframe and open in the same consistency group”. “You’ll be replicating the data so often, you’ll be within seconds of each other at the time of recovery.” Now those systems are doing hundreds, if not thousands, of transactions per second, so I could be several units of work off at the time of recovery. Granted, this recovery method will be much better than what we have today recovering from tape, but it’s not perfect and I hope HDS will strive to improve upon this in the future. Consider this the gauntlet being thrown down for HDS to solve.

3. “have all the disk for your applications on one subsystem (for time consistency)” – I don’t believe in my comment that I ever stated that the data had to be on a single disk subsystem for “time consistency” as you put it; however, I agree with this statement and applaud what HDS has done with the Tagma and its enablement of replication of heterogeneous storage. My primary reason for all of the data being on a single subsystem was the huge cost savings for the replication software. Why buy three replication licenses when I can buy a single license?

4. The overall cost of the software for these disk subsystems is totally out of whack and way too expensive. I’m not going to publish any numbers here, but I feel (and every customer does, for that matter) that all the disk vendors are overpricing their software. I know (and you do too) that it doesn’t cost this much to develop updates to 4th generation software. I mean, you’re still selling Graphtrack, for goodness sake. We’ve been using Graphtrack since the 7700 Classic days, and it hasn’t changed so much that we should be charged for the new version when we buy a new disk subsystem. It may be okay to charge a little more for Universal Replicator since it’s new, but at the current cost you’ll make up the expense it took to develop it after only a few sales. Bottom line, no vendor’s software is worth what they charge for it!

Thanks for including me in one of your posts. I dig what HDS is doing with its technology, but you guys still have some work to do. Keep listening to customers and addressing their needs and you’ll be fine.

Take Care,

Snig

Pavel Mitskevich on 25 Jan 2006 at 2:58 am

Recipes are already used in backup and recovery procedures. A custom recovery procedure may include the following steps:

1. Stop all replication tasks.
2. Create a copy of the existing data.
3. Find the standby database.
4. Find a complete set of redo logs.
5. Identify the point in time to recover to, if the database should not be recovered to the latest moment.
6. Identify the objects to recover, if the complete database should not be recovered.
7. Start the recovery and wait some minutes/hours.
8. Ask the applications’ administrators to check the integrity of the information (and that it is synchronized with other applications, as Snig already wrote).
9. Ask the developers to step in if the result is not successful.

A lot of backup techniques may be used: cold/hot backup of the database, backup of database logs, database replication, snapshots, and remote copies of volumes. In most cases several techniques have to be applied when recovering data, and in my opinion that is the biggest issue. A full stack of recovery may span the volume, database, and application levels, but sometimes a few manipulations at the database level are the fastest way to restore information and operations…

A lot of recipes may be produced, but not all of them will be optimal. There is no universal recipe for restoring data, because the procedures to store and process information (not just files or databases) are not unified. CDP, like ILM, is one of the most popular buzzwords, but there are no real products yet that integrate with databases and the backup/restore techniques mentioned above.
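
A minimal, runnable sketch of the kind of recovery recipe described above, with every step reduced to a labeled placeholder for illustration:

```python
# Runbook-style sketch only: each step is just a labeled placeholder, but the
# ordering and the two decision points (recovery time, recovery scope) follow
# the procedure outlined above.
def step(description: str) -> None:
    print("->", description)


def run_recovery(recover_to_latest: bool, full_database: bool) -> None:
    step("stop all replication tasks")
    step("create a copy of the existing data")
    step("find the standby database")
    step("find a complete set of redo logs")
    if not recover_to_latest:
        step("identify the point in time to recover to")
    if not full_database:
        step("identify the objects to recover")
    step("start the recovery and wait some minutes/hours")
    step("have application administrators check integrity and cross-application sync")
    step("involve the developers if the result is not successful")


run_recovery(recover_to_latest=False, full_database=True)
```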

Snig on 30 Jan 2006 at 10:45 am

I guess Hu doesn’t respond to comments?

Hu Yoshida on 31 Jan 2006 at 6:07 am

Snig,
Thanks for your comments and sorry for the delay. Been a little busy.

Yes, although the journal is time stamped, for now we use the same methods as before to ensure proper sequencing of data blocks: time stamps for mainframe and sequencing algorithms for open systems.
You are also correct that we do not include mainframe and open systems data in the same consistency group. We recognize that as a requirement for customers that have a mixed environment where applications span mainframe and open systems platforms and share data. We will strive to improve this in the future.
I agree with your third point. Having one replication license for the Tagmastore controller eliminates the need for additional licenses on externally attached storage, which reduces maintenance as well as license costs.
On your fourth point: software, especially software that is tightly coupled to hardware, is costly to develop and maintain across generations of hardware platforms. Software licensing is difficult to get right, and we are constantly reviewing our pricing and maintenance fees as technology and business models change. Bottom line, if the software does not provide an ROI, it won’t sell.
Thanks for taking the time to let us know your thoughts.

Robin on 27 Jun 2006 at 4:49 am

That's a nice bit of work and it's very informative for all the readers of this blog. But one thing I have observed is that many users are unaware of the methods of data or application recovery, or even of the steps that should be taken when such an event happens. Even I was unaware of the simple steps that can be followed so that disasters can be avoided. I had to visit Disk Doctors Labs Inc to recover the data that was on my hard disk. One thing I will say is that the service was good and the delivery was on time.

Jeremiah Owyang on 27 Jun 2006 at 4:56 am

Robin – by any chance do you work for the company you mentioned? If so, disclosure is always appropriate.

Jake on 09 May 2007 at 4:45 pm

At this site you can compare data recovery quotes. I haven't seen any other sites that do that; it was very valuable in my case.

http://www.datarecoverycompare.com
