Workload distribution brings more value to USP-V(M) customers
by Michael Hay on Sep 24, 2010
I have written many posts on scaling and the unique value we have brought to the market, as has my colleague Hu. Many of my discussions have centered on our application of differentiation in a variety of disciplines including unique silicon, value-added microcode, and dense packaging.
Hitachi has invested, and will continue to invest, in all areas that generate differentiation resulting in value delivered to our customers. We will never limit ourselves to supposed industry trends, such as the notion that everything must be done in microcode or software just because facsimile companies with ginormous marketing machines say it must be so. (Actually, with verticalization appearing in many segments of the technology sector, companies are making investments in silicon, software, user experiences, packaging, batteries and power, etc. They aren’t limiting themselves to being one-trick ponies when they delight their customers by solving challenging problems. My most recent post, a response to George Crump, begins the debate on this topic.)
Adding to my discussions, this post focuses on one lesson we’ve learned in the USP-V(M) class of products. As we’ve evolved our architecture, we have found that regardless of the allocation of physical resources in a system, if resources aren’t properly pooled or load distribution mechanisms aren’t employed, an imbalanced system is highly likely. When this occurs, resources in one part of the system can be driven to 100% utilization while others potentially remain idle.
In the USP-V(M), there are indeed many resources distributed throughout the system, and to avoid imbalance we use workload distribution mechanisms to spread the load across them. Practically speaking, this means that if a processor is very busy and more workload arrives at that busy resource, the USP-V(M) can distribute the workload to other, less busy resources within the system. Applying this kind of load balancing technique has yielded many improvements, including, but not limited to, storage virtualization performance improvements over the USP/NSC.
Workload distribution works by allowing heavily loaded microprocessors to hand off some tasks pertaining to data, metadata, software (ShadowImage, TrueCopy, UniversalReplicator), and Hitachi Dynamic Provisioning to other less busy microprocessors. Distributed processing on an individual microprocessor begins when it reaches a sustained 50% average busy rate. At that point, using a round-robin algorithm and taking into account target processor busy rates, another processor on the same board is selected to assist the requesting processor. If a microprocessor is already participating in another microprocessor’s load and receives its own host I/O command, the new workload could be distributed among the other processors on the board.
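To make the selection step above concrete, here is a minimal sketch of what a round-robin helper search on one board could look like. All names, data structures, and the "stay below the trigger" eligibility rule are my own assumptions for illustration; only the 50% sustained-busy trigger, the round-robin order, and the same-board scope come from the description above.

```python
DISTRIBUTION_TRIGGER = 0.50  # sustained average busy rate that starts hand-off

def select_helper(requester_idx, board_busy_rates, rr_cursor):
    """Round-robin over the other microprocessors on the same board,
    taking each candidate's busy rate into account.

    board_busy_rates: busy fraction (0.0-1.0) for each MP on the board.
    rr_cursor: index where the round-robin scan resumes next time.
    Returns (helper_idx, new_cursor), or (None, rr_cursor) if nobody qualifies.
    """
    n = len(board_busy_rates)
    for step in range(n):
        idx = (rr_cursor + step) % n
        if idx == requester_idx:
            continue  # a processor cannot assist itself
        # Illustrative rule: only pick a helper that is itself below the trigger.
        if board_busy_rates[idx] < DISTRIBUTION_TRIGGER:
            return idx, (idx + 1) % n
    return None, rr_cursor
```

In this sketch the cursor advances past the chosen helper, so successive requests rotate through the board rather than repeatedly loading the same processor.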
Microprocessors that currently have no host I/O of their own can become up to 100% busy on distributed loads in support of other microprocessors. However, in the presence of host I/O on a port associated with a microprocessor, the sum of that processor’s host I/O and distributed I/O must stay below 50%. When host I/O on a microprocessor exceeds the 50% busy rate, it is no longer a candidate for accepting new tasks from “needy” processors. As a final point of clarification, this distribution mechanism does not represent load balancing among host ports, but instead a degree of parallel processing for I/O commands to the back-end based on available processor cycles.
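The two acceptance rules above can be summarized as a small predicate. Again this is only a sketch under my own naming assumptions; the two limits themselves (100% for processors without host I/O, a combined 50% cap for processors with host I/O) are taken from the text.

```python
HOST_IO_CAP = 0.50  # combined host + distributed busy cap for host-I/O processors

def can_accept_distributed_work(has_host_io, host_io_busy, distributed_busy, extra):
    """Would taking on `extra` distributed load keep this MP within the rules?

    - No host I/O of its own: may run up to 100% busy on distributed work.
    - Has host I/O: the sum of its host I/O and distributed I/O must stay
      below the 50% busy rate.
    """
    total = host_io_busy + distributed_busy + extra
    if not has_host_io:
        return total <= 1.0
    return total < HOST_IO_CAP
```

Note how the predicate captures the asymmetry: an otherwise idle processor is a far more generous helper than one that is also serving its own ports.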
So workload distribution is one tactic we’ve employed to counteract the potential for an imbalanced system, but what’s next?
Finally, I would like to thank Mr. Alan Benway for the assist in ensuring that I got the right information on this valuable capability of the USP-V(M).