Hitachi Data Systems Blog

Data Center Advisors
From ASIC to Microprocessor and Back Again

Other than being an allusion to J. R. R. Tolkien’s The Hobbit, there is real meaning in the title of this post, which I’ll get to towards the end. What I want to start with is a look back into the past and talk about, of all things, math co-processors.

Do you remember them? If you go back that far in personal computing land, you should recall what an external FPU, or math co-processor, is. Here’s the Wikipedia definition for context, which I find very interesting for the purposes of this post:

A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating point numbers. Typical operations are addition, subtraction, multiplication, division, and square root. Some systems (particularly older, microcode-based architectures) can also perform various transcendental functions such as exponential or trigonometric calculations, though in most modern processors these are done with software library routines. In most modern general purpose computer architectures, one or more FPUs are integrated with the CPU; however many embedded processors, especially older designs, do not have hardware support for floating-point operations. In the past, some systems have implemented floating point via a coprocessor rather than as an integrated unit; in the microcomputer era, this was generally a single integrated circuit, while in older systems it could be an entire circuit board or a cabinet. Not all computer architectures have a hardware FPU. In the absence of an FPU, many FPU functions can be emulated, which saves the added hardware cost of an FPU but is significantly slower. Emulation can be implemented on any of several levels: in the CPU as microcode, as an operating system function, or in user space code. (source: http://en.wikipedia.org/wiki/Math_coprocessor)

Notice the key sentence in the quote above: most modern processors have replaced math co-processors with integrated floating-point units and software libraries. So what has happened is that a previous cottage industry, which provided ASICs functioning alongside a CPU, has disappeared.
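To make the emulation point from the quote concrete, here is a toy sketch (mine, not from the Wikipedia article) of multiplying two positive floats using only integer arithmetic on their mantissa and exponent parts. This is, in miniature, the kind of work a hardware FPU performs directly and a software emulation library performs instruction by instruction:

```python
import math

def soft_mul(a, b):
    """Multiply two positive, normal floats using integer arithmetic only
    on the mantissa/exponent decomposition -- a sketch of FPU emulation."""
    # Decompose each float into mantissa (in [0.5, 1)) and exponent.
    ma, ea = math.frexp(a)
    mb, eb = math.frexp(b)
    # Scale each 53-bit mantissa up to an exact integer.
    ia = int(ma * (1 << 53))
    ib = int(mb * (1 << 53))
    # Integer multiply gives an exact 106-bit product; renormalize it
    # back into float range by subtracting the scaling exponents.
    prod = ia * ib
    return math.ldexp(prod, ea + eb - 106)

print(soft_mul(3.5, 2.25))  # 7.875, same as the hardware result
```

The software path gets the same answer as the hardware multiply; it just takes many integer operations to do it, which is exactly why emulation is "significantly slower."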

However, that hasn’t stopped new technologies from cropping up in the area of numerical processing. A type that has become extraordinarily popular for graphics and vector processing of late is the GPU. For specific numerical and highly parallel tasks, GPUs paired with standard x86 CPUs have arrived on the scene and become popular for increasing compute capability while decreasing physical system footprint. Generalizing a bit, what I see is the sedimentary hypothesis in action: a separate hardware function lives for a while, but eventually the microprocessor, its libraries, and its compilers become good enough that the need for the separate hardware goes away. Repeat cycle!

Now let’s take a look at what Intel has been doing with their microprocessor family around embedded applications such as storage. Specifically, if you read some of Intel’s product briefs on their microprocessors for embedded applications and you’re a storage vendor, you might think that hell has finally frozen over.

Intel has been implementing embedded application functionality into their Xeon processor line, adding a veritable alphabet soup of TLAs. Here are but a few of the capabilities:

  • Internal support for RAID 0, 1, 5, and 10
  • Integrated SAS and PCIe
  • Support for AES, Hashing, Chunking and Compression
  • Non-transparent bridging
  • Various virtualization assists
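Several of these capabilities (hashing, compression) are plain byte-stream transforms. A quick sketch of what they look like in pure software, using Python’s standard library — this is the per-byte work that on-processor assists are meant to accelerate:

```python
import hashlib
import zlib

# A repetitive data block, standing in for a storage stripe.
data = b"storage block " * 256

# Hashing: a fixed-length fingerprint of the block (used for
# integrity checks and deduplication lookups).
digest = hashlib.sha256(data).hexdigest()

# Compression: shrink the block before it hits the media.
packed = zlib.compress(data, level=6)

print(digest[:16], len(data), "->", len(packed))
```

Both transforms walk every byte of the block, which is why vendors care whether that loop runs in an offload ASIC or on the main processor.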

There’s also the assertion from Intel that software RAID stacks with Intel microprocessor assists are on par with ASICs that support RAID offload from a standard microprocessor.
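Intel’s assertion is easier to evaluate once you see how little math RAID 5 actually requires. Here is a minimal sketch (illustrative only, not Intel’s or anyone’s shipping implementation) of the XOR parity loop at the core of a software RAID 5 stack — the loop that processor assists such as wide SIMD XOR accelerate:

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together -- the parity math of RAID 5."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# Stripe three data blocks plus one parity block across four "drives".
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# Lose d1; rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

Because parity is just XOR, the argument over ASIC versus microprocessor comes down to how fast each can stream blocks through that loop, not whether the math is feasible.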

My response: Okay, this is nothing more than the sedimentary hypothesis in action, and eventually Intel’s Xeon SoC for embedded systems will solve some, but not all, storage problems. Furthermore, new whitespace problems will emerge in the storage market, and guess what? Intel won’t have that capability on or near their processor for a while — just like we see with math co-processors being absorbed into the microprocessor and GPUs rising, Phoenix-like, from the math co-processor ashes. So from ASIC to microprocessor and back again!

Any ideas for what the white space could be? Drop me a line or comment here if you have any suggestions. Otherwise, tune in soon to read some ideas in a future post.


Comments (2)

Bob Primmer on 27 Jan 2012 at 4:11 pm

good post, Michael. Hadn’t thought of the FPU angle before…

Michael Hay on 03 Feb 2012 at 7:44 pm

Thanks Bob, appreciate the comment.

Michael Hay
