Cores, baby, cores! A new CPU race is on, but serious questions about application software are raising bottleneck issues.
When Intel hit the thermal wall with Prescott, its engineers shifted gears to multi-core CPUs as the new holy grail of the CPU world. What a great way to overcome the heat issue – distribute the work among cores!
However, it looks like the engineers are getting ahead of the real world – the issue bubbling up is the complexity of developing multi-core apps. In fact, so critical is the need for apps to drive multi-core demand that Microsoft and Intel are donating $20 million to universities in an effort to jump-start parallel processing research. One wonders if the hardware is rapidly outpacing the software.
It’s one thing to develop apps for 4 or 8 cores – but consider the view from Forrester Research:
“Expect x86 servers with as many as 64 processor cores in 2009 and desktops with that many by 2012.”
According to one company in the software biz,
“As you would expect, the high-end developers are familiar with threading. After that, it drops off pretty quickly.”
Not that this will stop Intel from upping the core race – as stated by James Reinders, one of Intel’s chief software guys:
“While we’re still wrestling with how do I use two, four, eight cores, we’re going to throw into the mix a processor with dozens of cores…[code name: Larrabee]”
While you might think that breaking a task into parts is not all that difficult, it is when one part of the program needs input from another – then you run into scheduling issues that complicate things very quickly. Load on top of that potential memory and cache bottlenecks, and the problems presented by multi-core CPUs start to look pretty interesting. Fold into this whole scene the need for a LOT of parallel-programming smarts and you begin to see some real speed bumps coming up fast.
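To make the dependency problem concrete, here’s a minimal sketch (hypothetical example, not from any particular app) of a two-stage pipeline in Python: the second stage can’t touch an item until the first stage has produced it, so the handoff serializes the work no matter how many cores you throw at it.

```python
import threading
import queue

# Hypothetical two-stage pipeline: stage_b needs stage_a's output,
# so the dependency serializes the work regardless of core count.
results = []
handoff = queue.Queue()

def stage_a(items):
    for x in items:
        handoff.put(x * 2)   # produce an intermediate result
    handoff.put(None)        # sentinel: no more work coming

def stage_b():
    while True:
        x = handoff.get()    # blocks until stage_a delivers
        if x is None:
            break
        results.append(x + 1)

t1 = threading.Thread(target=stage_a, args=(range(5),))
t2 = threading.Thread(target=stage_b)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [1, 3, 5, 7, 9]
```

Two threads, but at any given moment only one of them has useful work for each item – which is exactly the scheduling headache the paragraph above describes.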
For those wishing to delve into the benchmarking side of things, this paper is an eye-opener:
“…the initial results that we’re seeing in the lab are clearly demonstrating that no matter how ‘beefy’ the multicore processor, there is always some sort of bottleneck. The practical side of this is that the system developer will also likely encounter these bottlenecks.”
This is not an uncommon finding. Sandia National Laboratories (“a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy’s National Nuclear Security Administration”) reports:
“A Sandia team simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. Exceeding eight multicores causes a decrease in speed. Sixteen multicores perform barely as well as two, and after that, a steep decline is registered as more cores are added.”
Now maybe I’m a bit myopic on this topic, but I do begin to wonder if the CPU guys are tilting at digital windmills. There is no doubt that the multi-core technical achievements are stunning, but in a consumer environment characterized less by CPU horsepower and more by “just-in-time” mobile computing, I begin to wonder if going far and fast on 64-core CPUs will be the hit Intel and AMD expect.
In a world increasingly defined more by internet access and less by CPU cycles, it just might be that Intel, AMD, and Microsoft will find that the road more traveled leads, paradoxically, toward less power and eye-candy rather than more. This paradigm-shifting stuff is anathema to established technologies but damn interesting to consumers. It could be a terrific opportunity for companies that don’t have vested interests in “the way it is”.