Ivy Bridge and Haswell

Intel announces 22nm processors, but it seems like their answer to “too much” is more.

We have a few more Intel code names to toss into the pile.  Their 22nm processors are supposed to be called Ivy Bridge and Haswell.  

That’s nice, but outside of broadening SSE to 256-bit, a few other tweaks, and a little more cache, all you can say about these chips is More Cores.  

Great.  We can’t use four cores now, so let’s add more!!

Seriously, why does Grandma need six or eight cores?  Why does Granddaughter need six or eight cores?  I hear some of the justifications being spewed out for this, and I try to see a corporate world where word processing is out, video processing is in.  And if I try really, really hard for a while, I can actually get some blurry glimpses of something like that, a dozen years from now.  Not now, not soon. 

For one thing, you SETI people had better stop looking and start finding, because we’re going to need some space aliens soon to whom (or what) we can outsource the multicore programming, since this is apparently something beyond mere humans.   

You think I kid?  Well, here are a few quotes from this recent Fortune Magazine article:  

“But programming in parallel is simply too complex for the average code writer, who has been trained in a very linear fashion.   In conceptual terms, traditional coding could be compared to a woman being pregnant for nine months and producing a baby. Parallel programming might take nine women, have each of them be pregnant for a month, and somehow produce a baby.”

“ ‘If I were the computer industry, I would be panicked, because it’s not obvious what the solution is going to look like and whether we will get there in time for these new machines,’ says Kunle Olukotun, a computer science professor who is attacking the multicore challenge at Stanford’s new Pervasive Parallelism Lab. ‘It’s a crisis, and I wonder whether what we are doing and what is happening within the industry is too little, too late. . . .’ ”

“The use of chips with four cores in the past year has meant that such a PC today is no faster for many key tasks than, say, a comparable computer purchased three years before. Worse, with chips of six or more cores on the way, your favorite applications could actually run more slowly.”

Now I have no idea why the writer thinks six or eight cores would be slower than one or two, but we have plenty of evidence today that unused cores don’t help. 

 

Intel’s goal seems to be that software writers are supposed to do whatever it takes to write their code so that it will automatically split up tasks among however many processors Intel feels like giving them in any given process cycle.  That’s very convenient for Intel, but it makes little to no sense for most programs and programmers.  Many, probably most, computing tasks just can’t be split up effectively, or require so little computing power that splitting the task among many processors would be like hiring four or six or eight people to carry a bucket of water.  
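If you want to put a rough number on that bucket-of-water problem, Amdahl’s law will do it.  Here’s a back-of-the-envelope sketch in Python, with made-up figures of my own, not anything out of Intel’s slides:

# Amdahl's law: the speedup from N cores when only a fraction of the
# work can actually be split up.  Illustrative numbers only.
def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A task that's only 25% parallelizable -- and email, browsing and word
# processing are arguably worse -- barely notices the extra cores:
for cores in (1, 2, 4, 8):
    print(cores, "cores:", round(speedup(0.25, cores), 2))
# 1 cores: 1.0   2 cores: 1.14   4 cores: 1.23   8 cores: 1.28

Eight cores, a whole 28% faster.  That’s the bucket of water.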

Other tasks face different bottlenecks.  There’s no point having lots of CPU cores do gaming work if one video card can’t handle the extra pixels, and I’m sorry, but the real-world answer to that is not “buy four video cards.” 

What would probably make more (though still not much) sense is to have an OS that could intelligently give programs and/or major subfunctions their own CPU rather than make them share. 
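The plumbing for that already exists, for what it’s worth; pinning a program to its own core is a one-liner on most operating systems.  A minimal sketch, assuming Linux and Python (Windows has its own SetProcessAffinityMask call for the same job):

import os

# Pin the current process to core 0 so it stops bouncing between CPUs --
# roughly the "give each program its own CPU" idea.  Linux-only call.
os.sched_setaffinity(0, {0})       # 0 = this process, {0} = allowed cores
print(os.sched_getaffinity(0))     # confirm which cores we may run on

The hard part isn’t the call; it’s a scheduler smart enough to know which programs deserve a core to themselves.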

The real difficulty, though, is the simple reality that most people hardly use one CPU, much less clusters of them.  I think of one Sixpack friend.  He browses, he reads and writes email, he’ll look at a video someone sends him.  Why would he need, now or ever, four or six or eight processors? 

I’ll say the same thing another way and make it a rule.  Let’s call it the 60/10 rule.  If you regularly do any single action on your computer that takes more than sixty seconds of computing time, or you engage in repetitive actions that take more than ten seconds of computing time each, then maybe whatever it is you’re doing is a good candidate for multicore action.  Otherwise, it and you are not.    
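If you’d rather have the rule as code than prose, here it is, sketched in Python; the thresholds are the rule’s, everything else is just for illustration:

def multicore_candidate(single_action_seconds=0.0, repeated_action_seconds=0.0):
    # The 60/10 rule: one regular action over 60 seconds of computing time,
    # or repetitive actions over 10 seconds each, might justify more cores.
    return single_action_seconds > 60 or repeated_action_seconds > 10

print(multicore_candidate(single_action_seconds=1800))  # video encode: True
print(multicore_candidate(repeated_action_seconds=2))   # spell check: False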

Now if you meet that test, well, bless you.  Go forth and multiply your processors.  But if you or someone you know doesn’t, why waste the money? 

Even if you meet that test, just how many “yous” are out there?  For sure, there are some.  But 60/10 does separate the men from the boys.  How many are men, how many are boys?  I would bet the men are greatly outnumbered by the boys.    

This test isn’t just for you, either.  It works just as well for the software writers.  If rewriting all the code means a dramatic improvement in performance and people are willing to pay for that, sure, they’ll rewrite the code.  But if it doesn’t, and/or the average customer won’t pay for it, why bother? 

This is Intel’s long-term problem with its “regular” CPU line.  They’re building solutions that are in search of a problem for most people.  We’re not talking overkill; we’re talking about shooting missiles at sand dunes. 

Why should you care, especially if you’re one of the “men” in the bunch?  I think Intel is going to find, in this IDF and those that follow, that they’re going to be increasingly pressed to answer not “what” but “why” questions, as in “Why does the average person need this?”  And “video encoding” or “gaming” is not going to cut it as an answer. 

And I think Intel, deep down inside, knows that they have no answer to that.  I also think Intel is hoping very, very hard that some answers pop out of the blue, but that’s all it is, hope.  Given their actions, I think what they expect is that the “regular” CPU market will shrink into some superset of the current server market.  People who can actually use the power and are willing to pay for it will do so; the rest of us will get (and pay and overclock) less.

By the time we get to Ivy Bridge and Haswell, what was once the regular Intel line will look like Xeons and, more importantly, will be priced like them, too.   

Ed

