Effectiveness, Not Efficiency

More than a few people are running all kinds of benchmarks to try to measure the “efficiency” of a Prescott as opposed to a Northwood, and whether Prescotts will scale better in performance than Northwoods.

I think these people are barking up the wrong tree, at least for overclockers.

Overclockers really couldn’t care less about “efficiency.” If they did, everyone would buy a Mac; PowerPC processors deliver more IPC than AMD x86 designs and a lot more than the PIV design.

The reason we don’t all own Macs is that what we really care about is effectiveness: How much can I get out of this processor, and how much more can I get out of it than from any alternative?

I really don’t care if it takes 4 or 5 or 6 or 10GHz to achieve my goal, so long as I can run the processor at those speeds.

The normal reason for extending a CPU pipeline is to do less work per clock cycle but to run much faster, and end up with a net plus. Historically, processors that do less work faster have outperformed processors that do more work less often, simply because those designs have gained more from being cranked way up than they lose by doing less per clock cycle.

The PIV is a classic example of this. It does about 20% less work per clock cycle than a PIII/Athlon, but can be cranked up about 40% more than the Athlon design.

If Intel came up with a CPU tomorrow that ran at 10GHz but was only 60% as efficient as a Northwood, I’d buy it, because 60% of 10GHz = 6GHz, and I know I’m not going to get anywhere near 6GHz from a Northwood.
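Here’s a minimal sketch of that back-of-the-envelope math in Python; the numbers are just the rough figures from the two examples above, and the effective_speed helper is purely illustrative, not a benchmark.

```python
# Back-of-the-envelope "effectiveness" math from the two examples above.
# effective_speed and these numbers are illustrative, not benchmark results.

def effective_speed(clock_ghz: float, relative_ipc: float) -> float:
    """Effective throughput: clock speed scaled by relative work done per cycle."""
    return clock_ghz * relative_ipc

# PIV vs. PIII/Athlon: roughly 20% less work per cycle, roughly 40% higher clock.
athlon = effective_speed(clock_ghz=1.0, relative_ipc=1.0)
p4 = effective_speed(clock_ghz=1.4, relative_ipc=0.8)
print(f"PIV delivers about {p4 / athlon:.0%} of the Athlon's throughput")  # ~112%

# Hypothetical 10GHz chip that is only 60% as efficient as a Northwood.
print(f"{effective_speed(10.0, 0.6):.1f} GHz-equivalent")  # 6.0 GHz-equivalent
```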

Prescott isn’t bad for an overclocker because it does less work per clock cycle than Northwood. It’s designed to do that, and in the past, that has been a good design move.

This only matters to a non-overclocker choosing between a Prescott and a Northwood running at the same speed, but even there, what good does it do someone buying now to know that someday, over-the-rainbow, a 4GHz Prescott might be as efficient as a processor that won’t even exist?

Prescott is bad for an overclocker because you can’t crank it up to anywhere near the degree you might expect from both a lengthened pipeline AND a process shrink. Even if it were every bit as efficient as a Northwood, it still would be a bad choice because it overclocks terribly given all that’s been done to it. Right now, it only overclocks a few percentage points faster than a Northwood, and not much better than that when you put it in the deep freeze. It ought to do at least 30% better.

Unlike with the original PIV, there are some pretty strong indications that Intel extended Prescott’s pipeline not for the usual reason of speed, but simply to handle the additional heat the chip generates (in all likelihood caused by Intel’s switch in dielectrics and/or the use of strained silicon).

This is very important to understand: Prescott doesn’t act like all the previous generations of Intel processors when it comes to heat and power. By those standards, it’s not behaving normally. Perhaps this is Intel’s fault in its electrochemical choices; perhaps the old rules just don’t apply anymore once you get past a certain speed; most likely, it’s both.

In contrast, from rumors I’ve heard, AMD’s 90nm processors aren’t having this problem; the power curves are more “normal.”

So measuring “efficiency” is a case of geek inconsequentialism for an overclocker (and you have to wonder whether or not those talking about it had a little blue Intel birdie whisper in their ears to divert them from the real problems).

We just want to know the bottom line, not the how, but the how much.

Ed
