There are complaints here and there that a few manufacturers are doing their own overclocking of motherboards.
Whether intentionally or not, FSBs have always tended to vary a tiny bit, but by less than 1%.
Now we’re getting reports that FSBs are being hiked by as much as 3%.
The Right Approach
This fazes me not one bit. If I hear about a mobo running at, say, 137MHz rather than 133MHz, I just make a small adjustment to the benchmark.
I know from experience, as a rough rule of thumb, that a 10% increase in CPU speed will yield about a 6% increase in real performance in real apps, or 60% of the CPU increase. So if the CPU is running 3% faster, 60% of 3% is 1.8%.
You may want to say that FSB is being increased, too. Based on this study on the effect of increasing FSB on PIV, the scaling of performance due to increased FSB looks to be roughly 10%.
Take 60% + 10% and you get a 70% adjustment for an increase in CPU and FSB speed for PIVs. 70% of 3% is 2.1%.
So if I felt compelled to compare apples to apples, I’d just lower the overclocked benchmark by about 2%.
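If you want the arithmetic spelled out, here’s a minimal sketch in Python. The 60% CPU and 10% FSB scaling factors are just the rules of thumb above, not measured constants, and the function name is mine, purely for illustration.

```python
# Minimal sketch of the adjustment described above. The 60%/10% scaling
# factors are the article's rules of thumb for a P4, not measured constants.
def adjusted_score(measured_score, clock_increase,
                   cpu_scaling=0.60, fsb_scaling=0.10):
    """Scale a benchmark score from a factory-overclocked board back to stock.

    clock_increase: fractional overspeed, e.g. 0.03 for a 3% FSB hike.
    cpu_scaling + fsb_scaling: fraction of the clock increase that shows up
    as real performance (70% total for a P4, per the rule of thumb above).
    """
    effective_gain = (cpu_scaling + fsb_scaling) * clock_increase
    return measured_score / (1 + effective_gain)

# A 3% factory overclock inflates results by about 2.1%, so a score of
# 1000 adjusts back to roughly 979.
print(round(adjusted_score(1000, 0.03), 1))
```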
I suppose you could quibble over the calculations, but the answer really isn’t going to change too much no matter what you do. It can’t be more than 3%; you can’t get more results than the amount of increased work. It can’t be less than 0%; when something runs faster, it doesn’t go slower. If you want to say the adjustment is 80% rather than 70%, OK, the adjustment is 2.4% rather than 2.1%. Whoop-de-doo.
The Literal Approach
To me, at least, this is no big deal. For more than a few, though, it apparently represents an impossible mental challenge, and the response I often see when this comes up is “the benchmark is invalid.”
I saw that comment someplace over a 1% difference in FSB.
To me, this is mentally challenged, and I’m being euphemistic.
It seems that for these people, unless you have an exact number for an exact condition, you have nothing. Some even think this is “scientific.”
This is just silly and displays no comprehension of the phenomenon, or even of science (or that benchmark scores themselves can vary).
For the type of benchmarking being done, and so long as a bottleneck is not reached, what you have is roughly linear scaling. Once you establish some data points, you get an idea of the slope of that scaling, and so long as you don’t hit a bottleneck, you can pretty accurately predict what a change in a variable (like CPU speed or FSB) is going to get you.
Let’s say we’re wondering how much better a 3.4GHz 200MHz PIV will perform compared to a 3.0GHz. Using the principles described here, a little math tells you you’ll see about an 8% overall improvement. For some activities, it will be less; for some, more.
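For the record, here is that fifth-grade math written out as a couple of lines of Python; nothing in it is measured, it’s just the same 60% scaling assumption applied to the clock-speed difference (both chips sit on a 200MHz FSB, so only the CPU slope applies).

```python
# Rough estimate of a 3.4GHz vs. 3.0GHz P4, both on a 200MHz FSB, so only
# the CPU-speed slope applies. The 0.60 factor is the rule of thumb above.
cpu_increase = (3.4 - 3.0) / 3.0        # about a 13.3% clock increase
estimated_gain = 0.60 * cpu_increase    # about an 8% real-world gain
print(f"{estimated_gain:.1%}")          # -> 8.0%
```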
That estimate might be off a little (actually, it probably leans to the low side). You could see 9%. You might see 7%. It could vary a bit more than that, especially for specific activities (different activities have different performance slopes).
But it certainly isn’t going to be 30% faster, and it certainly isn’t going to be slower than a 3.0GHz.
This is hardly genius on my part; to me, it’s just some observing and then making fifth-grade math work for me.
But I get emails from people who tell me that I should not (actually, cannot) make these kinds of estimates until the product is out, a test is run, and the magic number pops out.
To me, that is intelligence-impaired.
When you make these kinds of estimates, you realize that this kind of information doesn’t collapse, it slowly degrades. The bigger the jump you make, the more it degrades, and the more likely it is that something will come up that will change the slope.
Sure, test when and if you can. But when you can’t, don’t just throw up your hands and say it’s unknowable until it’s tested. Just add a bit more information like scaling to it, and estimate based on that.
“It’s Not Scientific”
When was the last time you saw an astronomer attach one end of a tape measure to Earth and hang on to the other end while taking a ride to the Andromeda Galaxy?
When astronomers measure distant objects, they can only very rarely do actual measuring (like bounce a radio wave off an object) and test. At best, they make mathematical calculations based on (fairly accurate) estimates, and for more distant objects, they make estimates based on estimates (and some of the assumptions in those estimates are pretty dicey). More on that here.
The numbers aren’t exact, but they’re good enough (not like we’re going to visit anytime soon) to tell us roughly where a place is, or it’s just the best we can do until something or somebody better comes along.
So science has no problem with estimating when it’s better than nothing.