
I got a few questions on the Celeron D


PDL

Member
Joined
Apr 21, 2001
Location
OH-Heartland of the USA
Hey all, just curious about a few things.

I see a lot of folks OCing P4s, and by the calculations they are getting 20% to 30% in most cases. Now, that may not even be a reliable way to compare overclocks, since the end performance is what we are all looking for.
Mine is at about a 50% overclock, but I'm not sure it is everything I imagine it to be. Forgive me, as I have been an AMDroid for years.
Q1: What is the multiplier on a P4 3.2? My Celeron D is at 18x and I have to guess that the P4s are much lower considering the higher FSB default setting, compared to mine at 533MHz. Is this similar to the old T-Bird 'B' and 'C' cores?
Q2: Why aren't we seeing OCs in the neighborhood of 50%+ on the P4 chips? Are there other components that cannot handle the overall speed? Or is it just the design of the CPU that means the upper limit is reached at a lower percentage?
And Q3: Besides the lower bandwidth on my Celeron D due to the lower FSB, what other areas am I losing out on compared to the P4 units? Ah yes, I also realize there's a performance loss due to the smaller cache. What else?
Seems odd to me that I can crank this up to 3.7GHz and, while it is fast, it is not what I would call 'twice' as fast as my TBred running at 2GHz and a 185MHz FSB.

Thanks for the education all! :D
 
PDL said:
Q1: What is the multiplier on a P4 3.2? My Celeron D is at 18x and I have to guess that the P4s are much lower considering the higher FSB default setting, compared to mine at 533MHz. Is this similar to the old T-Bird 'B' and 'C' cores?
There's no point in guessing. Just divide the CPU speed by the actual bus clock (remember the quoted FSB figure is quad-pumped). Math is your friend.
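To put numbers on it (the figures below are assumptions for illustration, not necessarily your exact chips): the advertised FSB is quad-pumped, so divide the core clock by a quarter of the rated number.

[CODE]
# Multiplier = core clock / real bus clock. The advertised FSB (533/800MHz)
# is quad-pumped, so the actual bus clock is a quarter of that figure.
def multiplier(core_mhz, rated_fsb_mhz):
    bus_clock = rated_fsb_mhz / 4          # 800MHz rated -> 200MHz bus clock
    return core_mhz / bus_clock

print(multiplier(3200, 800))   # assumed P4 3.2 on an 800MHz bus       -> 16.0x
print(multiplier(2400, 533))   # assumed Celeron D 2.4 on a 533MHz bus -> ~18x
[/CODE]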


PDL said:
Q2: Why aren't we seeing OCs in the neighborhood of 50%+ on the P4 chips? Are there other components that cannot handle the overall speed? Or is it just the design of the CPU that means the upper limit is reached at a lower percentage?
Because Intel hasn't seen fit to sell a 1.8E or the like. Back in the days of the C1 stepping, a 1.8A's core could run over 3GHz, making for big OCs. Honestly, when Intel's top speed grades are in the 3.4-3.6GHz range, how much of an OC would you expect a 3.2 to deliver?
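A quick back-of-the-envelope makes the point. Assuming, purely for illustration, that the silicon tops out around 3.6GHz no matter what the stock clock is:

[CODE]
# If every chip off the line hits roughly the same ceiling, the percentage
# overclock is just a function of how low the stock clock starts.
CEILING_GHZ = 3.6                          # assumed silicon limit, not a measurement
for stock_ghz in (1.8, 2.4, 3.2):
    headroom = 100 * (CEILING_GHZ - stock_ghz) / stock_ghz
    print(f"{stock_ghz}GHz stock -> {CEILING_GHZ}GHz = {headroom:.1f}% OC")
# 1.8GHz -> 100.0%, 2.4GHz -> 50.0%, 3.2GHz -> 12.5%
[/CODE]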


PDL said:
And Q3: Besides the lower bandwidth on my Celeron D due to the lower FSB, what other areas am I losing out on compared to the P4 units? Ah yes, I also realize there's a performance loss due to the smaller cache. What else?
Seems odd to me that I can crank this up to 3.7GHz and, while it is fast, it is not what I would call 'twice' as fast as my TBred running at 2GHz and a 185MHz FSB.

Thanks for the education all! :D
You don't lose this part or that function; you lose the overall CPU effectiveness that results from the loss of this part or that one. Prescott-core CPUs depend on a large cache to minimize the impact of their internal architecture, and as such are going to lose a bunch of speed as you reduce the L2 from a just-adequate 1MB to 256K.

Exactly how much you lose depends on the exact measure of performance. Some applications are positively destroyed by the lack of cache, while others are largely unaffected. But when pondering why the system is unimpressive in action despite all that clock speed, one would do well to consider the possibility that the lack of cache does indeed have serious consequences.
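For a rough sense of the mechanism (the numbers below are made up for illustration, not Prescott measurements), the cost of a smaller L2 shows up as a higher miss rate multiplied by the very long trip to main memory:

[CODE]
# Average cycles per memory access = L2 hit cost + miss rate * penalty to RAM.
# Shrinking the cache mostly raises the miss rate, and the RAM penalty is huge;
# a workload that fits in cache either way barely notices the difference.
def avg_access_cycles(l2_hit_cycles, l2_miss_rate, ram_penalty_cycles):
    return l2_hit_cycles + l2_miss_rate * ram_penalty_cycles

print(avg_access_cycles(20, 0.05, 400))   # hypothetical 1MB L2,   5% misses -> 40 cycles
print(avg_access_cycles(20, 0.15, 400))   # hypothetical 256K L2, 15% misses -> 80 cycles
[/CODE]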
 
OK, thanks larva!

I figured out the math about an hour after I posted this thread. I'm not as sharp as I used to be I guess!

Anyway, I did not realize the impact the cache has on the Prescott core. One meg is 'just adequate', huh? That must be the 'missing' performance that I would have expected.
 
Let me qualify these comments with the following: I am not a cpu designer. I have designed cpus in simulation but that's a long way from the expertise embodied by a modern PC processor. I can make fairly educated guesses that correlate well with measured application performance.

Having said that, it is my opinion that 1MB on a Prescott is "adequate" in the same way 512KB is adequate for a Northwood. Northwoods obviously improve with more cache, as the effectiveness of the L3 addition of the EE proves. And I feel that in similar fashion Prescotts would improve with a cache larger than 1MB.

The pipeline is very long in a Prescott, making errors in branch prediction costly. This is also true of Northwood, but to a lesser extent. This (slight) difference in architecture accounts for the increased sensitivity Prescott shows towards cache size.
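A minimal sketch of that sensitivity (all rates below are assumed for illustration; Northwood and Prescott pipelines are roughly 20 and 31 stages):

[CODE]
# Effective CPI = base CPI + branch frequency * mispredict rate * flush penalty.
# A longer pipeline means a bigger flush (and a longer stall relative to the work
# in flight) for exactly the same mispredict or miss rate.
def effective_cpi(base_cpi, branch_freq, mispredict_rate, pipeline_stages):
    return base_cpi + branch_freq * mispredict_rate * pipeline_stages

print(effective_cpi(1.0, 0.2, 0.05, 20))  # Northwood-ish depth -> ~1.20
print(effective_cpi(1.0, 0.2, 0.05, 31))  # Prescott-ish depth  -> ~1.31
[/CODE]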
 