Principles of Overclocking


There’s an unfortunate article entitled Has AMD Castrated Overclocking? out there.

In one sense, it is unfortunate because the author doesn’t seem to know many of the core principles of overclocking.

It is more unfortunate because the underlying attitudes and beliefs behind the statements are so common these days. I’m not out to pick on the person because of what he said, but I’m using this as an example to show how to approach overclocking better.

Overclocking: A Thinking Man’s Game

A large (and I think growing) percentage of overclockers react to theory and thinking like a rabid dog reacts to water. They don’t want to hear it; they want to do it.

To them, trial and error is the only way to learn. Sorry, but that’s the only way dumb people learn.

Sometimes, trial-and-error is unavoidable (fine-tuning a particular machine is an example of this), but it is always better to learn from the experiences of others first, for much the same reason it is better to learn about the effects of gravity from the experiences of bridge-jumpers than by jumping yourself.

There are two reasons why people of otherwise normal intelligence are dumb about this. There is ignorance (people just don’t know) and there is stupidity (people just don’t want to know; stupidity is just ignorance with attitude).

The reality is that overclocking is governed by certain general principles. They’re much like gravity: they work on you whether you know about them or not.

The key to intelligent overclocking is to see how the general principles apply to a particular situation. This requires somebody doing some thinking about it to establish very general parameters, then somebody doing some testing to verify the thinking and see more precisely how the general principles apply.

But you can’t apply principles to a situation unless you know them. If you don’t know them, everything is a surprise, including many things that ought not to be.

Principles of Overclocking

Here are the basic core principles behind overclocking:

  • CPU manufacturers make their products using technologies that have certain inherent limitations governed by physical laws. When those limitations are reached, it is time for new technologies.
  • The maximum potential of a particular technology is rarely reached the first time it is tried. As time goes on, tweaks are made to the technology so that it can reach its maximum potential. However, no particular set of technologies can be tweaked indefinitely. Once a certain point is reached, further significant improvement becomes impossible, and further advancement requires new technologies.
  • To ensure a negligible percentage of product failures, CPU manufacturers aim towards making the vast majority of their CPUs capable of running under normal working conditions at speeds which are near, at, or above their highest rated CPU. They don’t always succeed in this, but that is their goal, and one they usually meet sooner or later.
  • CPU manufacturers generally rate most of their CPUs at speeds which are somewhat to considerably lower than the CPU’s maximum potential speed. Generally, the lower the rated speed for a given technology, the higher the potential overclock.
  • The level at which a CPU can operate can be modified to some degree by changes in its working or physical environment. Modest changes usually yield modest results. Major changes can yield bigger results, at the price of much bigger effort/cost.

    Taking What The Defense Gives You

    This may bruise some egos, but most typical overclocking gains are just a matter of taking what the defense gives you. In this case, both Intel and AMD generally sell at least some processors that can run significantly faster than the speed at which they are rated. That’s what the “defense” gives you, and that’s what overclockers exploit.

    Most normal overclocking gains (especially big ones) have nothing to do with anything you do to the CPU.

    Outside of extreme environmental changes (i.e., freezing), user changes almost always have a limited impact on performance. They aren’t often insignificant, but they are rarely major.

    Therefore, intelligent overclocking is 80% or more think-work. You read the intelligence reports and the results the scouts hand in, you pick your spot where the defense is weakest, and you go for it.

    That’s where you should put in most of your effort. That’s the part for which you deserve credit.

    If you don’t do that, your success or failure depends on whether anyone you listened to was part of a chain leading back to somebody who did do the work, or else on dumb luck or the lack thereof (they call it that for a reason, you know).

    Let’s take these principles and apply them to the article and see what we learn.


    Applying Principles, Part I

    The gentleman in the article has two core objections to his Athlon 64 experience:

  • He didn’t reach a high overclocked speed.
  • He didn’t get a high percentage overclock.

    (I’m not repeating myself; as you’ll see, there’s a subtle difference between the two.)

    Let’s look at each of these separately:

    He Didn’t Reach A High Overclocked Speed

    This gentleman believes that “AMD is not allowing much room to the ‘push for more’ world.” He believes that AMD is somehow limiting the ability of the processor to overclock.

    The gentleman apparently doesn’t know principle one, the part about technologies having certain inherent limitations governed by physical laws.

    These are 130nm processors. As the Athlon XP showed, under default conditions, around 2.4GHz is the limit for this kind of technology. If you look at AMD’s roadmaps, they don’t expect more than 2.4GHz out of 130nm technology for FXs/A64s, either.

    If you ask, “But why does the PIV run at over 3GHz at 130nm?”, you still don’t understand what AMD has been trying to tell you about the “Megahertz Myth” for at least the last year. The PIV is designed to do less work per cycle, more often; 2.4GHz from a PIV is worth much less than 2.4GHz from a Hammer.

    If AMD could make a 3GHz 130nm FX/A64, they would be making one (and kicking Intel’s ass all around the California freeways with it). The reason why they don’t is that they can’t. That’s not AMD’s fault; Intel couldn’t make a 3GHz processor with Hammer’s design using 130nm technology, either.

    Yes, AMD is doing a couple things to make the process of overclocking more difficult, but if you removed every single one of them, you’re still not going to hit 3GHz with one. The technology has limitations.

    Someday, AMD will make a 3GHz Hammer, but that will take a new technology for them (and us) to reach that speed: 90nm technology. Not until then.


    I’m Surprised You’re Surprised

    The gentleman contrasted the overclocking performance of his A64 with that of his PIV:

    “Intel has surprisingly done quite well in this market with its P4 processors too. The Pentium 4 2.4c GHz processors overclocked VERY well. Consumers were able, without much effort, to push the 2.4c up to an amazingly 3.5GHz: a full 1.1GHz overclock, mind you. And it was introduced at a price under $200.00 USD, and came equipped with Hyper-Threading with an 800MHz Front Side Bus.”

    If the gentleman knew about the principles of overclocking, he wouldn’t have been surprised at all.

    It’s not surprising at all that the 2.4C is very overclockable, simply because principles two, three and four tell us it ought to be.

    It is the lowest speed-rated PIV of its kind, with the top speed-rated one being 33% faster than it. All 2.4Cs use a D-1 stepping, which is a later (and maybe last) version of the Northwood version of PIVs. D-1 stepping chips generally can hit 3.2GHz without even a voltage increase (or just a slight one).

    Even if you had never heard or seen anything about its performance, if you followed the principles of overclocking, you would have known that this was a prime overclocking candidate.

    However, comparing the degree of overclocking of the top speed-rated A64 to that of a 2.4C is comparing apples to oranges. If you were really concerned about the relative degree of overclocking, the comparison ought to be between the A64 and a 3.2GHz PIV. They’re both top speed-rated chips reaching the limits of mature technology. The principles of overclocking tell us that neither of these chips should overclock a whole lot, and guess what? They don’t.

    Where the FX/A64 differs from most earlier generations of processors is that AMD had to introduce these late-arriving processors much higher on the potential speed scale than the norm. A normal release pattern for the FX/A64 would have looked much like the Opteron ramp: they started with 1.4, 1.6 and 1.8GHz, and are working their way up to 2.4GHz.

    AMD couldn’t release the FX/A64 that way because 130nm Hammers are big and cost a lot to make, and AMD would hardly have gotten any money for a 1.4GHz FX/A64 given AthlonXP pricing. So they left the slow speeds for Opterons, and are only selling fast, expensive FXs/A64s.

    If Intel introduced the Northwood a year late starting at 2.8GHz, and no lower speed than that, you’d have the exact same situation for the exact same reason. It’s the abbreviated rampup starting from a relatively high position, and not selling lower-rated chips that makes the FX/A64 relatively “unoverclockable,” not any technical throttling by AMD.

    It’s Not Percentage, It’s Price

    It is a common error to be impressed by the percentage of an overclock. It may seem to be a good idea, but even when it seems to be right, it’s right for the wrong reason.

    What an overclocker needs to consider is the performance he gets for a given price.

    Let’s take a typical FX-51. It runs at 2.2GHz. Presume somebody takes one and gets it up to 2.4GHz. That’s about a 10% overclock.

    Let’s say that the day after he does this, AMD comes out with a 1.6GHz FX. It can reach 2.4GHz, too. OMG, that’s a 50% overclock!

    Which is the better chip for an overclocker?

    If you think it’s the second, you answered too quickly. Performance-wise, there’s no difference. They’re the same chip. There’s one thing you don’t know yet: the price of the two.

    If the 1.6GHz is cheaper, then it is better. If the 1.6 and the 2.2 cost the same, there’s no reason to choose between the two. If the 1.6 happened to cost more, the 2.2 would be the better chip.

    It is price/performance that matters, not percentages.

    I grant you, almost all the time, the lower-rated chip will overclock more and cost less, but it is the lower cost that makes it a better chip, not the percentage of overclocking.
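The price/performance point can be sketched in a few lines of code. The prices below are purely hypothetical, invented for illustration; only the clock speeds come from the example in the text.

```python
# Two hypothetical FX chips that both top out at 2.4GHz.
# Prices are invented for illustration only.
chips = {
    "FX 1.6GHz (hypothetical)": {"rated_mhz": 1600, "max_mhz": 2400, "price_usd": 350},
    "FX-51 2.2GHz":             {"rated_mhz": 2200, "max_mhz": 2400, "price_usd": 730},
}

def overclock_pct(chip):
    """Percentage gain over the rated speed (the impressive-looking number)."""
    return 100.0 * (chip["max_mhz"] - chip["rated_mhz"]) / chip["rated_mhz"]

def dollars_per_final_mhz(chip):
    """What actually matters: price divided by the speed you end up at."""
    return chip["price_usd"] / chip["max_mhz"]

for name, chip in chips.items():
    print(f"{name}: {overclock_pct(chip):.0f}% overclock, "
          f"${dollars_per_final_mhz(chip):.3f} per final MHz")
```

Both chips land at the same 2400MHz, so the 50% vs. 9% overclock figures tell you nothing about performance; only the price column distinguishes them.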

    Here’s another example:

    Let’s say I’m out to buy a new computer. I’m given a choice between an FX-51 2.2GHz system which will only overclock to 2.4GHz, or I can get a PIV 2.4C system which can get up to 3.4GHz. Let’s pretend they cost the same.

    Which do I buy?

    I buy the FX-51 system in a flash. I could not care less that it can overclock only 10% as opposed to the PIV’s 40%, simply because the FX-51 at 110% outperforms the PIV at 140%. I’m out for price and performance, not percentages.
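The reason the smaller percentage wins can be sketched numerically: clock speed only counts multiplied by how much work is done per clock. The work-per-clock figures below are rough illustrative assumptions (Hammer does substantially more per cycle than a PIV), not measured values.

```python
def effective_perf(clock_ghz, work_per_clock):
    # A crude throughput score: frequency times work done per cycle.
    return clock_ghz * work_per_clock

# Assumed relative work-per-clock, for illustration only:
# the PIV is normalized to 1.0, the Hammer assumed ~1.5x per cycle.
fx_at_110pct = effective_perf(2.4, 1.5)   # FX-51, a ~10% overclock
p4_at_140pct = effective_perf(3.4, 1.0)   # PIV 2.4C, a ~42% overclock

print(f"FX score {fx_at_110pct:.1f} vs PIV score {p4_at_140pct:.1f}")
```

Under these assumptions, the 10% overclock beats the 42% one, which is the whole point: percentages are meaningless until you multiply them through the architecture.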


    Percentage Does Not Equal Personal Achievement

    You get the sense in that article that the author equates a big overclock with a big personal achievement:

    “They’re just not capable of giving us something to feel a sense of achievement about, at least for those like myself.”

    Let’s go back to that hypothetical 1.6GHz FX. As we said, it’s the same processor as the 2.2GHz; AMD just gave it a lower multiplier.

    I suggest that taking a 1.6GHz FX to 2.4GHz is no more of a personal achievement than taking a 2.2GHz to 2.4GHz. If anyone deserves credit for the difference in overclocking percentage, it ought to be AMD and Intel for selling you such a thing.

    The only credit you can really claim under those circumstances is having enough brains to figure out ahead of time that you could get the same results from (in the real world) a much cheaper chip, and that credit is due to your (or somebody else’s) thinking, not your doing.

    Let’s use the real-life 2.4C PIV. If Bozo the Clown can get one of those to run at 3.2GHz by changing just one or two BIOS settings, getting 3.3 or 3.4 is hardly a stupendous achievement. It’s like being airlifted to the 28,000-foot level, then climbing the rest of Mt. Everest. It’s not quite the same as climbing from the bottom.

    Taking what the defense gives you is smart, not grand. If you want to be a great overclocker, you need to take what the defense won’t give you. That’s where the personal achievement from doing rather than thinking comes in. Get an FX, any speed up to 2.8GHz, and now, maybe, we can talk about achievement.


    Thanksgiving will come to the U.S. in a few days. There will be those who will slave over ovens and make everything from scratch. There will be others who’ll just buy a cooked meal from the store and heat it up.

    Which group has the right to claim more credit for personal achievement?

    People who buy ready-made extreme cooling systems like Prometeias are sort of like people who buy their Thanksgiving meal from the store. If all they do is put the thing together and fire it up, it hardly involves the same level of effort as putting together a system from scratch.

    On the other hand, both the meal and Prometeia buyer end up with a professionally-done product. If a lousy cook spends three days cooking Thanksgiving dinner, it’s still a lousy meal. Maybe an “A” for effort, but you can’t eat effort (or get more FPS from it).

    The point of this is not to say which is better. It’s more a matter of not taking personal credit when it really isn’t due. There’s nothing wrong with buying a great meal, just don’t take the credit for cooking it.

    One also ought to remember that the real point of Thanksgiving dinner is not how you got there, but what you end up with.

    The Real Point of Overclocking

    There are two types of overclockers. There are those (a relative handful) who are out to get the biggest bang, just to push the envelope. They put in a lot of time, effort and money to do just that.

    Then there are those (the rest) who are out to get the biggest bang for whatever time/money/effort they decide to expend on that pursuit.

    There’s nothing wrong with either approach. Both are legitimate hobbies, and no one should look down on another who has goals more modest than theirs. Except . . . .

    . . . when you have those in category two who want to make themselves look like they’re in category one.

    It’s like mountain climbing. There are those who climb Mount Everest by themselves. There are those who spend weekends rock-climbing. Both are enjoyable hobbies, but it is pretentious to climb up a bunch of rocks (or drive up a mountain that has a road to the top) and act like you’re one of the Everest climbers.

    Enjoy your hobby for what it is, and remember, a computer is a tool, a means to an end. The end result is not making a computer, but using it.
