New Video Cards, Old Rules

We’ll probably see reviews of the latest ATI cards tomorrow.

That will be the beginning, not the end, of the next battle between ATI and nVidia.

The real battle, at least for this audience, won’t occur between the high-end champions, but in the trenches between the mainstream cards. It will be fought with software and hacked BIOSes, and when the going gets tough, the tough will reach for their (soldering) guns.

For overclockers, it’s never a matter of what the manufacturer will hand you, but rather what you can take, and the winner of that battle by no means has to come from the same company that wins the Battle of the High-End Defaults.

From indirect indications, it’s going to be close between the two top-enders, so there won’t be a clear-cut favorite in the overclocker wars.

Three Elements

There are three main hardware elements that drive video card performance. They are:

The GPU

Being the main engine of the video card, it's hardly surprising that overclocking it improves performance. Oddly enough, though, overclocking the GPU is usually the least of your worries when overclocking a video card. It is rarely the main bottleneck in an overclock, and it represents the easiest opportunity.

This is pretty much the same reason why CPUs range from fairly to very overclockable. Manufacturers shoot for a high standard so they can get at least enough chips for their high-end products. If they do a good job of that, most of their chips end up being as capable, or almost as capable, as their high-end chips.

So if a video card maker is making 450MHz GPUs for the high-end, and running the GPU on a lower-priced card at 300MHz, odds are that 300MHz chip can be overclocked most of the way to 450MHz.

However, just like a car is not just an engine, a video card is not just its GPU. A powerful engine alone doesn't make a fast car; it needs fuel to run, and a good transmission to get the power to the wheels. If you don't have those, there's a bottleneck.

For a video card, the fuel is:

Memory

The speed of memory chips is often overlooked when evaluating a video card. This is unfortunate because this usually ends up being the killer bottleneck.

Many also unfortunately think that they can overclock memory to the same degree they can overclock a GPU. Sorry, but it isn’t so. While memory can be overclocked a bit, it is rare to get and unrealistic to expect an overclock of more than about 15%.
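Putting the article's own illustrative numbers together (a 300MHz GPU built from the same silicon as a 450MHz high-end part, and memory that rarely overclocks more than about 15%), a quick back-of-the-envelope sketch shows why the memory, not the GPU, usually ends up being the limit. The figures here are purely hypothetical examples, not measurements of any particular card:

```python
# Illustrative numbers only, taken from the examples in this article.
gpu_stock = 300        # MHz, the lower-priced card's GPU clock
gpu_highend = 450      # MHz, the high-end part built from the same chips
mem_headroom = 0.15    # memory rarely overclocks more than ~15%

# Potential GPU headroom if the chip can go most of the way
# to the high-end clock speed.
gpu_headroom = (gpu_highend - gpu_stock) / gpu_stock
print(f"GPU headroom: up to {gpu_headroom:.0%}")      # up to 50%
print(f"Memory headroom: about {mem_headroom:.0%}")   # about 15%

# If performance is memory-limited, the card only scales as far as
# the smaller of the two overclocks allows.
effective = min(gpu_headroom, mem_headroom)
print(f"Effective gain when memory-limited: ~{effective:.0%}")
```

The GPU may have 50% headroom on paper, but when the memory tops out at 15%, that is roughly all the real-world scaling you can expect.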

If you're serious about this, you always need to verify what kind of memory chips you're getting with your particular model video card. Sometimes, video card makers will slip in memory chips that are faster than the norm. That's good. Sometimes, rather more often, video card makers will slip in memory chips that are slower, sometimes much slower, than the norm, and sell the card for a bit less.

You can have the engine, you can have the fuel, but it doesn't do as much good as it could if you can't get the power to the wheels. This brings us to the third major performance factor:

Pipelines

In the last year or so, video card manufacturers have taken a different route to cut the performance of their lower-cost video cards. They cut the number of pipelines that feed the beast, or cut the width of the memory path in half.

We have discovered in the past year that there are cuts, and there are cuts. Some can’t be reversed at all. Some can be reversed only with a soldering gun, while others just need a different BIOS “soldered” onto the card.

Nor is undoing these cuts, literally or virtually, always successful (though at least a good chunk of the "failure" rate from such operations appears to be due to inadequate research by some into the particular cards being modified).

The next generation of video cards may provide yet another possible bottleneck:

Power Plugs

Why would power plugs become a bottleneck for video cards? The answer is quite simple. If you truly need two power plugs to reach the level of performance achieved by the top-end card, and the model you have only provides one, you aren’t going to make it.

For instance, if this graph is the real deal, there is little fundamental difference between the GeForce 6800 Ultra and the 6800GT. The GT just runs a bit slower, and under normal circumstances, an overclocker could reasonably expect to match the Ultra with a GT.

However, if you really need two plugs to achieve Ultra performance (and it's hard to imagine nVidia requiring an additional power line if they could have at all avoided it), that's as good a crippler as cutting the pipelines.

On the other hand, it's understandable to be skeptical about the Ultra really needing two power lines when the GT runs at 90% of the Ultra's speed with just one. It's also possible that some mods could make up the difference.

All we’re saying is that this is a potential problem.

The Biggest Challenge of Them All

Finally, we have the biggest overall bottleneck of all when it comes to doing this. It’s not the equipment.

It’s the people (well, some of them).

When Trial Becomes An Error

As you can see, there are a lot of variables here, and I don’t know (or for that matter care) who the winner in this neck of the woods will end up being.

It does appear, though, that this generation of video cards will offer rather more over the previous generation than we've seen for a couple of years.

For sure, though, whichever card ends up best will end up best due to the factors described above.

There will be some experimentation necessary before we start getting an idea, but let me say a few things about that.

Overclocking is more than an activity, it's an approach. Just like most football games are really won by a better game plan, most of the work of successful overclocking occurs before, not after, purchase. What happens afterwards in both cases is just executing the game plan.

Some people think that trial-and-error is the best or even only way to learn. That's correct in the same sense that if your teacher smashed you in the face with a 2x4 every time you made a mistake in class, you'd probably never forget it. Unless you're really thick, there are better ways.

Trial-and-error is like suicide bombing. You only do it when you have nothing better handy, when there is no other way to achieve your goal. It’s hardly a good choice when you have a choice.

If you’re one of the first to get something new, that’s one thing (and even there, doing all the scouting you can helps). But if you’re not, and you just buy without looking into what you’re getting into, you’re not a pioneer; you’re just lazy.

I see this time and time again in forums. I read a thread, and it's obvious early on that something isn't going to work, but long after that point, people keep buying and trying and failing.

That’s not trial-and-error, that’s foolishness.

Yes, research can be time-consuming and boring, but so what? Sending products back isn't? Wasting hours trying to do something that others have already found can't be done isn't?

Anything worth doing is going to take effort, sometimes a lot, and the best way to do something is to be prepared for it.

In the years I’ve been doing this, the number one reason why people screw up is laziness. They don’t want to do the homework, they don’t want to take the time to understand what they’re getting themselves into, they want to do before they’re ready to do.

Overclocking is always to some degree a leap into the unknown, but there’s a big difference between something being unknown to anyone, and something being unknown to you, between “No one has ever done this before,” and “I haven’t done this before.”

There are enough unavoidable unknowns in overclocking without adding the avoidable ones to the list.

Ed
