Maybe gaming benchmarks need their own calibration, too. – Ed
There’s a rather extensive compilation of how well a number of sub-$200 video cards perform under a variety of games and CPU speeds.
It’s the CPU scaling that’s the interesting part of the article. Whether the authors realized it or not, the software is being tested just as much as the hardware.
Different software engines have different performance curves and are affected differently by configuration changes. Some are sensitive to CPU power; some aren’t. It wasn’t tested here, but other evidence shows that memory bandwidth can play a role, too.
What I think this says is that maybe we need to rethink how game benchmarks are used in certain contexts.
For instance, if X benchmark is heavily video card dependent, that is probably a great benchmark for video card testing, but a lousy benchmark for CPU testing.
If X game is heavily dependent on CPU power, certainly use it in a CPU comparison, but not for general video card comparisons.
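The CPU-bound versus video-card-bound distinction above can be put in rough numbers. Here’s a minimal sketch of the idea, using entirely hypothetical frame rates (not figures from the article): measure a game at several CPU clock speeds, see how much of the clock gain shows up as a frame-rate gain, and classify the benchmark accordingly. The function names and the 0.5 threshold are my own illustrative choices.

```python
def cpu_scaling_factor(clocks_mhz, fps):
    """Ratio of relative fps gain to relative clock gain between the
    slowest and fastest CPU tested (1.0 = perfect CPU scaling, 0.0 = none)."""
    clock_gain = clocks_mhz[-1] / clocks_mhz[0] - 1.0
    fps_gain = fps[-1] / fps[0] - 1.0
    return fps_gain / clock_gain

def classify(factor, threshold=0.5):
    # Arbitrary cutoff for illustration: strong scalers make good CPU
    # benchmarks, weak scalers make good video card benchmarks.
    return "CPU-bound" if factor >= threshold else "video-card-bound"

# Hypothetical results for two engines tested at 1000/1500/2000 MHz:
clocks = [1000, 1500, 2000]
engine_a = [60.0, 85.0, 110.0]   # fps keeps climbing with clock speed
engine_b = [75.0, 78.0, 79.0]    # fps barely moves

for name, fps in [("engine A", engine_a), ("engine B", engine_b)]:
    f = cpu_scaling_factor(clocks, fps)
    print(f"{name}: scaling factor {f:.2f} -> {classify(f)}")
```

A real version would average several runs per clock speed and test at more than one resolution, since a game can be CPU-bound at 640x480 and video-card-bound at higher resolutions.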
Finally, maybe we need another type of benchmarking, one that is good only unto itself. Instead of using eight pieces of software to test one or two pieces of hardware, use eight pieces of hardware to test one program.
Now I don’t know how monomaniacal folks are about any particular game, or whether such testing would be of true value to many, but if this article tells you anything, it tells you that each piece (or at least clan) of software has its own performance signature, and maybe it would be useful to map this out more.