Sometime soon, we’ll have representatives from all three DDR chipset makers to test.
Whom would you like to win?
I’ll tell you right now, from what I see, I could make any one of them beat any other. When it’s just a matter of a couple of percentage points, I’m sure I can come up with enough in my bag of tricks to overcome any slight edge one particular piece of equipment has over another.
When that’s the case, what good is a benchmark when you don’t know exactly what was done to a particular system?
As I pointed out on the front page, we’ve got a pretty huge discrepancy between the same test being run by two different websites. No doubt somebody did something
wrong or wacky, or a particular piece of equipment had a huge problem. We’ve already spoken about how increased cooperation can improve this situation.
Right now, though, I’m more concerned with smaller differences.
Is Everybody Corrupt?
There seems to be a widespread belief that some, most, or all review sites are corrupt or biased, or whatever bad word you prefer to use.
I don’t know if that’s true or not; I really doubt it’s as true as people think it is.
However, current, wildly nonstandardized methodologies leave everybody open to that charge, whether it’s true or not.
Fortunately, whether it’s apples and oranges, or money under the table, the solution is the same: full disclosure.
It’s probably too much to expect any substantial number of websites to agree to anything, but it’s not unreasonable to state all the conditions under which a piece of equipment is tested.
My Tweak May Not Be Your Tweak
I find it likely that running the exact same setup across different platforms may prove unfair to one of the items being compared: one piece of equipment may do better with one set of tweaks, another piece with a different set.
Even two representatives of the same piece of equipment may require different tweaks for optimal performance, so what may work for a review site may not work for you, or vice versa.
Nonetheless, results would be more comparable if we knew precisely what was done to a particular system. That’s especially important now simply because the differences are so close.
Fewer, More Thorough Benchmarks
I think, for a while at least, when we do a motherboard or processor test, we’re going to test under a variety of tweaked conditions. That’s been done here and there, but it might be useful (especially
across different motherboard chipset platforms) to see what works and what doesn’t. We’ll probably start off with the tweaks anybody can do, and finish up with tweaks that require more effort and/or cash to use.
We’ll probably also test at various FSB speeds, say 133, 140, and 150MHz, and see what that does.
We’ll probably limit benchmarks to just a handful: one office-based, one creative-based, and one or two gaming, and see what works and what doesn’t in those areas. No doubt we’ll find some tweaks that just don’t do some areas much good, which is fine. Knowing what not to do is just as important as knowing what to do.
The goal is to get a better idea of how systems work under a variety of conditions, what generally improves performance in a given realm, and what doesn’t.
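The plan above amounts to sweeping a small test matrix and recording every condition alongside every result, which is exactly what full disclosure requires. A minimal sketch of that idea in Python; all the names, tweak labels, and values here are illustrative assumptions, not an actual test harness:

```python
from itertools import product

# Hypothetical test matrix (values taken from the plan above; labels are made up).
fsb_speeds_mhz = [133, 140, 150]                         # FSB settings to sweep
tweak_levels = ["stock", "free_tweaks", "costly_tweaks"]  # from easy to expensive
benchmarks = ["office", "creative", "gaming"]             # one or two per realm

def run_benchmark(fsb_mhz, tweaks, bench):
    """Placeholder for configuring the board and running one test.

    The point is the returned record: every condition travels with the score,
    so two sites (or two readers) can compare results on the same footing.
    """
    return {"fsb_mhz": fsb_mhz, "tweaks": tweaks, "benchmark": bench,
            "score": None}  # score would come from the real benchmark run

# Enumerate every combination of conditions: 3 FSBs x 3 tweak levels x 3 benchmarks.
results = [run_benchmark(f, t, b)
           for f, t, b in product(fsb_speeds_mhz, tweak_levels, benchmarks)]
```

Even a simple record like this, published with each review, would let readers see what was done to a system rather than guessing at it.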
I’m not saying everybody should take just this approach, but it would be useful for websites to state just what was done to a system in a particular test. If nothing else, it would be very educational to newcomers. It would also give people more of a basis for comparing different findings from different sites, and that can only be good.