KT266A

OK, Via lent out some reference boards with the KT266A.

The results are disturbing.

The only thing the various sites all agreed on is that memory performance is significantly better with the KT266A than with other DDR platforms (though just how much better varied more than a bit).

As we’ve said again and again and again, memory performance is a bad benchmark, since improvements in it have nowhere near a proportionate effect on overall system performance. Modern CPU architectures do just about anything possible to avoid having to go to main memory.
The current PIV got away from that notion, and Intel is remedying that mistake in Northwood.

To illustrate this, let’s take the average increase in memory performance between the KT266A and the SiS 735 reported by three websites (Anandtech, HardOCP, and XBit Labs) and compare it to the average increase in performance in some commonly used application- and game-based benchmarks.

These three sites used almost identical hardware to test: a 1.4GHz TBird at 133MHz FSB, 256MB of CAS2 DDR, a high-end IBM/WD hard drive, a GF3 card, and Win2K SP2.

[Chart: average memory benchmark gain vs. average application benchmark gain, KT266A over SiS 735]

You can easily see that a big improvement in memory scores doesn’t translate to anywhere near the same degree of improvement in real programs.
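
If you want to see the arithmetic behind those averages, here’s a minimal sketch in Python. The per-site gains below are placeholders, not the sites’ actual numbers (those are in the chart above); the point is just the mechanics of averaging percentage improvements.

```python
# Per-site percentage gain of the KT266A over the SiS 735, averaged
# across the three reviews. These figures are PLACEHOLDERS for
# illustration, not the actual review numbers.
memory_gains = {"Anandtech": 0.18, "HardOCP": 0.12, "XBit Labs": 0.15}
app_gains = {"Anandtech": 0.03, "HardOCP": 0.01, "XBit Labs": 0.02}

def average(gains):
    return sum(gains.values()) / len(gains)

print(f"Average memory gain: {average(memory_gains):.1%}")    # 15.0%
print(f"Average application gain: {average(app_gains):.1%}")  # 2.0%
```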

What you don’t see in those averages is how much the actual scores differed from each other. That’s what the rest of this article is about.

If you look at the other benchmark numbers, you really can’t draw any firm conclusions other than that the KT266A overall tends to do a little bit better than the current prime contender, the SiS 735. The reason is that the numbers just aren’t consistent. Take those three reviews, and there is not one case where all parties came up with similar numbers.

Raw scores vary too much

Let’s look at the raw scores from the three websites (all numbers are from articles by Anandtech, HardOCP, and XBit Labs).

Again, this is using practically identical equipment.

Business Winstone

[Chart: Business Winstone scores from the three sites]

There’s a 13% difference between the high and low scores, and 4% between Anandtech and HardOCP. At a time when sites are heralding 2% differences as a big deal, that’s more than significant.
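
For anyone who wants to check spreads like these, the arithmetic is just the gap between the highest and lowest score expressed as a percentage of the lowest. A quick Python sketch, using placeholder scores (picked to reproduce the 13% figure, not the sites’ actual Winstone numbers):

```python
# Percentage spread between the highest and lowest score reported for
# the same board on the same benchmark. Placeholder scores, not the
# sites' actual Business Winstone results.
scores = {"Anandtech": 52.0, "HardOCP": 50.0, "XBit Labs": 46.0}

high, low = max(scores.values()), min(scores.values())
spread = (high - low) / low
print(f"High-to-low spread: {spread:.0%}")  # 13% with these placeholders
```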

Content Creation

[Chart: Content Creation Winstone scores from the three sites]

Almost a 10% gap here, about 5% between HardOCP and Anandtech, with HardOCP showing the higher score.

SysMark 2001 Overall

[Chart: SysMark 2001 overall scores from the three sites]

Another 10% gap, about 2.5% between HardOCP and Anandtech.

You might say this simply represents differences in installation, and to some degree no doubt you’re right. Seems to make a hell of a lot of difference, though, doesn’t it?

If that were the case, though, one would then expect to see some consistency in results when looking at different items. While that looks to be true for XBit Labs (their scores are consistently lower than the other two), it’s not true when looking at scores from the other two.

Let’s see the differences in scores between HardOCP and Anandtech when looking at the ECS K7S5A board:

[Chart: score differences between HardOCP and Anandtech on the ECS K7S5A]

You keep seeing this. If one place always had higher numbers than the other, then you could point with some confidence to some difference in installation. They don’t.

So Do The Comparisons

Let’s look at the differences found by the three websites when comparing the KT266A to the SiS 735. As noted above, Anandtech and HardOCP are using the same motherboard, the ECS K7S5A. XBit Labs is using an ECS reference board, which is a little different, but as you’ll see, their results usually match those of one of the other two sites.

[Chart: KT266A vs. SiS 735 comparison no. 1]

[Chart: KT266A vs. SiS 735 comparison no. 2]

[Chart: KT266A vs. SiS 735 comparison no. 3]

As you can see, there’s always an odd man out, and it’s a different one every time.

Why?

There are probably a lot of possible reasons interacting with each other.

I’ve already mentioned differences in installation. In all likelihood, XBit Labs is not doing something the others are.

However, if it turns out that one style of installation helps one benchmark and another style benefits another, this rather (unintentionally) biases matters, now doesn’t it?

Benchmarks can certainly have a margin of error.

Given the near-identical equipment, you have to start wondering whether different hardware items might account for some of the difference.

To complicate matters further, these factors may not exist in isolation. By that I mean particular item A and particular item B don’t cause a difference on their own, but the two together do.

Whatever it is, or they are, the differences lead one to believe that the margin of error in these tests is likely higher than we’ve previously thought.

If the margin of error is +/- 5% for one board, and you’re comparing two boards, the margin of error in the comparison could be as much as +/- 10%. That’s more than the differences being bandied about when crowning or dethroning these chipset champions, isn’t it?
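
To see why the margins compound, run the worst case: one board’s score lands at the top of its error band while the other lands at the bottom. A quick sketch, assuming a made-up “true” score of 100 and a +/- 5% margin per measurement:

```python
# Worst-case error compounding: two boards that perform identically can
# look roughly 10% apart if one measurement lands high and the other low.
# The "true" score and the margin are assumptions for illustration.
true_score = 100.0
margin = 0.05  # assume each individual benchmark run is good to +/- 5%

board_a = true_score * (1 + margin)  # measured high: 105
board_b = true_score * (1 - margin)  # measured low:   95

apparent_gap = (board_a - board_b) / board_b
print(f"Apparent lead of A over B: {apparent_gap:.1%}")  # ~10.5%
```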

In this particular case, since just about all the data shows the Via doing better, it’s probably safe to say the Via is better.

But how much better? If you’ve settled on a benchmark that’s “good” for you, and one place tells you 2% and another tells you 10%, just what are you supposed to think? If you like the features of Board A over Board B, you’ll probably swallow a 2% difference easily. 10% is not so easy to swallow. What do you do?

I’m not blaming the sites themselves; I’m sure that if we tested the board, we’d be right in there with them.

But we will be performing no service to the poor people trying to make some sense of this if we continue to toss these numbers out in isolation without trying to figure out what might be causing the differences.

Email Ed
