Intel has a DDR-capable board, the 845, but Intel isn’t going to turn the current SDR pumpkin into a DDR coach until after Cinderella shows up at the ball.
Via has a DDR board, and Intel keeps saying “we’re going to huff and puff and blow your house down.” When they’re not fighting over balloons, that is.
Meanwhile, Intel and the mobo manufacturers are getting ready for their version of Cookie Jar.
“Who built the mobo from the Via jar?
Who me? Yes, you! Couldn’t be! Then who?”
Somehow SiS has managed not to get Intel mad, but don’t worry. If they sell a bunch, Intel will just find something new to act out. I’m betting on Jack and the Beanstalk, myself.
I think Rambus is off in some corner somewhere picking petals off a flower and saying, “He loves me, he loves me not” while wondering why the love potion doesn’t seem to be working anymore.
And what are the stakes? Oh, just billions and billions of dollars.
Is there an adult in the house?
Maybe I was wrong about television not creating a generation of mentally stunted maniacs. 🙂
As If Things Weren’t Confusing Enough
We’ve said it before, and we’ll say it again: there is no point in looking at the PIV until the .13 micron Northwood comes out. Why settle for 2GHz now when you’ll likely get 2.5-3GHz in a few months?
What we have not said before, but think we’re beginning to see, is a shift in benchmarking.
Let me give you an example:
Take a look at the SysMark2001 scores for a 1.5GHz PIV using SDR, DDR, and RDRAM. Look at the Office Productivity SysMark. You’ll see DDR and RDRAM do a bit better, but not much (2-7%). These results are typical of what we’ve been seeing the past year or so. Note that it uses Word, Excel, PowerPoint, and Outlook.
Now go to the next page and look at the results from OfficeBench. It uses Word, Excel, and PowerPoint, too. Suddenly the difference between SDR and DDR jumps to 33%.
There’s something screwy somewhere. Which is the more accurate benchmark? Does DDR really help a ton, or was the benchmark tweaked to help DDR along?
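Just to make clear what those percentages mean, here’s a minimal sketch (in Python, using made-up placeholder scores rather than the article’s actual figures) of how an SDR-to-DDR gain is computed from raw benchmark scores. The same formula yields roughly 5% in one case and 33% in the other:

```python
# Rough sketch of how an SDR-vs-DDR gap is computed from benchmark scores.
# The scores below are hypothetical placeholders, not the article's figures.

def ddr_gain(sdr_score: float, ddr_score: float) -> float:
    """Return the DDR gain over SDR as a percentage (higher score = better)."""
    return (ddr_score - sdr_score) / sdr_score * 100

# SysMark2001 Office Productivity style result: a few percent apart
print(f"SysMark-style gap: {ddr_gain(100, 105):.0f}%")    # ~5%

# OfficeBench style result: same apps, much bigger spread
print(f"OfficeBench-style gap: {ddr_gain(100, 133):.0f}%")  # ~33%
```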
There are also some less definitive signs that the PIV does relatively better against the Athlon in OfficeBench than it does in other commercial benchmarks. Again, is it reasonably legit, or not?
Whether good or bad, a different approach is yielding very different results.
In a few months, we’ll no doubt see new versions of some of these benchmarking programs, and I would not be surprised to see this trend continue.
We saw something like this a few years ago between the PII and K6-2. Benchmarks initially showed them fairly even in non-floating-point-oriented programs. Then the benchmarks got changed to include more task-switching, and the K6-2 fell behind.
We Have To End Brainless Benchmarking
This high-performance community has gained experience since the PII/K6-2 days. It’s gotten very good at handling nuts-and-bolts details and coming up with numbers.
What it is not generally good at is interpreting and analyzing the numbers it comes up with. It’s not good enough just to crank out numbers.
Unfortunately, the article cited above is a good example of this. There’s a huge difference between pretty similar tests, but it wasn’t noted, or maybe even noticed.
There seems to be a pretty common belief that raw numbers are “more real” than any interpretation; that they somehow can’t be manipulated.
ROTFLMAO! Whoever controls the numbers controls the results.
Nor is the answer to blind acceptance blind rejection. Denying the validity of any benchmark just because you don’t like the results is just as childish as the games I’ve described above, and leaves you open to another Pied Piper.
We will have to understand our tools better.
I think it is likely that come early next year, we’re going to see DDR and/or PIV systems doing amazingly well by today’s standards. How much of it will be legitimate? How much of it will not?
The only thing we can say for sure is that it’s going to take a lot more effort to figure this out than anybody (including ourselves) has been willing to give this area so far.