I’ll briefly address the points raised in the article, but let me first point out what isn’t there: any answer to my question of why he didn’t mention the benchmarking evidence that at least tempered, if not contradicted, his assertions. He does get around to saying that the P4 did poorly in SysMark2000, but had he said that in the first article, his whole assertion would have made little sense.
If I were avoiding “mentioning the history of BAPCo and the other details in my article,” then what does “BapCo is in Intel’s backpocket, hence hopelessly biased” mean? What does “I don’t doubt Intel likes benchmarks to go their way. Nor do I doubt BapCo is susceptible to pressure” mean? I even made some personal observations on the subject.
I fail to see why a benchmark becomes useless if optimizations aren’t mentioned by the benchmarker. Optimizations are certainly made public by Adobe. All any reviewer has to do is look around a little at Adobe’s website to find that out. To say that a benchmark is somehow invalidated because the reviewer wasn’t spoon-fed specifications of a third-party product is silly.
As for benchmarks that don’t let you “see under the hood,” I’ve spoken about that before. However, none of the commonly used commercial benchmarks have ever let you do that; take the ZDNet benchmarks, for instance. They use Photoshop, too. Why not jump on them as well?
Per P4 performance in SysMark2001, I’ll point out that Mr. Smith based his whole article on SysMark2000, not SysMark2001. Now, all of a sudden, he’s talking about a much different test, one not used in his initial article.
There are radical differences between the two scores that can’t be explained by a somewhat higher clock speed or a different configuration. I don’t know why there are such differences, but looking into that might be a lot more productive than looking into Photoshop’s use of SSE.
If the lower of the two scores is correct, then there is little difference in the performance gap shown by SysMark2000 as opposed to SysMark2001.
If the higher of the two scores is correct, then there is indeed something interesting going on in SysMark2001, not SysMark2000, and maybe somebody ought to take a closer look at that.
It’s conceivable that, since different OSs were used in the two tests (low score: Win98; high score: Win2K), the OS makes a big difference in SysMark2001.
Yet another possibility is that the “low” score came from an April review and the “high” one came from a July review. There have been several patches to this program. Perhaps something interesting went on in the patches.
As you can see, there are a number of possibilities here. If somebody wants to prove evil, here’s a “to-do” list.
Per any attempt to “bury the connection between Bapco and Intel,” I don’t hang people on just innuendo or “connections.” That’s McCarthyism, and Mr. Smith is old enough to know what that means.
I’m more than open to considering solid evidence showing that BAPCo unreasonably slanted benchmarks in Intel’s favor. I didn’t say, and don’t say, that BAPCo and Intel must be innocent.
But I have a higher standard of proof than Mr. Smith. Innuendo isn’t good enough for me, and that’s where Mr. Smith and I differ. That’s why I objected to the article. That’s why I’m “angry.”
It’s not as if coming up with solid proof is impossible. It just takes more work than looking up an address.
I have an AthlonMP sitting in front of me. Joe drove down from Connecticut to hand it to me. Later today, I’m going to run an alternative set of Photoshop benchmarks created by somebody who did not come from or hang out with Intel: PS5Bench.
By doing that, by doing real testing, I should get some indication of whether the BAPCo scripts for Photoshop (or their possible weighting) really are cherry-picked. That’s the kind of evidence I want, and you should want it, too.
Even given that, though, Photoshop is just one test out of a dozen in SysMark2000. Even if everybody were guiltier than Satan and the selections were disgustingly cherry-picked, it would only have made a few percent difference in a test the P4 lost badly anyway.
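To make that arithmetic concrete, here’s a minimal sketch. The numbers and the equal-weight averaging are my assumptions, purely for illustration; this is not SysMark2000’s actual subtest list or scoring formula.

```python
# Hypothetical illustration: how much can one "cherry-picked" subtest move
# an overall suite score? All numbers are made up; the equal-weight average
# is an assumption, not SysMark2000's real weighting.

def overall_score(subscores):
    """Overall score as a simple equal-weight average (assumed)."""
    return sum(subscores) / len(subscores)

honest = [100.0] * 12             # a dozen subtests, all scoring 100
rigged = [130.0] + [100.0] * 11   # one subtest inflated by a hefty 30%

gain = overall_score(rigged) / overall_score(honest) - 1
print(f"Overall gain from rigging one of twelve tests: {gain:.1%}")  # 2.5%
```

Even a blatant 30% inflation of a single subtest nudges the overall number by only a couple of percentage points, which is the point: that kind of cheating couldn’t flip a test the P4 lost badly.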
Pretty lousy cheating if you ask me. Maybe they got better at it in SysMark2001; there at least seems to be better reason to believe that. But then you test SysMark2001 and write an article about that.
I did make an error in dating: Mr. Smith wrote the COSBI article in April 2000, not April 2001. That’s a distinction without a difference, though. The concept of COSBI is Mr. Smith’s baby, and he should have pointed that out in the article while bashing the alternatives.