The Silly Wars Start . . .

There’s no doubt Intel and AMD are going to get into a big fight to win the hearts of Christmas shoppers and others this fall, but from the first shots, it’s beginning to look like a catfight.

A few days ago, I ran across somebody stupid who said something along the lines that the PIV was going to need liquid nitrogen cooling at some point in its ramp-up.

This is of course completely ridiculous, and given the furnaces AMD is putting out, a case of the pot calling the kettle black.

Though going to .13 micron will help, I don’t doubt that a 3.5GHz PIV will be a pretty hot puppy, likely just as hot as some of the current TBirds.

But that isn’t good enough for that particular writer. Being hot isn’t damaging enough; it has to need liquid nitrogen cooling.

A More Sophisticated Case

I’m saddened to have to point out another particular case as the first of what will no doubt be more to come.

Essentially, the article goes as follows:

  • SysMark is a product of BapCo.
  • BapCo is in Intel’s back pocket, hence hopelessly biased.
  • Want proof? Look at this Photoshop benchmarking! What more do you need?

    Now if you don’t know any better, this can look like a pretty convincing argument.

    It isn’t.

    I don’t doubt Intel likes benchmarks to go their way. Nor do I doubt BapCo is susceptible to pressure. One of the more bemusing experiences when running SysMark2000 is seeing a word processing document purporting to be a letter from BapCo to a potential sponsor offering, as one of the benefits of sponsorship, the ability to influence the benchmark.

    But where’s the beef? Slurring doesn’t end the story. If you say the benchmark is flawed, show it.

    There’s one rather huge problem with the “Intel rules BapCo” conspiracy theory. If this were so, then why does the PIV keep getting its ass whipped in SysMark by TBirds, even when the Athlon spots it hundreds of MHz AND SSE?

    How can this be? If BapCo is bought and paid for, Intel should sue for not getting its money’s worth.

    More importantly, how could any reasonable, objective claim of horrible BapCo bias not consider or even mention this highly inconvenient truth?

    The only “proof” of bias presented is Photoshop (and we’ll see later there’s an excellent reason for that).

    Is Photoshop some obscure program nobody uses? No. It’s a major, popular image editing program, the program of choice for many if not most professionals in the field.

    So how can choosing such a program be unreasonable, much less evil? Maybe not politically correct in front of certain audiences, but no more than that.

    What’s so bad about Photoshop? The great evil apparently is that it is heavily optimized for SSE and not for 3DNow.

    If you want to consider that evil, fine, but shouldn’t Adobe rather than BapCo be called the evil ones?

    In any case, what Photoshop was or was not optimized for has hardly been a secret.

    If I’m running Photoshop, I just want to know which machine will run it faster. If Intel systems run Photoshop faster because of SSE optimizations, that’s good for me and something I want to know, no matter how “unfair” it is to some other side. I don’t want to spend extra time waiting to
    get my work done to fulfill somebody else’s agenda. It’s my money. I want the best choice for me, not you.

    The only potential point that could be raised here against BapCo is that the Photoshop script they use is unrepresentative of typical Photoshop use and far too heavily weighted towards SSE-optimized filters.

    However, that is a far different and more subtle point than “How dare Adobe use only SSE optimizations!!” The only potential legitimate accusation of bias by BapCo is precisely the one not addressed.

    There may be some merit to that point. Probably the best way to find out is to run an alternative set of Photoshop benchmarks and see how well those percentages hold up, and we’ll do that within the next couple days.

    However, if you’re testing a program that largely consists of SSE-optimized filters, you would have to try pretty hard (and with an agenda just as badly biased) to come up with a script that tested Photoshop without using them.

    We suspect we may find the margin of improvement to be somewhat less than that reported by SysMark, but we’ll be really surprised if we find little to no difference. We shall see.
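    To see why the mix of a script matters so much, here is a rough sketch in Python of the arithmetic involved. Every number, task name, and the overall_speedup helper below are hypothetical, invented purely for illustration; they are not measurements from SysMark or from our upcoming tests. The point is only that identical per-filter speedups can produce very different overall margins depending on how heavily a script leans on SSE-optimized filters.

        # Hypothetical sketch: the same per-task speedups give very different
        # overall margins depending on how a script spends its time.
        # None of these numbers are measured; they are made up for illustration.

        # Assumed speedup of an SSE-capable system over a baseline, per task type.
        speedups = {"sse_filter": 1.35, "plain_filter": 1.02, "file_io": 1.00}

        def overall_speedup(time_fractions):
            """Combined speedup (Amdahl-style) when the baseline spends the
            given fraction of its run time on each task type."""
            new_time = sum(frac / speedups[task] for task, frac in time_fractions.items())
            return 1.0 / new_time

        # A script dominated by SSE-optimized filters vs. a more mixed workload.
        benchmark_mix = {"sse_filter": 0.70, "plain_filter": 0.20, "file_io": 0.10}
        typical_mix = {"sse_filter": 0.25, "plain_filter": 0.35, "file_io": 0.40}

        print(f"Filter-heavy script: {(overall_speedup(benchmark_mix) - 1) * 100:.0f}% faster")
        print(f"Mixed workload:      {(overall_speedup(typical_mix) - 1) * 100:.0f}% faster")

    If real-world use looks more like the second mix than the first, the headline margin shrinks accordingly, which is exactly the sort of thing an alternative script would show.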

    Does Photoshop do much better on an AthlonMP than a TBird? Sure does, as we told you almost two months ago (it’s towards the end of the article).

    However, we also told you how all the other applications did, too. Photoshop was the only app which showed a huge increase. Two others
    (Premiere, Windows Media Encoder) showed about an 8-9% improvement. The rest showed little to no improvement.

    So the article points out the only application where SSE optimization makes a big difference, and tries to pass it off as typical.

    Actually, this sort of result is fairly typical of what happens in SysMark2000. One or a few programs respond a lot to a particular change, and the rest show little to no effect.

    For example, SSE does nothing for CorelDraw, but extra memory or FSB sure does, while it’s the opposite for Photoshop. Should we start suspecting a grand conspiracy between BapCo and memory manufacturers, too?

    Or should we presume, until we get far better proof than we have so far, that CorelDraw is optimized to use a lot of memory, and the others aren’t?

    When you merge these apples and oranges into the SysMark composite scores for a TBird-versus-AthlonMP comparison (and not all of the difference is SSE, though for Photoshop it pretty much is), you get about a 4% increase in the overall ranking and 8% in the Internet Content Creation section.

    This is not exactly huge, and usually not enough for the PIV to beat an Athlon in any SysMark category or subcategory (I’ve seen one exception where a PIV nudged an Athlon).
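    For the curious, here is a minimal sketch in Python of how that dilution happens. The per-application ratios below are invented for illustration (they are not the actual SysMark results), and the geometric-mean composite is only an assumption about how a suite like this rolls individual scores into one number.

        from math import prod

        # Invented per-application speedups (1.00 = no change): one big outlier,
        # a couple of modest gains, and a majority that barely move.
        speedups = [1.30, 1.09, 1.08, 1.01, 1.00, 1.00, 1.00, 1.00]

        # Assumed geometric-mean composite, purely for illustration.
        composite = prod(speedups) ** (1.0 / len(speedups))

        print(f"Composite gain: {(composite - 1) * 100:.1f}%")  # roughly 5-6%

    With these made-up numbers, a single 30% outlier plus a couple of 8-9% gains comes out to only a few percent overall, the same ballpark as the composite figures above.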

    Other Agendas About?

    The article says some nice things about CSA Research, which has come out with its own benchmarks. Unfortunately, they only work in a Win2K environment and essentially just test MS Office applications. Not exactly a comprehensive benchmark.

    However, in all fairness, the author doesn’t push that benchmark as an alternative but rather his concept of a Comprehensive Open-Source Benchmarking Initiative (or COSBI, as he calls it in the article).

    This was something he suggested in an article he wrote about four months ago for another website.

    Now I can’t say I love BapCo, or think an open-source benchmark is a bad idea, but a prime beneficiary of the latter, whether directly or indirectly, would be the author (who was quite coy about connecting his name with COSBI in that particular article).

    The Damn Duck Quacked!

    This article essentially damns a duck for quacking. If you have an SSE-optimized program, it’s going to do better with an SSE-enabled processor than with one that isn’t.

    If you want to get mad at the duck for not barking, too, go talk to the duck’s mother, not the guy recording the quacking.

    Now if the guy’s recording makes the quacking sound louder than a lion’s roar, that’s what you go after, but then you have to prove that.

    A good way to test the reasonableness of a position is to see what solves the problem. Do we get rid of Photoshop because it’s not politically correct in front of an AMD crowd? That’s ludicrous.

    Do we come up with a “fair” script that doesn’t take advantage of SSE, no matter how unrepresentative it is of actual use? That’s ridiculous, too. Since we’re out to be “fair,” why don’t we look at those programs where the Athlon whomps the PIV and even the playing field there, too?

    The Athlon holds up very well on its own merits. It doesn’t need affirmative action from its “friends.”

    For sane people, at least, the purpose of a benchmark is to provide a reasonably representative test with which to compare a product to others. Not to find just those where your side wins.

    Only fanatics want that, and you don’t want fanatics telling you what to buy.

    Fanatics Are Bad For You

    Fanatics believe for whatever reason that the triumph of their cause is more important than any other truth or reality.

    To the reasonably informed, thinking person, they’re easy to spot, because they always go too far, and they never see the counter from the other side coming, even when it’s blindingly obvious to those who can see.

    I’m afraid we have a case of that here. What’s sad is that there is some truth, and some room for reasonable doubt, on this issue, but instead of meticulous analysis, we get slurring, incomplete information, omissions of inconvenient truths, and pandering to an audience’s prejudices.

    Now the idea of an open-source benchmark is a good one, but not at that price.

    It’s a shame, but I know there’s going to be a ton more of the same from both sides from a variety of sources.

    Just remember that, and beware of those who would advance their cause at the cost of yours.
