Why Apple Isn’t Taken Seriously
In the PC world, when companies like AMD or Intel benchmark their products, they usually use standard benchmarks that are readily available to others. They describe configurations in enough detail so that, again, others can duplicate the effort.
When they deviate or modify a standard benchmark to some degree, they tell you about it, and show the benchmark with and without modification.
At least somewhere in the fine print, you’ll find a reasonable degree of equipment and configuration data, enough to see whether or not the equipment was at least relatively equal.
Perhaps most importantly, equipment gets reviewed one way or another by a lot of people. Some reviews are good, some not so good, but if a manufacturer makes an outrageous claim for a product, or engages in some skullduggery, somebody will call them on it, and the results often aren’t pleasant for the party called out.
If nothing else, the Intel-haters will scrutinize anything Intel does, and the AMD-haters will do likewise.
It’s messy, it’s chaotic, it’s often ugly, but it’s the real world, it works, and PC companies have to live with that world and temper their claims knowing that they’re going to be scrutinized and verified by at least somebody not inclined to give them the benefit of the doubt.
This is the real world. Not the Apple world, though.
In the Apple world, benchmarks are meant to be swallowed, not checked. Apple’s benchmarking meets few, if any, of the verification standards of the PC world. The testing is rarely reproducible, and Apple almost never provides the information independent parties would need to verify the findings.
Let’s look at the SPEC benchmarking for a bit and see how Apple mishandled it.
SPEC is a standard benchmark suite. Manufacturers take their equipment, test it against SPEC, tune it for maximum results within SPEC’s rules, then submit their results (which include a great deal of configuration information) to SPEC. If the submission passes SPEC’s rules, the official results get posted.
When people cite SPEC scores, they cite the official results from the SPEC site. This makes a lot of sense: you go to one place, and you can see not only the scores but all the configuration data, too.
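Those official scores aren’t raw timings, by the way: SPEC CPU composite scores are geometric means of per-benchmark ratios (a reference machine’s time divided by the measured time, scaled by 100), which is why a single slow result drags the whole number down. A minimal sketch of that scoring scheme, with made-up benchmark names and timings:

```python
from math import prod

def spec_style_score(results):
    """Composite score as the geometric mean of per-benchmark ratios.

    Each ratio = reference_time / measured_time * 100, the normalization
    scheme SPEC CPU uses. The benchmark names and timings below are
    invented for illustration, not real submissions.
    """
    ratios = [ref / run * 100 for ref, run in results.values()]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical (reference_seconds, measured_seconds) per benchmark:
results = {
    "gzip": (1400, 160),
    "gcc":  (1100, 120),
    "mcf":  (1800, 210),
}
print(round(spec_style_score(results)))
```

Because it’s a geometric mean, the composite always lands between the best and worst individual ratio, so no single cherry-picked benchmark can carry the score.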
This works fine for everyone. Except Apple.
You see, if Apple did what everyone else does, they could hardly claim to make the fastest desktop computer in the world based on their scores compared to the official SPEC scores. Even a Macster would have trouble saying 1100 or 1200 was less than 850.
So they created their own little world and set of rules where (surprise, surprise) Apple wins, and ignored the real-world scores and results.
And they expect the world to take them seriously.
The Bottom Line
If you want to play the game, you have to follow the rules.
So long as Apple doesn’t follow the established rules of the PC industry, their performance claims won’t be taken seriously outside of the Macghetto.
SPEC is a sort of race track running time trials. Everybody tweaks and tunes their car to the max according to the rules, then they run to see whose is fastest, and the results get posted on the leaderboard.
This is hardly unfair or unreasonable. If you don’t like those rules, find another track.
What Apple wants to do is erase the leaderboard, and count only the scores of the other cars after Apple has (de)tuned them.
Imagine you wanted to race somebody to see who had the faster car, and that person told you, “I have to tune your car first.” Would you not laugh in his face?
Well, that’s essentially what Apple is trying to do.
Get real. Now that’s bias! Those aren’t the rules of the track. If you’re too slow, work on making your car faster, not on making the other car slower.
If your car isn’t tuned up enough, that’s not the other guy’s fault. It’s your job to tune your car up to the max, not detune the other guy’s.
And if you don’t, can’t, or aren’t quite ready to tune your car up, then you shouldn’t race, or not race until you’re ready. You don’t make up your own rules and claim victory.