Rats on a Treadmill?

I got this intelligent email from John Tumminaro.

I think you might be being a bit hard on the GeForce FX card. Sure, it doesn’t show a huge amount of benefit in conventional testing, but what are the benchmarks really telling us anymore?

If you look at the battery of tests that most of the hardware sites are running, the ones with the most proven engines are STILL DirectX 7-class games: Jedi Knight 2, Serious Sam 2, Quake III (my God, how old is this game now?), and more.

The only relevant tests I can find for even DirectX 8 are UT2003 and Aquanox (as in, they are the only games that make any serious use of DX8 shader technology), and Aquanox is anything but proven. UT2003 fares better, but without much else to compare it to, this doesn’t help much.

The rest of the DX8 tests are purely theoretical: Codecreatures, 3DMark, etc.

This still doesn’t even address the fact that we have no way of knowing what the DX9 capabilities of EITHER of these cards are, since we don’t have even a theoretical benchmark to go by.

I’m not defending Nvidia’s decisions with the GeForce FX; the noise/cooling situation is a joke and the lack of a 256-bit memory interface seems to be really holding this card back.

What I am saying (and what everyone has really been saying lately) is: what the hell is the relevance of benchmarking these cards anymore? I doubt anyone outside the 3D industry has any idea what either card is capable of in its true application, which is to produce quality next-gen shader effects.

Some people might argue that since these games aren’t out right now, why should they care? Well, I think that is the real problem for the people at ATI and Nvidia who design these things as well.

Consider that creating a high-powered DirectX 8 accelerator shouldn’t be too much of a challenge when you have 120 million transistors to play with, if you sacrifice next-gen performance by adding only barebones compatibility (Xabre, perhaps?) and make the chip perform in the tests it will actually be benchmarked on, instead of giving gamers compelling future features. After all, by the time anyone knows differently, the cards would already be sold.

I think it is surprising that, with all the chicken-and-egg acceptance problems we have seen in this industry, graphics companies continue to plow forward in adding more features. It isn’t enough to simply make them faster; they must also have all the complex technology that won’t show up outside a tech demo for years.

Why do they continue this, I wonder? If everyone wants faster performance now, why not sacrifice DirectX 9 features that will never be used until the card itself is obsolete? I’m not advocating this strategy; I’m just wondering if you have any insight into why the 3D industry continues to press ahead with whiz-bang features that will never be relevant within the expected lifetime of the cards. After all, DOOM III isn’t going to look its best or run at really great framerates on a first-gen NV20, will it?

Sorry, I didn’t mean to rant, but you seem to have some pretty sharp insights, and I thought you might be able to answer for us this question that no one seems to want to touch with a 30-foot pole.

By far the biggest reason why I don’t like the FX Ultra is not the relative lack of performance per se, but the heavy environmental price you pay for it.

One of the biggest developments over the past year has been the shift towards a quality computing experience. Even a whole lot of the loons don’t want to put up with a noisy, hot system anymore.

The FX flies in the face of that strong trend, and I think that’s going to be the major reason why it will flop, not because of performance.

Pretend you’re Dell. You think you’re going to want to offer this buzzsaw as an option? No, you’ll go with an R300/350 or the low-end FX.

That being said, questioning the addition of features to cards that mostly will never use them is very reasonable.

At first glance, video cards may seem to be joining CPUs in a sinking boat.

CPUs have offered sufficient power to handle the average person’s needs for a while now, and there’s little prospect of that changing radically any time soon. Eventually, that will change, but the average person’s need for additional power will probably jump erratically, in leaps and bounds, not in steady incremental steps.

Video cards are not in that position at all. When will the gamer stop wanting more? When a perfectly working holodeck goes on sale for the right price. This leaves considerable room for improvement. 🙂

More importantly, incremental improvements help. You can make games better in ways that show immediately, while you really can’t do the same sort of thing with email.

Why do video card manufacturers keep piling on features that most of the time will never get used? The real answer lies not with the makers of the product, but with its users, namely the game makers.

They’re faced with two problems.

First, the average game buyer doesn’t have the latest and greatest, or even anything particularly good. You have to fish where the fish are, so the focus for a typical game developer has to be the mediocre middle.

They may add some enhancements for the video elite, but they can’t spend all or even most of their efforts making a game stellar with an R300 or FX while leaving those with a GF2 MX playing the twenty-first-century equivalent of Pong.

It’s only when the R300/FX features are found in the future equivalent of that GF2 MX that they’ll go whole hog using those features.
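To make that concrete, here’s a minimal sketch of the kind of capability check a developer of this era might write to pick a rendering path, assuming the DirectX 8 C++ API and SDK headers; the `ChooseRenderPath` function and `RenderPath` enum are illustrative names, not anyone’s actual engine code.

```cpp
// Minimal sketch: query the card's capabilities and pick a rendering path.
// Shader-capable cards (GeForce 3/4 Ti, R300, GeForce FX) report pixel shader
// version 1.1 or better; a GF2 MX reports 0.0 and gets the fixed-function path.
#include <d3d8.h>

enum RenderPath { PATH_FIXED_FUNCTION, PATH_PIXEL_SHADER };

RenderPath ChooseRenderPath(IDirect3D8* d3d)
{
    D3DCAPS8 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return PATH_FIXED_FUNCTION;   // can't even query the card: play it safe

    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        return PATH_PIXEL_SHADER;     // per-pixel shader effects for the high end

    return PATH_FIXED_FUNCTION;       // multitexture blending for everyone else
}
```

A real engine would carry several such paths (and often separate art for each), which is a big part of why the whizzy features on a brand-new card take years to become the baseline rather than the bonus.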

Second, making a game has become a long, drawn-out process. The days are long gone when you could crank out a game in two or three months; now it’s more like two or three years.

Games nowadays involve large teams of programmers working on all different aspects of the game, and since mindmelding or Borg-like assimilation isn’t available yet, there will inevitably be failures to communicate.

Nor can you just rip everything up every six months and start anew when some new hardware feature comes out. You bend a bit and do what you can, but you’d never get a game out if you kept starting over.

Programmers are human, too, much as they’d like to deny it. They may pretend they’re Mr. Data, but they can’t absorb a new video programming language instantly and then immediately use it flawlessly. It takes time and more than a few errors to get comfortable with a new way of doing things.

Unlike Mr. Data, programmers can be lazy, too. Being human, they can often choose the easy way out.

Sticking all these features in new video cards serves as a bit of a goad to get those folks to eventually start using them, if for no other reason than fear of less lazy competitors.

If you waited until you could put these things in $50 cards, that still wouldn’t help those who bought cards before then (including those who bought more expensive ones). The programmers would still probably take two years to get around to using the techniques. And marketers would be left with only “faster, faster, faster.”

I’m afraid there’s no better answer to this. The only helpful hint is not to pay much attention to new features unless you plan to own the card for a long time, or you know a particular feature will be used in the not-too-distant future by a must-have game.

Email Ed
