The long-awaited HD 4850 video card has debuted. Overall, it’s not quite as good as nVidia’s 9800GTX, but with few exceptions, it’s pretty much even with it.
nVidia certainly thinks so, too; that’s why it’s dropping the price on the GTX to 4850 levels and preparing a GTX+ card with a roughly 10% higher clock.
The CPU side of AMD needs to take note of all this. AMD doesn’t have overall performance leadership, but it does have products that are as good as or better than the competition’s midrange-and-better products, and that’s enough to make some decent money. That comparison is what determines the price AMD can get for any mass-produced product it makes.
You may say, “Doh,” but that’s precisely what AMD has not been doing in the CPU field. When none of your products in a product range are as good as the competition’s, you end up having to charge considerably less, and/or your competition can charge a premium for its products.
In contrast, the 4850 outperforms some of nVidia’s older midrange products, forcing nVidia to push the GTX down in price to hold the fort at $200.
True, $200 isn’t the greatest price in the world for your high-end video card, but you have to start somewhere.
But . . .
nVidia also announced that it is enabling GPU acceleration for PhysX for its upper-end cards now, and eventually for all GeForce 8 and 9 cards.
How significant is this? In the short term, very little; in the long term, it could be quite a lot.
While many games support the PhysX API in one form or another, they do not currently support this specific feature. Right now, the only game that supports it is Unreal Tournament, and while the acceleration helps an awful lot where it’s used, it isn’t used very often.
Then again, game developers aren’t sadists. If too much physics brings a game to a crawl, they’re not going to put that much physics in their scenes.
No doubt some game developers that currently support the API one way or another will eventually issue patches to support this. However, the biggest impact will be on future games built from the ground up with a lot more physics.
And what is AMD’s answer to all this? They watched Intel and nVidia buy Havok and Ageia (the two players in the physics-acceleration game), mostly because they couldn’t afford to, or didn’t want to, pay for one themselves. Now they’re going to have to pay Intel instead.
Worse, it looks like the now Intel-owned Havok has been working heavily on CPU-based, not GPU-based, physics acceleration. That makes perfect sense given Intel’s upcoming “swarm of CPUs” Larrabee, but much less sense for AMD’s CPUs.
What about GPU acceleration? Well, let the Havok people tell you:
“. . . Havok and AMD intend to “investigate the use of AMD’s massively parallel ATI Radeon GPUs to manage appropriate aspects of physical world simulation in the future.” According to David O’Meara, managing director of Havok, “the capabilities of massively parallel products offer technical possibilities for computing certain types of simulation. We look forward to working with AMD to explore these possibilities.”
I see. Does that Havok spokesman sound like he’s in any kind of rush to do this, or does it sound more like “put it on the back burner”? Even if you’re not so suspicious, these folks are just starting to look into beginning something nVidia has already done. Somebody’s way behind the curve here.
None of this should stop you from buying an HD 4850 for other reasons, unless you plan to hang on to it for a very long time, but it does make you wonder about the future of the former ATI.