
NVIDIA GFFX Cards GET Major Uplift from Microsoft!

It's pretty much common knowledge about the 32-bit FP precision. It's not that it's malfunctioning, it's the fact that it IS using it. ATI is only using 24-bit, which is the DX9 specification, and since NVIDIA's hardware doesn't have an option for 24-bit, it runs at the next best thing, 16-bit. I believe that's correct, though you may want to take it with a grain of salt; it's been a while since I looked into the FP precision.
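The precision tiers being argued about here can be seen directly in software: Python's standard `struct` module can round-trip a value through IEEE 16-bit (half) and 32-bit (single) precision, showing how much accuracy each tier keeps. DX9's FP24 sits in between but has no standard software equivalent, so this is only an illustrative sketch of why the precision choice matters:

```python
import struct

def round_trip(value, fmt):
    """Store value at the given IEEE precision and read it back.
    'e' = 16-bit half precision, 'f' = 32-bit single precision."""
    return struct.unpack('<' + fmt, struct.pack('<' + fmt, value))[0]

exact = 1.0 / 3.0
half = round_trip(exact, 'e')    # FP16: 10-bit mantissa, ~3 decimal digits
single = round_trip(exact, 'f')  # FP32: 23-bit mantissa, ~7 decimal digits

print(abs(half - exact))    # FP16 keeps far less of the value
print(abs(single - exact))  # FP32 error is orders of magnitude smaller
```

The same shader math done at FP16 accumulates visibly more rounding error than at FP24 or FP32, which is why "below spec" is a real image-quality concern and not just a numbers game.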
 
No, that's cool. I was seriously wondering if the 32-bit precision was failing at the hardware level. Thanks for the info.
 
dguy6789 said:
... And at Circuit City, a 1.6 Celeron boots faster than a 2400 XP. Everywhere I go, Intel boots up faster for some reason. The thing goes across the screen once, and only once, on every single Intel machine, then a black screen for like 4 seconds, then the desktop. I don't know how, but indeed, Intel boots faster in Win XP. The 500 boots faster than a 1600+ with a faster hard drive. WHY!!!??

Doh!!! Maybe because the Intel machines are in Standby mode and not really powered off all the way.
 
Atari said:
Hey all!

Just some news for Nvidia users to rejoice about, and some news for ATI people to ***** about.

http://babelfish.altavista.com/babe...dware.com/actu/news/11346.htm&lp=fr_en&tt=url


Looks like DirectX 9.1 will provide FX cards with Pixel Shader 2.0 performance at 32-bit without any performance loss!

1/ First off, like others in this thread, I'm not so sure that I trust the source.

2/ NVIDIA's problem is not necessarily slow FP32 but the fact that on its architecture, and on just about any card out there at present, it is impossible to get FP32 to run faster than FP24 or FP16. The problem is that FP24 is the standard, so NVIDIA is forced to run either above or below spec. The only real ways that DirectX could help NVIDIA are to allow FP16 more often, which would be a backward step, or to incorporate some of NV's special features into DX9.1 as full or optional specs.

3/ Regardless of the floating-point precision used (16 or 32), NVIDIA has shown itself to be too slow. Even when cheating and using 16-bit or integer ops in places where it should be using FP24, it often still loses (see the HL2, Halo, and Tomb Raider benchmarks). The real issue here is that the FX series can only process a fraction of the FP operations per second that the DX9 Radeon lineup can, and that is down to the number of floating-point units in the respective chips. Only a few of the units in the FX line can handle DX9 full precision (in their case, FP32), and even when dropping down to FP16 they still don't have the throughput of ATI's products. In fact, ATI engineers admit that their drivers aren't even properly optimised yet, since there is a shader unit in their cores which hasn't yet been enabled by the driver and only works in a few situations with the current drivers (expect more performance when this comes onstream). This all basically means that software cannot solve the FX's hardware problems unless DX9.1 goes backward, which is very unlikely, especially with DX9 games out already.
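The throughput argument above reduces to a simple back-of-envelope model: peak shader FP ops/sec scales with the number of units that can actually execute at the target precision, times the clock. A hedged sketch with made-up unit counts and clocks purely for illustration (these are NOT real NV3x or R3x0 specifications):

```python
def peak_fp_ops(units_at_precision, clock_mhz, ops_per_unit_per_clock=1):
    """Peak shader FP ops/sec: number of units able to run at the
    target precision, times clock, times ops each issues per clock."""
    return units_at_precision * clock_mhz * 1_000_000 * ops_per_unit_per_clock

# HYPOTHETICAL chips, for illustration only -- not real specs.
chip_a = peak_fp_ops(units_at_precision=4, clock_mhz=500)  # few full-precision units, high clock
chip_b = peak_fp_ops(units_at_precision=8, clock_mhz=400)  # more units, lower clock

print(chip_a)           # → 2000000000
print(chip_b)           # → 3200000000
print(chip_a / chip_b)  # → 0.625
```

The point of the toy numbers: a clock advantage can't make up for having fewer units that run at full precision, and no driver or API update changes how many units are in the silicon.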
 
I'll believe it when a reputable source that actually speaks English says it... until then... MWHAHAHAA
 