
Differences in hardware of Video Cards?


scm007

OK, I am very interested in getting the best of the best on the video card market, but from what I've read of people "softmodding to XT PE with such-and-such a clock speed," what the heck are the actual physical, hardware differences between each line of GPUs? If I can just buy the GT and then softmod and overclock it to be exactly the same as the 6800 Ultra Extreme, why not?

So, what are the hardware differences between the X800 Pro, the X800 XT, the 6800 GT, the 6800 Ultra, and the 6800 Ultra Extreme? Thanks.
 
The X800 Pro and the XT are both physically the same card, I believe (there may be a small difference I'm not aware of, so I'm not 100% sure). The difference is that the Pro probably didn't pass quality control as an XT (not all 16 pipelines worked, or didn't work at XT speeds). There is a chance the card DID pass quality control as an XT and was released as a Pro instead to keep up with demand, but from what I've seen that chance is fairly small.

The 6800 GT, Ultra, and Ultra Extreme are again all the same card (of this I'm nearly 100% positive). The difference between these cards is, again, either the chip not passing quality control as a better card, or nVidia simply needing to keep up with demand for lower-end models. I do not know the success rates on overclocking these cards to the higher models' speeds.
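
To put rough numbers on the binning point: theoretical pixel fillrate is just active pipelines times core clock, so both knobs the vendors vary between bins (disabled pipes and lower clocks) show up directly. A minimal sketch, using the commonly cited reference clocks (treat them as approximate, not official figures):

```python
# Rough illustration: theoretical pixel fillrate = active pipelines x core clock.
# Clocks and pipe counts are the commonly cited reference specs, not guaranteed.

def fillrate_mpix(pipes: int, core_mhz: int) -> int:
    """Theoretical pixel fillrate in megapixels per second."""
    return pipes * core_mhz

cards = {
    "X800 Pro   (12 pipes @ 475MHz)": (12, 475),
    "X800 XT PE (16 pipes @ 520MHz)": (16, 520),
    "6800 GT    (16 pipes @ 350MHz)": (16, 350),
    "6800 Ultra (16 pipes @ 400MHz)": (16, 400),
}

for name, (pipes, mhz) in cards.items():
    print(f"{name}: {fillrate_mpix(pipes, mhz)} Mpix/s")
```

This is also why a successful softmod plus overclock can close most of the gap: the silicon is the same, and fillrate scales linearly with whatever pipes and clocks you can actually enable.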


JigPu
 
Usually card models scale by the type of memory they use and the clock speeds they're set at. For example, the 6800 Ultra uses very fast GDDR3 RAM and is clocked the highest of the series, while the 6800 GT uses slower RAM and is clocked a little lower.

Also, nVidia and ATI scale the number of rendering pipelines in their GPUs.

I doubt you'll be able to get Ultra Extreme clocks from a 6800 GT, but from a price/performance perspective it's a great buy: $100 less than an Ultra, and it can be OC'd to nearly the same speeds.
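
To show what the memory difference between models is actually worth: peak bandwidth is effective memory clock times bus width. A quick back-of-envelope sketch, assuming the 256-bit memory bus these cards use and the commonly cited memory clocks:

```python
# Back-of-envelope peak memory bandwidth. Assumes the 256-bit memory bus
# the X800 and 6800 series use; clocks are physical, DDR doubles them.

def bandwidth_gbs(mem_mhz: float, bus_bits: int = 256) -> float:
    """Peak theoretical bandwidth in GB/s for DDR memory on a given bus."""
    effective_mhz = mem_mhz * 2            # DDR: two transfers per clock
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

print(f"6800 GT    (500MHz GDDR3): {bandwidth_gbs(500):.1f} GB/s")   # ~32.0
print(f"6800 Ultra (550MHz GDDR3): {bandwidth_gbs(550):.1f} GB/s")   # ~35.2
```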

Physically the PCB layout between the different versions of the cards is very similar.

Check out some of the other posts for more specific information on the card you have your eyes on.
 
The X800 Pro uses 2ns GDDR3, with a 475MHz core and RAM at 450MHz ×2 (900MHz effective). It has 16 pipes, but 4 are laser-locked, so only 12 are active. The X800 Pro VIVO (for now) has 16 pipes that aren't laser-locked; you just need the Gigabyte BIOS to unlock the 4 stagnant pipes. Most have 1.6ns RAM (but not all). The X800 XT PE is a 16-piper with a 520MHz core and 560MHz ×2 (1120MHz effective) RAM rated at 1.6ns. Also, ATI's cards have 'only' 160 million transistors and do not support Shader Mark 3.0, 'only' Shader Mark 2.0b.
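
A quick note on those ns ratings: the figure is the RAM's cycle time, so its rated physical clock is roughly 1000 / ns in MHz, and DDR then doubles the effective data rate. A minimal sketch of the arithmetic:

```python
# A RAM chip's "ns" rating is its cycle time, so its rated physical clock
# is roughly 1000 / ns in MHz; DDR then doubles the effective data rate.

def rated_mhz(cycle_ns: float) -> float:
    """Approximate rated physical clock for a given cycle time."""
    return 1000.0 / cycle_ns

for ns in (2.0, 1.6):
    mhz = rated_mhz(ns)
    print(f"{ns}ns GDDR3: ~{mhz:.0f}MHz physical (~{2 * mhz:.0f}MHz effective)")
```

So 2ns parts are rated for roughly 500MHz and 1.6ns parts for roughly 625MHz, which is where the overclocking headroom on the 1.6ns cards comes from.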

The GeForce 6800 GT also uses 2ns GDDR3, whilst the GeForce 6800 Ultra uses 1.6ns GDDR3; both have 16 pipes, at 350MHz and 400MHz respectively. They support Shader Mark 3.0 (whoopie!!!) and have better OpenGL speed, whilst the X800s have better DirectX 9 speed. Also, I believe they have, what, 220 million transistors or so? :drool:
 
False Christian said:
…ATI's cards have 'only' 160 million transistors and do not support Shader Mark 3.0, 'only' Shader Mark 2.0b. […] They support Shader Mark 3.0 (whoopie!!!)…

You are correct except for the Shader Mark; that's a benchmark. I think you meant to say Shader Model 3.0. :)
 