
Help with a school project


disk11 · Member · Joined Dec 30, 2003 · Location: Charlotte
I am doing a project for my A+ cert class where I compare the current cards out on the market. I plan on bringing up the 2900 XT to somewhat explain the problems AMD/ATI are facing right now. So I ask: why did it perform so poorly? I know the high price didn't help, but was it a bad hardware design, bad drivers, or some combination of the two?
 
I'd say it lives up to its hardware specs. It has about the same hard numbers as the 3870 and performs about the same in games. The only thing the 2900XT has over the 3870 is memory bandwidth, and apparently it didn't need that much; it didn't need the 512-bit memory bus either. Building that bus probably cut into their transistor budget, and the card used more transistors than the 8800 Ultra while performing vastly under par.

I'd say it was a bad decision to use the 512-bit bus, and a bad allocation of transistors on the 80nm manufacturing process. The price didn't live up to the performance either.
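To put rough numbers on the bandwidth point, here's a quick back-of-envelope sketch in Python. The bus widths and effective memory clocks are the reference specs as I recall them (828 MHz GDDR3 on the 2900 XT, 1125 MHz GDDR4 on the 3870, 1080 MHz GDDR3 on the Ultra), so treat them as assumptions rather than gospel:

```python
# Back-of-envelope peak memory bandwidth: (bus width in bytes) * effective clock.
# Clocks/bus widths below are reference specs from memory -- double-check them.

def bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_clock_mhz / 1000

cards = {
    "HD 2900 XT (512-bit GDDR3)": (512, 1656),  # 828 MHz DDR -> 1656 MHz effective
    "HD 3870    (256-bit GDDR4)": (256, 2250),  # 1125 MHz DDR -> 2250 MHz effective
    "8800 Ultra (384-bit GDDR3)": (384, 2160),  # 1080 MHz DDR -> 2160 MHz effective
}

for name, (bus, clock) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, clock):.0f} GB/s")
```

That works out to roughly 106 GB/s for the 2900 XT, 72 GB/s for the 3870 and 104 GB/s for the Ultra. The 2900 XT had close to 50% more bandwidth than the 3870 yet performed about the same, which is a decent hint that bandwidth was never the bottleneck.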
 
The problem Ati has as well is the fact that they can't work as closely with game developers from the start (Nvidia: The Way It's Meant To Be Played).

I've also heard that it's harder to program for the R600 architecture so that it's used at its optimal level - I think each shader has 5 units that can work in parallel, but due to the software, each pass can be limited to 1-2 units, which severely hurts performance. This becomes more problematic when you compare the shader clocks of Ati and Nvidia (775 MHz vs 1.5 GHz).
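To illustrate why that slot utilization matters so much, here's a toy Python sketch using the unit counts and clocks mentioned in this thread (64 VLIW shaders with 5 ALUs each for Ati, 128 scalar shaders for Nvidia); these are ballpark assumptions, not exact specs:

```python
# Toy model: scalar shader ops issued per second, depending on how many of the
# 5 VLIW slots the compiler manages to fill on the Ati part.
# Counts/clocks are the ballpark figures from this thread, not exact specs.

ATI_SHADERS, ATI_SLOTS, ATI_CLOCK_GHZ = 64, 5, 0.775   # 64 VLIW units, 5 ALUs each
NV_SHADERS, NV_CLOCK_GHZ = 128, 1.5                    # 128 scalar shaders

def ati_ops_per_sec(avg_slots_filled: float) -> float:
    """Scalar ops/s if the compiler fills this many of the 5 slots on average."""
    return ATI_SHADERS * avg_slots_filled * ATI_CLOCK_GHZ * 1e9

print(f"Nvidia (scalar): {NV_SHADERS * NV_CLOCK_GHZ:.0f} G ops/s")
for filled in (5, 2, 1):
    print(f"Ati with {filled}/5 slots filled: {ati_ops_per_sec(filled) / 1e9:.0f} G ops/s")
```

On paper (all 5 slots filled) the Ati part comes out ahead at about 248 G ops/s versus about 192, but at 1-2 filled slots it drops to roughly 50-99 G ops/s, well behind the faster-clocked scalar shaders - which lines up with what you see in games.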

I also believe that the R600 is limited by its 16 ROPs. Even though pixel fillrate isn't that important nowadays, Ati has stayed with 16 units since the R400 days, while Nvidia has gone up to 32 (I believe) for the better.

dan
 
Sounds like a very interesting topic. Love to hear what you techs gotta say.

Dan, do you have an FAQ/guide on what "5 units that can work in parallel", ROPs, the "bit bus", floating point calculations, etc. are all about? I.e., what are the various architecture specs and how do they impact gaming performance?
 

This should help some: http://en.wikipedia.org/wiki/Radeon_R600

So, it looks like an inefficient architecture and nVidia raising the bar significantly caused the 2900's problems. Thanks everyone.
 
Let me get this straight..

So Nvidia has 128 independent unified shaders and ATI has 64 independent unified shaders. Each one of ATI's independent shaders has 5 units that can work in parallel, but programmers only utilize 1-2 of these units in each independent shader, effectively giving only 128 shaders?

Most of the new ATI/Nvidia cards have 16 ROPs, but Nvidia has up to 64 TMUs while ATI only has 16.
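For a rough feel of what those unit counts mean, here's a quick fillrate sketch (pixel fillrate = ROPs x core clock, texel fillrate = TMUs x core clock). The ROP/TMU counts and core clocks below are my assumptions from memory, and the Nvidia TMU figure depends on whether you count address or filter units, so check a spec sheet before quoting them:

```python
# Theoretical fillrates: pixel fillrate = ROPs * core clock, texel = TMUs * core clock.
# Unit counts and clocks are assumptions from memory -- verify against a spec sheet.

def fillrates(rops: int, tmus: int, core_clock_mhz: int) -> tuple[float, float]:
    pixel = rops * core_clock_mhz / 1000   # Gpixels/s
    texel = tmus * core_clock_mhz / 1000   # Gtexels/s
    return pixel, texel

cards = {
    "HD 2900 XT": (16, 16, 742),  # 16 ROPs, 16 TMUs, ~742 MHz core (assumed)
    "8800 GTX":   (24, 64, 575),  # 24 ROPs, 64 filter units, 575 MHz core (assumed)
}

for name, spec in cards.items():
    pixel, texel = fillrates(*spec)
    print(f"{name}: {pixel:.1f} Gpixel/s, {texel:.1f} Gtexel/s")
```

With those assumed numbers the pixel fillrates are close (about 11.9 vs 13.8 Gpixel/s), but the texturing gap is large (about 11.9 vs 36.8 Gtexel/s), which fits the point about ATI being stuck at 16 TMUs.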
 

Yeah, the issue being roughly that Ati's architecture is hard to program for (for optimal use).

I wouldn't do a direct Nvidia - Ati shader comparison though; you might want to study the capabilities of each a bit further.

dan
 