
The GeForce FX (NV30) will have less memory bandwidth than a Radeon 9700!!!


Overclocker456

Member
Joined: Feb 2, 2002
Location: New York
Boy what a shock this is... Clock speed isn't everything folks...

"The GeForce FX will be using 2ns chips. With DDR2 at 1GHz on a 128bit memory bus, the raw memory bandwidth comes out to 16GB/sec. The raw bandwidth of the Radeon 9700 Pro, with its 256bit memory bus and DDR running at 620MHz, is 19.8GB/sec."

That says it all... So all the idiots who say that 1GHz DDR2 memory is going to destroy the Radeon 9700, think again...
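If anyone wants to sanity-check those raw numbers, here's a minimal Python sketch of the standard formula (bus width in bytes times effective data rate). The figures are just the ones quoted above, and the helper name is my own:

```python
# Minimal sketch: raw bandwidth = (bus width in bytes) * (effective data rate).
# Figures are the ones quoted in this thread, not official spec sheets.

def raw_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Raw memory bandwidth in GB/s (1 GB = 10^9 bytes, as marketing specs use)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

print(raw_bandwidth_gbs(128, 1000))  # GeForce FX: 128-bit bus, DDR2 at 1GHz effective -> 16.0
print(raw_bandwidth_gbs(256, 620))   # Radeon 9700 Pro: 256-bit bus, DDR at 620MHz effective -> 19.84
```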
 
Yeah, I fail to see how the NV30 is going to smash the Radeon 9700, when the Radeon has more memory bandwidth. Should be interesting to see the outcome of this battle of the video giants...
 
I can't wait for the NV30 to come out, and the reason is that the competition should lower prices :D Hopefully when it comes out, the 9700's price will be a little more affordable. I mean, I'd still like to see how it performs, but the price/performance competition with ATI is more of a reason to want it out (at least to me) :)
 
Doesn't history like to repeat itself? Does anyone else see the resemblance between this and the Geforce 3 Ti200/Radeon 8500 war?

-PC
 
If it OCs massively and without much cooling, it will be worth it to me; otherwise... poop... The ATI cards run MobyDock (an OS X bar emulator type thing) like crap, but... for better gaming... I might make the sacrifice :( :D :eh?:
 
Overclocker456 said:
Boy what a shock this is... Clock speed isn't everything folks...

"The GeForce FX will be using 2ns chips. With DDR2 at 1GHz on a 128bit memory bus, the raw memory bandwidth comes out to 16GB/sec. The raw bandwidth of the Radeon 9700 Pro, with its 256bit memory bus and DDR running at 620MHz, is 19.8GB/sec."

That says it all... So all the idiots who say that 1GHz DDR2 memory is going to destroy the Radeon 9700, think again...

I don't think that's a real problem for the GeForce FX...
Whatever, we'll see benches soon!
 
You guys are funny, bashing the NV30 before you have even seen a single benchmark, and I really think the idiot comment is taking it a bit far there, Overclocker456. And just so you know, bandwidth isn't everything either; the memory interface is just as important as the raw bandwidth. Look at the Parhelia: twice the bandwidth of the 4600, but the 4600 smokes it. So we will just have to see how it turns out.
 
Cullam3n said:
Doesn't history like to repeat itself? Does anyone else see the resemblance between this and the Geforce 3 Ti200/Radeon 8500 war?

-PC

It could play out like that... but the only reason the 8500 lost initially was its crappy drivers, and the 9700 Pro has much better drivers at release than the 8500 had three months after its release, lol...
 
16GB/s x 4x compression = virtual 48GB/s, which > 20GB/s.

We'll see how this compression holds up, but the NV30 is coming out 6+ months after the 9700 and it will cost more.

It'll be faster. Duh.
 
snyper1982 said:
look at the parhelia, twice the bandwidth of the 4600, but the 4600 smokes it. so we will just have to see how it turns out.

Got a link? I've seen nothing but the Parhelia beating the 4600 regularly with AA and/or AF on. Never mind in multi-monitor.

As for the raw memory bandwidth, I'm pretty sure the NV30 has that LMA III (whatever they call it) tech going for it. Remember, the GF2 Ultra had more bandwidth than the GF3, but the GF3 still beat it thanks to bandwidth-saving features.
 
Link.

No matter how high the core is clocked, if the GPU isn't fed enough data, performance will suffer. A GPU this powerful needs a ton of memory bandwidth to operate at or near its top speed. NVIDIA's solution to this problem is threefold. NVIDIA has implemented a third-generation version of their very efficient "Lightspeed" Memory Controller, which operates with four independent 32-bit memory controllers, for an effective 128 bits. In addition, GeForce FX boards will be populated with ultra-fast DDR-II-type memory with effective clock speeds hovering around 1GHz! The combination of this high-speed memory and NVIDIA's memory controller offers about 16GB/s of bandwidth when operating at 500MHz. The final boost comes from a proprietary 4:1 lossless color compression scheme that effectively raises theoretical max bandwidth to 48GB/s.
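Just to sketch where those numbers come from (the per-controller breakdown and the implied multiplier below are my own reading of the quote, not anything NVIDIA has published):

```python
# Sketch based on the quote above; the breakdown and implied multiplier are my
# own reading of it, not published NVIDIA figures.

controllers = 4           # four independent memory controllers
bits_per_controller = 32  # 32 bits each, 128 bits total
effective_clock_hz = 1e9  # DDR-II at an effective ~1GHz

raw_gbs = controllers * (bits_per_controller / 8) * effective_clock_hz / 1e9
print(raw_gbs)  # -> 16.0 GB/s, matching the quoted raw figure

claimed_effective_gbs = 48.0
print(claimed_effective_gbs / raw_gbs)  # -> 3.0x implied average multiplier,
                                        #    even though the compression is billed as 4:1
```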
 
It's not how fast the memory is... it's how efficient the architecture is.

LMA III is supposed to be a lot more efficient than Hyper Z
 
donny_paycheck said:
16GB/s x 4x compression = virtual 48GB/s, which > 20GB/s.

16 x 4 = 64 ... 16 x 3 = 48

Obviously Nvidia knows that they won't get 4x the bandwidth all the time from their 4:1 compression, so they advertise 3x (48GB/s). Different situations will allow for different levels of compression; it's not a fixed amount.

As I have said in other threads, the 20GB/s the 9700 gets doesn't take ATI's compression into account (Hyper Z III), so comparing those two numbers is useless. The 9700 probably gets at least 40GB/s of "effective" bandwidth from compression, but since Nvidia has just invented that "spec," no other company has given "effective bandwidth" (or whatever you want to call it) figures before.

So, the 9700 has 20GB/s and the GFFX has 16GB/s of real bandwidth,

and the 9700 has up to ??GB/s and the GFFX has up to 48GB/s of compressed/effective bandwidth. No conclusion as to which is better can be drawn from that; I think only the benches will show one to be better than the other.
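To put that in numbers, here's a quick hypothetical sketch: apply a range of assumed average compression ratios to both cards' raw bandwidth. The ratios are made up for illustration; neither company publishes a real-world average.

```python
# Hypothetical sketch: "effective" bandwidth under assumed average compression
# ratios. The ratios are illustrative only; nobody publishes a real-world average.

raw = {"GeForce FX": 16.0, "Radeon 9700 Pro": 19.8}  # GB/s, raw figures from this thread

for ratio in (2.0, 3.0, 4.0):
    summary = ", ".join(f"{card}: {bw * ratio:.1f} GB/s" for card, bw in raw.items())
    print(f"assumed {ratio:.0f}:1 average -> {summary}")
```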
 
DaddyB said:
16 x 4 = 64 ... 16 x 3 = 48

Obviously Nvidia knows that they won't get 4x the bandwidth all the time from their 4:1 compression, so they advertise 3x (48GB/s). Different situations will allow for different levels of compression; it's not a fixed amount.

As I have said in other threads, the 20GB/s the 9700 gets doesn't take ATI's compression into account (Hyper Z III), so comparing those two numbers is useless. The 9700 probably gets at least 40GB/s of "effective" bandwidth from compression, but since Nvidia has just invented that "spec," no other company has given "effective bandwidth" (or whatever you want to call it) figures before.

So, the 9700 has 20GB/s and the GFFX has 16GB/s of real bandwidth,

and the 9700 has up to ??GB/s and the GFFX has up to 48GB/s of compressed/effective bandwidth. No conclusion as to which is better can be drawn from that; I think only the benches will show one to be better than the other.

Well said, my friend... It seems Nvidia is "MILKING" the specs a bit much, eh??
 
Good point, DaddyB. If Nvidia is claiming bandwidth based on compression, what would the 9700's 'effective' bandwidth be? 2x or 3x the raw? Anyone know?

-Rav
 
Hmm, this bandwidth compression stuff is annoying. We'll see how it ends up when we get our hands on them. It may be rated at 3:1 to leave some leniency for data that can't be compressed well. Average compression in real-world usage will probably fall between 3:1 and 4:1. The 48GB/s figure might even be a little conservative.
 
Is the GeForce FX supposed to have better IQ than past GeForce cards?
 
The final boost comes from a proprietary 4:1 lossless color compression scheme that effectively raises theoretical max bandwidth to 48GB/s.

Theoretical max, not a conservative estimate. Maybe the reason it isn't 16 x 4 is that it isn't used to compress everything, hence the "color compression" description.

PS: I have no basis for this, but I wonder if this compression is why GF cards' IQ isn't as good as the Radeons'. Did the GF2 use an earlier version of this compression?
 