
Nvidia is only the hustler for old 3dfx tech


Denis

New Member
Joined
Dec 7, 2007
The unreleased 3dfx Voodoo5 with DDR and a quad GPU was not far off from this 8800. Sure, we have faster memory and a smaller manufacturing process now.
Around the time 3dfx released the Voodoo5 in 2000, that card had something like 82 GB/s of memory bandwidth, versus the 8800 series at about 84 GB/s.
Meanwhile, in all the time before the 8800 series, Nvidia never gave us a card that went past 60 GB/s of bandwidth. It took them a whole 7 years to give us what was sitting in the closet. Now they want to give us quad SLI? Why not just give us a quad GPU on one card like 3dfx was going to? Nvidia is just milking consumers slowly and nicely.
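For reference, peak theoretical memory bandwidth follows directly from memory clock, transfers per clock, and bus width. A quick sketch using commonly cited specs (treat the exact figures as assumptions; note that the usual Voodoo5 5500 numbers work out far lower than the 82 GB/s claimed above):

```python
# Peak theoretical bandwidth = effective clock * bus width in bytes.
# Specs below are commonly cited figures, not guaranteed exact.

def bandwidth_gb_s(clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak theoretical bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# GeForce 8800 GTX: 900 MHz GDDR3 (double data rate), 384-bit bus
gtx = bandwidth_gb_s(900, 2, 384)      # ~86.4 GB/s

# Voodoo5 5500: 166 MHz SDR, 128-bit bus per VSA-100 chip, two chips
v5 = bandwidth_gb_s(166, 1, 128) * 2   # ~5.3 GB/s

print(f"8800 GTX: {gtx:.1f} GB/s, Voodoo5 5500: {v5:.1f} GB/s")
```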
 
Why not just give us a quad GPU on one card like 3dfx was going to? Nvidia is just milking consumers slowly and nicely.
Well, what is the power requirement of an 8800 GTX? Multiply that by 4 (or 3 if you wish). Most power supplies wouldn't be able to handle that at all.

To top it off, what about the heat output of such a card? There's no way it's even plausible.

Also, you are comparing a never-released card's memory bandwidth (do we have benchmarks that prove this?) to an 8800 GTX's. Come on, those can't even be compared :rolleyes:.

nVidia may be taking it easy, but isn't the point of a business to make money?
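The multiplication above is easy to rough out. The ~150 W per GPU and 250 W for the rest of the system are illustrative ballpark assumptions, not measured specs:

```python
# Back-of-envelope: four 8800 GTX-class GPUs on one board, plus the rest of the system.
# Both wattage figures are assumptions (commonly quoted ballparks, not exact specs).

gpu_watts = 150          # assumed board power per 8800 GTX-class GPU
rest_of_system = 250     # assumed CPU, drives, motherboard, fans, etc.

for gpus in (1, 2, 3, 4):
    total = gpus * gpu_watts + rest_of_system
    print(f"{gpus} GPU(s): ~{total} W total draw")
```

At four GPUs that lands around 850 W for the whole box, well beyond what typical power supplies of the day could deliver.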
 
Well, what is the power requirement of an 8800 GTX? Multiply that by 4 (or 3 if you wish). Most power supplies wouldn't be able to handle that at all.

To top it off, what about the heat output of such a card? There's no way it's even plausible.

Also, you are comparing a never-released card's memory bandwidth (do we have benchmarks that prove this?) to an 8800 GTX's. Come on, those can't even be compared :rolleyes:.

nVidia may be taking it easy, but isn't the point of a business to make money?

QFT
 
I'm not one to defend the actions of GPU manufacturers, as they have been ripping us off for years and would easily crumble if anyone even tried to enforce antitrust laws in the U.S. Still, there are things to say about this.

AMD and Intel are always releasing CPUs with technology that Alpha had designed years ago (and even deployed, in some cases). You're not up in arms over this, are you?

Any company can design an insanely powerful processing unit that trumps anything that will be done in the next decade - but that does not mean it's even physically (forget economically) possible to manufacture, cool, power, or house. Do you think it was even possible to pull that much bandwidth from AGP at that time, or at any time for that matter?

Don't be ridiculous. A complaint like this stems from complete ignorance of how these semiconductor design houses actually work.

Either way, for a company in as serious financial trouble as 3DFX was shortly before the nVidia buyout, they should have been using their time more constructively than designing cards they KNEW would not even be feasible for nearly a decade.
 
That review states it has 10.4 GB/s of memory bandwidth. And of course NV is hustling old tech from 3DFX; that's a big reason why one company buys another. Whether they are literally using the same thing, or have taken the ideas and implementations and updated them, is what could be debated. If you look at reviews of NV and ATi cards since 3DFX was sold, I'd say there have been continuous improvements on any initial or inspiring tech. That's not to say they don't try to milk things, but that's a really big discussion, because you have to take more into account than just what's theoretically possible or even what can be made; you have to account for the market, competition, pricing and all that as well.
 
Clearly, it has been proven time and time again that memory bandwidth is not everything (as has already been mentioned). In some cases memory bandwidth is life; in others it is trivial. To see this, just look at 8800 GT and GTS performance: in less memory-intensive scenarios they are both on par with, or faster than, their GTX brother, while in memory-intensive situations (and at high resolutions) the GTX wins.
This is an important distinction, because graphics has gone through a lot of changes. In the early days, memory bandwidth and fillrate were everything; then shaders and polygon-pushing power became more and more important. In a shader-intensive environment, fillrate and memory bandwidth matter less and less, until you add anti-aliasing and anisotropic filtering along with higher resolutions. Today, things are much more segmented.
Games today are shader and geometry intensive; they benefit relatively little from raw memory bandwidth. However, the setup everyone wants is a 24" widescreen monitor at 1920x1200 with 4xAA/16xAF, which is hugely memory intensive. So today, memory bandwidth matters little as a measure of graphics horsepower, but it has become essential to powering the experience gamers want. Don't get the two mixed up.

Some version of the Voodoo5 may have had 82 GB/s 7 years ago or whatever, but that doesn't mean jack. No graphics card from that era, even one that cost into the thousands of dollars, would have been able to run today's games at anything faster than a few frames per hour (more likely not at all); it just wouldn't have the graphics capability and horsepower to do it. Nvidia and ATi have been solving the horsepower problem because that's what matters more to today's games. The hunger for memory bandwidth is relatively new, and I expect it to be addressed in successive generations such as G90 and whatever else ATi has planned (if anything) for the high end; Intel may also have something to offer here if they ever enter the high-end market.
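The "hugely memory intensive" claim about 1920x1200 with 4xAA can be roughed out with framebuffer arithmetic. The bytes-per-sample and overdraw figures below are illustrative assumptions, and texture traffic is ignored entirely:

```python
# Rough framebuffer-traffic estimate at 1920x1200 with 4x multisampling.
# Bytes per sample (color + depth) and overdraw are illustrative assumptions;
# texture fetches and blending would add substantially on top of this.

width, height = 1920, 1200
aa_samples = 4           # 4xAA stores and resolves multiple samples per pixel
bytes_per_sample = 8     # assumed: 4 B color + 4 B depth
overdraw = 3             # assumed average fragments touching each pixel
fps = 60

traffic_gb_s = width * height * aa_samples * bytes_per_sample * overdraw * fps / 1e9
print(f"~{traffic_gb_s:.0f} GB/s of raw framebuffer traffic")
```

Even this crude estimate lands in the double-digit GB/s range before counting textures, which is why bandwidth that looks generous on paper gets eaten quickly at high resolutions with AA.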
My point is this: fear not, because Nvidia and ATi know what is required of their hardware, and what is practical for it, far better than any of us do. Even if they haven't improved on the memory bandwidth supposedly available 7 years ago, they have exponentially surpassed the horsepower of GPUs from that era.
 