
GTX470 and GTX480 Clocks ?

Big announcement:
"We're canceling fermi but the next one is EVEN BETTER and will be released in 3Q 2010! So who cares? It'll be awesome!"
 
lol I guess it doesn't say whether it's a good or bad announcement :p
 
As I've said myself, it's "potentially" and "if they make target clocks with the full 512 SPs."

I stand by the guesswork. A GTX 480 - assuming they can make clocks, and assuming all 512 shaders are working - will perform close to a 5970. Whether Nvidia can actually produce the cards is another matter.

Yes but what you're doing is providing a guess for the maximum theoretical performance and insinuating that will be its practical performance.

If this was how it worked, the 4870X2 would be faster than the GTX295 in everything, comparing raw theoretical GFLOPS performance.
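To put rough numbers on that, here's a minimal back-of-the-envelope sketch (Python, using the commonly published spec-sheet figures and the usual marketing flops-per-clock assumptions, not measured throughput) showing why the 4870X2 wins on paper but not in games:

```python
# Back-of-the-envelope theoretical peak GFLOPS, spec-sheet style.
# HD 4870 X2: 2 GPUs x 800 stream processors x 750 MHz x 2 flops/clock (MAD)
# GTX 295:    2 GPUs x 240 shaders x 1242 MHz shader clock x 3 flops/clock (MAD+MUL)

def peak_gflops(gpus, shaders, clock_mhz, flops_per_clock):
    """Theoretical single-precision peak in GFLOPS."""
    return gpus * shaders * clock_mhz * flops_per_clock / 1000.0

hd4870x2 = peak_gflops(2, 800, 750, 2)    # ~2400 GFLOPS
gtx295   = peak_gflops(2, 240, 1242, 3)   # ~1789 GFLOPS

print(f"HD 4870 X2 peak: {hd4870x2:.0f} GFLOPS")
print(f"GTX 295 peak:    {gtx295:.0f} GFLOPS")
print(f"On paper: {hd4870x2 / gtx295:.2f}x")  # ~1.34x, which games never show
```

On paper that's a ~34% lead for the 4870X2, yet in practice the GTX 295 wins plenty of benchmarks, which is exactly the gap between theoretical and practical performance.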

The guesswork is a bit too sensationalist. Let me put it to you in another way:

"FERMI could be so fast that ATI will only be able to compete by releasing the 6870".

Now, even though ATI will be bringing out the "Northern Islands" GPUs later this year, FERMI was specced and finalised long before the 5800 GPUs were released. At this point Nvidia can only tweak clock speeds or cut down the cores; they can't, and haven't, gone back to the drawing board with it. FERMI is set to run as fast as it was specced in Q4'09, or slower.

Remember, FERMI is causing Nvidia problems. This points to two possible eventualities: either FERMI is very fast but very expensive and supply is very limited, or we will see a cut-down version.
 

All true, of course. Bear in mind I wrote that little piece before the 5970 was released and before numbers were even available. It scaled just a tiny bit better than I was expecting, but still only beat the GTX 295 by 40% (IIRC) at 2560x1600, and about 30% at 1920x1200. If Fermi makes 512 SPs, that's 113% more shaders than a GTX 285, and we already know a pair of those in SLI beat a 295. It's still 7% more shaders than a GTX 295, and there *should* be no dual-gpu penalty attached. The conclusion I drew (and still draw) from all that?

If Fermi makes clocks, it'll come close to, or beat, the 5970. If it doesn't, it's just gonna be a boring 295-on-a-card. I didn't base my conclusion on anything other than extrapolation from current-generation performance with a sprinkling of speculation. I've never claimed anything more.
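For what it's worth, the shader-count arithmetic behind those percentages is just this (a quick sketch assuming the full 512-SP Fermi against the published GT200 counts):

```python
# Shader counts behind the estimate above.
# Published: GTX 285 = 240 SPs, GTX 295 = 2 x 240 = 480 SPs.
# Assumed:   full Fermi (GTX 480) = 512 SPs.

gtx285, gtx295, fermi_full = 240, 480, 512

print(f"Fermi vs GTX 285: {fermi_full / gtx285 - 1:.0%} more shaders")  # ~113%
print(f"Fermi vs GTX 295: {fermi_full / gtx295 - 1:.0%} more shaders")  # ~7%
```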
 

I see your point, but there seem to be some inconsistencies with past trends. We all want great performance parts, but maybe you're just jumping the gun a bit?

Nvidia will be aiming to double performance of the original GTX280, not the GTX285. With this, also bear in mind the benchmarks of the 280 vs the 9800 GX2 - link. Just as SLI and crossfire do not lead to linear performance gains, the same also applies when increasing the number of shader cores.
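As a toy illustration of that non-linear scaling point, here's a minimal Amdahl-style sketch; the 0.75 "shader-bound fraction" is completely made up for illustration and isn't a measurement of any real card:

```python
# Toy Amdahl-style model: only the shader-bound part of frame time speeds up
# when shader throughput goes up; setup, ROPs, memory bandwidth, CPU and
# driver overhead do not. The 0.75 fraction is an assumption, not data.

def relative_fps(shader_ratio, shader_bound_fraction=0.75):
    """Estimated overall speedup from multiplying shader throughput by shader_ratio."""
    return 1.0 / ((1.0 - shader_bound_fraction) + shader_bound_fraction / shader_ratio)

print(f"2.00x shaders -> ~{relative_fps(2.00):.2f}x FPS")  # ~1.6x, not 2x
print(f"2.13x shaders -> ~{relative_fps(2.13):.2f}x FPS")  # the GTX 285 -> 512-SP Fermi jump
```

In other words, even a big jump in shader count buys well under a proportional gain once the rest of the pipeline stops scaling with it.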

The idea of a GTX 480 beating a 5970 is just a bit optimistic. I wouldn't be surprised if the GTX 480 ends up a little slower but noticeably cheaper than the 5970.

It will also be interesting to see how production of the GTX 400s is affected, since Nvidia is dependent on TSMC as well.
 

In the case of the GX2 vs the 280, there are a few things to bear in mind. The GX2 has 256 SPs compared to 240 for the 280. The 280 also has no performance quirks related to being SLI-on-a-card, which is the reason I even bothered upgrading from the GX2. WoW microstutter (and regular stutter even) went away completely. Those benches were also on launch drivers, 177.34, and the whole GTX series has seen vast improvements since then, while the GX2 has basically stood still.
 

How much performance are you talking about here? I don't disagree with you, but AFAIK the GTX 280 didn't really pull ahead of the GX2; the GX2 just reached EOL and was outdated in areas other than performance.
 

I'm just going by numbers I have seen in patch notes over time. 177 to 192+ series drivers have had countless little "up to 18% improved performance in (whatever game)" sprinkled around. Nothing I can quantify, of course, but gains nonetheless. If I get bored, I'll install those 177's and do some benches, then switch back to the most recent and run them again, just to see. Getting a little off topic here, though :beer:

edit: You know, I'm going to do it now with Crysis and Far Cry 2. I'll start a thread for it.
 
It could or it couldn't. I just found it on the net; I don't know how truthful the site or author is.
 
I think the box is real, but it may have been made up to get some exposure for that no-name company. I've never heard of the "Colorful" name before in the gfx card arena.
 
I need to see how that thing does F@H.

Apparently Nvidia completely re-did the geometry shader engine and decoupled the clocks (not 100% sure). They said geometry performance would be 5-10x higher than GT200, and that is saying something for molecular dynamics.
 
Should be good for fah, that's kind of the point of fermi after all. It's not madness, it's SCIENCE!
 
Who do you think is going to be doing all this science in 20 years?


A generation of gamers will transcend into scientists and scholars. Soon the only thing science will be is simulation and computation, as physical experiments become either impossible, already done, or unsolvable in finite time.

Can't wait.
 
But besides the science this GPU is supposed to be able to do, I want more info on power consumption and thermals. Since I pay the bills for my electricity, and I also have to keep my GPU cool while running CUDA work, I want to know whether this GPU will be excessive on either account. If it's a power-gulping nuclear furnace, I won't touch it with a 10-foot pole; the science will just have to do without me jumping on the Fermi bandwagon, and I'll make do with my last-gen GTX 260.

I really can't wait until some benches on this are released. But I wonder if the lack of leaked benches on this GPU is portending problems like the ones the ATI R600 saw when it was being developed: late, hot, difficult to manufacture, and relatively poor performance compared to the initial hype. I guess the future will tell all. ;)
 