
FRONTPAGE AMD Radeon HD 7970 Graphics Card Review

In Mafia II, the physics (glass breaking, cars blowing up, clothes blowing, pretty much everything) with PhysX enabled was IMO so much more realistic than with just the CPU handling physics. I have two GTX 580s and only run one 1920x1080 monitor, so FPS was not an issue with PhysX enabled; I tried it both with GPU PhysX and with CPU only. GPUs blow away any CPU in these kinds of situations.

I agree 100%, PhysX when used to its full potential is quite amazing. The only reason it is not used to its full capacity is simply the fact that there is no installed base. Therefore developers see no benefit in putting in the extra effort to implement it in any meaningful way.

Look at Cryostasis; IMO one of the best survival horror games released this generation. Amazing tech, flawless PhysX implementation, yet its sales were poor. That is all that really matters to developers.
 
That's why I say we should finally get rid of proprietary stuff and try to set a shared standard. Btw: agreement isn't the issue, respect is.
 
hardwareheaven (not hardwaresecrets) said they ran Batman with PhysX on because everyone with an Nvidia card would run the game with PhysX on, so it was more of a real-life result.

I disagree with their approach and presentation; however, they did make it clear how the test was run, and that is important.
 
Hoki, I just wanted to drop in and say thanks for thinking about fah again!! I know you didn't have time to run it in the short lead but you thought about it, and I really think if we become known for always throwing in some fah benches on hardware we might just get a few extra page views. So here is to looking forward to the results (and hoping AMD is catching up to nvidia in fah!)
 
The thing is, turning PhysX off doesn't show you the max the CPU can do; it shows you the max Nvidia allowed the game designer to run on the CPU.
A modern top-end CPU can run a far better physics model than it gets credit for, because Nvidia seems to demand that all the PhysX-type eye candy be disabled when PhysX is turned off, even the effects that could run happily on the CPU.
 
Quote:
Hoki, I just wanted to drop in and say thanks for thinking about fah again!! I know you didn't have time to run it in the short lead but you thought about it, and I really think if we become known for always throwing in some fah benches on hardware we might just get a few extra page views. So here is to looking forward to the results (and hoping AMD is catching up to nvidia in fah!)

Just sorry I couldn't run it in time. I haven't run a GPU client in years, not since an 8800 GTX. If there are any tips or tricks, definitely PM them. I'll try to get some numbers in the coming week for you.
 
I know, I know....I'm slow :p

Good read Jeremy, nice work taking down my scores....those were leading the team for far too long :thup:
 
Quote:
The thing is, turning PhysX off doesn't show you the max the CPU can do; it shows you the max Nvidia allowed the game designer to run on the CPU.
A modern top-end CPU can run a far better physics model than it gets credit for, because Nvidia seems to demand that all the PhysX-type eye candy be disabled when PhysX is turned off, even the effects that could run happily on the CPU.

Thank you!
I mean, a CPU could do more than that, but Nvidia deliberately disables it. They are basically telling us that physics as good as PhysX is impossible on a CPU, and even if we had a one-million-core supercomputer they would still say the same. People should stay tuned for future CPUs: Ivy Bridge with its new transistors and whatever else comes; CPUs aren't sleeping. It is true that in a few games GPU PhysX may still shine, but it's really just a handful, and for the majority it's barely worth it, especially for people with no interest in such games (I'm an RPG gamer, so I have little interest in shooters). I'm also not sure how efficiently the PhysX code is actually executed, but one thing I'm sure about: CPUs can do more than they currently do, which is why the CPU has almost no impact on games anymore. Only the clocks really matter; the architecture is almost wasted, meaning most parts of the CPU aren't used. I'm sure Intel will build some true monster CPUs while AMD builds a monster Radeon, I guess; that's how it looks. Strong CPUs are currently underused by most games; it's overkill to even own one. Almost any other program will gain more from them than a game.

I'm not really a fan of either side, but in the current situation I'd rather support AMD's view, because CPU and GPU were always a team in the past, and Nvidia is slowly trying to break up a highly efficient and powerful team in order to make their GPUs look superior. No matter what, they have to work together on a shared standard that can easily be implemented by devs and that uses the CPU more effectively; it's not a useless part.

Performance-wise, in scientific terms, a current Sandy Bridge flagship can handle about 120 GFLOPS in double precision, while a 7970 will handle around 950 GFLOPS in double precision (the strongest single GPU right now). However, the GPUs used for PhysX usually aren't high end, and if we run everything on a single GPU, then offloading physics to the CPU could relieve the GPU by roughly 15-30%**, because that's the share of the load the CPU is able to take over (**much more than 15% when the GPU is weaker than the current Radeon flagship). I get the feeling this will increase in the near future, because it's Intel. As long as the physics can run on roughly 100 GFLOPS, it can run without the GPU, along with the heavy logic, and the GPU keeps more power for rendering. Even more powerful is to share the physics so the CPU is fully utilized and the rest is still handled by the GPU; there would simply be a slider to adjust how much of the physics load we hand over to the CPU, from 1 to 100% (if the CPU is overloaded, it will simply destroy performance, much like an overloaded GPU). Of course, the CPU still has the advantage of being the jack of all trades, while a GPU is always very hard to adapt to anything else, so the main focus has to be on getting the two GPU camps (Radeon/GeForce) onto a shared standard for this. Software that masters all that doesn't exist on this planet yet; it's somewhere on an unknown one.
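To picture how such a CPU-share slider would behave, here is a minimal sketch of the load-balancing arithmetic. The numbers are the rough figures from this post (~120 GFLOPS FP64 for a Sandy Bridge flagship, ~950 GFLOPS FP64 for a 7970); nothing here is real PhysX or driver code, just the idealized math of splitting a fixed physics workload between two devices:

```python
def balanced_split(cpu_gflops: float, gpu_gflops: float) -> float:
    """Fraction of the physics workload to give the CPU so that
    both devices finish at the same time (ideal load balance)."""
    return cpu_gflops / (cpu_gflops + gpu_gflops)

def step_time(work_gflop: float, cpu_frac: float,
              cpu_gflops: float, gpu_gflops: float) -> float:
    """Time per physics step when cpu_frac of the work runs on the
    CPU and the rest on the GPU; the slower side sets the pace."""
    return max(work_gflop * cpu_frac / cpu_gflops,
               work_gflop * (1.0 - cpu_frac) / gpu_gflops)

CPU, GPU = 120.0, 950.0          # assumed FP64 throughputs from the post
f = balanced_split(CPU, GPU)     # the "slider" position: ~0.11
gpu_only = step_time(10.0, 0.0, CPU, GPU)
shared   = step_time(10.0, f,   CPU, GPU)
print(f"CPU share: {f:.1%}, speedup vs GPU-only: {gpu_only / shared:.2f}x")
```

With these numbers the CPU can usefully take roughly 11% of the work off the flagship GPU, which is in the same ballpark as the 15-30% claimed above; the weaker the GPU relative to the CPU, the further right the slider can usefully go.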

Why isn't a CPU stronger than that? A huge share of its transistors is used for cache (billions!). But we will soon reach a point where we don't need that much cache anymore and can spend those transistors on raw computing performance instead. An 8-core Ivy Bridge with 3D transistors: I wonder about its computing performance. Perhaps 150-200 GFLOPS. Some might get a dual-CPU board: 300-400 GFLOPS? Especially for such people, an engine that can hand work over to the CPU is critical. Who knows, but the CPU is not weak, and it is much more adaptable to other tasks. Still, Intel will take their time and slowly walk up the ladder. Why? Because they can; there's no competition.

Because this is going too far off-topic (not really related to the review; too much CPU/gaming talk), I will continue the specifics here: http://www.overclockers.com/forums/showthread.php?p=7056694#post7056694
 
Awesome. Look at those memory clocks too. Over the next-highest result, that is nearly a 400 MHz core clock improvement over a GTX 580 (1180 vs. the next-highest 580 at 1560). These things are DX11 beasts!
 
Too sad Phil's processor didn't clock higher, else it would be completely devastating in DX11. A 1610 MHz memory clock: OK, not a world record, but extremely high, and the 7970 with its new 384-bit bus can make big use of it. I mean, that thing is on stock cooling; the processor/memory can still go up!

Only a short time after release, with only a few 7970s tested so far, and a superclocked 7970 already beats it by a large margin; I'm impressed. Also, the Radeon seems to work in quad CrossFire? The highest result is currently a 6970 quad setup. However, that will be beaten soon.

Granted, Unigine Heaven is a pretty demanding engine that makes heavy use of tessellation. But I still didn't expect the GTX 580 to be beaten this soon by those superclocked cards. Only a handful of 7970s have been tested so far.

Quote:
No more competition with HD 7970... the GTX 580 era finished...
 
Über Sad Face

I would love to own a card like this. Money isn't the issue; I can save up for it, no problem. However, I would need to remake my entire computer to handle the dang thing. Stats:
Phenom II 1055T 2.8 GHz X6
ASRock N68-VS3 UCC motherboard
AMD HD 6670 1GB graphics card
2x 4GB DDR3 1333 MHz RAM
450W power supply
Nvidia GeForce 7025/630a chipset

Yeah, I'm good on RAM, though maybe I need some with a better speed? I need a better case, and a power supply. PS!!! Hint: it's a lot harder to swap out parts in a prebuilt gaming comp than to just build your own from the case up. :bang head
 
If you're serious that money isn't an issue, you might want to just sell the whole prebuilt unit and do exactly that: build from the case up. :thup:
 
Quote:
Too sad Phil's processor didn't clock higher, else it would be completely devastating in DX11. A 1610 MHz memory clock: OK, not a world record, but extremely high, and the 7970 with its new 384-bit bus can make big use of it. I mean, that thing is on stock cooling; the processor/memory can still go up!

Only a short time after release, with only a few 7970s tested so far, and a superclocked 7970 already beats it by a large margin; I'm impressed. Also, the Radeon seems to work in quad CrossFire? The highest result is currently a 6970 quad setup. However, that will be beaten soon.

Granted, Unigine Heaven is a pretty demanding engine that makes heavy use of tessellation. But I still didn't expect the GTX 580 to be beaten this soon by those superclocked cards. Only a handful of 7970s have been tested so far.

Quote:
No more competition with HD 7970... the GTX 580 era finished...

To be fair, when people with ATI cards run Heaven, they disable tessellation in their drivers to get a much higher score.
 
Quote:
To be fair, when people with ATI cards run Heaven, they disable tessellation in their drivers to get a much higher score.
:thup:

Absolutely. That's a HUGE difference, since you can adjust the level of tessellation in the benchmark.
 
Quote:
To be fair, when people with ATI cards run Heaven, they disable tessellation in their drivers to get a much higher score.
I dunno, what is it you wanted to tell me?

After all, they've got a little pride left, really really + really!

The 7000 series can indeed take a hard punch on tessellation; they are strong at that. The 6000 series, however, will surely be brought to its knees at some point. The Radeon 7000 isn't tuned for pointless performance ratings, because it uses an architecture that handles demanding settings much better, leaving the most powerful parts unused when that stuff isn't running. Sure, the rating still goes up, but not nearly as much as on a GeForce card, so it's not actually an advantage. On some of those performance ratings they may get several thousand FPS in some cases, and then they tell me their CPU is limiting; I mean, yes, that is indeed sad.

Whatever.
 
Quote:
I dunno, what is it you wanted to tell me?

After all, they've got a little pride left, really really + really!

The 7000 series can indeed take a hard punch on tessellation; they are strong at that. The 6000 series, however, will surely be brought to its knees at some point. The Radeon 7000 isn't tuned for pointless performance ratings, because it uses an architecture that handles demanding settings much better, leaving the most powerful parts unused when that stuff isn't running. Sure, the rating still goes up, but not nearly as much as on a GeForce card, so it's not actually an advantage. On some of those performance ratings they may get several thousand FPS in some cases, and then they tell me their CPU is limiting; I mean, yes, that is indeed sad.

Whatever.

You mentioned that the card did well with the tessellation in Heaven. What I'm telling you is that when people benchmark Heaven with an ATI card, they turn off tessellation in their drivers (Nvidia doesn't have this 'feature'), thus giving ATI cards a much higher score, since they don't have to render the benchmark's tessellation but Nvidia does.
 
To be clear, Janus is talking about competitive benchmarking. When I run Heaven in a review, CCC is set at its default settings, with no artificial tessellation manipulation.
 
I know what he means; I understood it from the beginning, but it's cheating, that's what I call it. I said I was unable to understand it because I didn't actually expect such benchers to use dirty tricks, and I'll simply leave it at that. The correct thing is to leave it at default, meaning it should run tessellation at the same level as Nvidia does. It's not a bad option to be able to do such fine-tuning (an immense gain in real situations, when you want to get the most out of the card), but sadly it allows for unfair behavior.

However, with or without tessellation, the 7000 series will clearly beat the GTX 580 in the long term, because neither the games nor the drivers are fully optimized yet; the hardware is simply too young at this point. It is already winning by 20% (overall) in real situations, but the gap will grow even wider, and surely no "cheating" is needed; there is no need for it, since its tessellation is superior to the GTX 580's. Even allowing so much fine-tuning is a sign of its strength. AMD/ATI was always the forerunner in tessellation. Only with the Fermi architecture did Nvidia really pump massive effort into it, even stomping the 6000 series on raw performance. However, the 7000 series will take the throne back, continuing what AMD/ATI started with its very tessellation-capable consumer cards.
 