
FRONTPAGE AMD Radeon HD 7970 Graphics Card Review

Ivy, he was asking about dry ice or liquid nitrogen cooling on it... He wants to know what sort of numbers it can put up if it's super-cooled.
 
Yeah, I noticed that myself a short time after; I was mixing it up with that other feature. But then I wondered: why is he asking about performance, when it doesn't matter for that technology? And then... "ah... it must be dry ice" (I'm no extreme OCer, so I'm not very confident there ;) ). Well, I simply explained both things; some people might be interested and not fully sure about them.
 
Hmm, the CPU scaling seems to be close to none in many cases, for games at least. Even as a super enthusiast, I wouldn't switch from Nehalem to SB, because it's just not worth it when you mainly need the performance for games and nothing else special. For most media center functions, even a baby CPU still runs fine, and the GPU helps with decoding too, so it's simply not an issue anymore. As for PCIe 3.0, I'm pretty sure it has close to no effect right now, no matter what CPU is used. It looks good on paper, but that's it. In the end, the most important thing is still a powerful chip architecture and good software; that's truly like 90% of the cookie.

Looking at these thorough tests, Nehalem flagship vs. SB flagship, there is only about a 20% difference in most cases, and for games close to no difference at all: http://www.hardwaresecrets.com/article/Core-i7-3960X-Extreme-Edition-CPU-Review/1429/16

Surely, for updating a PC, a single generation is not even worth looking at. I usually only update every four years anyway. However, I'm now in the unfortunate situation that my old PC died, so I have no replacement and no "test PC", which is important to me in case there are issues. So I guess I'll simply wait for Ivy Bridge and put a 7000 series Radeon in at that point. Having two working PCs is critical to me; otherwise, as soon as I hit an issue, I can't test anything and could be stuck with an unstable system and no effective way to pin the problem down. That's everyone's true horror, and that's why a backup machine is such a big win. On the other hand, handing the PC to a tech can cost several hundred dollars in my country; for that money I could just buy a new Radeon GPU, and I'm good enough to handle the work myself, as long as I have a second PC. ;) Besides, my friends and family all have either a laptop or a Mac Mini, so I can't test on their systems. ;) I'm the only "strong hardware" user in my whole family.

Not that I'm "conventional" - I don't even know that word... I deal with SFF, or I'll go build a super high-end tower someday, but I'm truly not interested in "standard"; it's just not interesting to me. I like to study and make impossible stuff possible... that's why we are OCers, or "impossible stuff builders"... at least many of us.

To come back to the main topic: yes, the 7000 series is definitely something every OCer should be truly eager to get their hands on. I'm no different, and I see it as an opportunity for a new system.
 
Looks like a monster card... cheers to AMD for making it interesting in the GPU segment.

And as usual, a top-notch review, and in record time. GJ Hokie!
 
I like it, although I'm not a fan of the reference card style, so I think I'll stick with the MSI 6970 I want until I can see a better-priced non-reference card available, which I know will not be until some time late next year.

Actually, from what I heard, AMD sent the chips to Asus, MSI, etc. early and told them they didn't have to launch with the reference design; they could pretty much do what they wanted.

Now, whether or not they decide to do this is up to them, but I don't think you will see that many reference designs this release.
 
Will the cards be available on the 9th, or will I be able to pre-order on the 9th?

Or could we have the possibility of pre-ordering the cards before the 9th? Maybe even this year?
 
I am wondering if anyone has a pair of them to test scaling in xfire. It would be very interesting to see what two of these brutes would do in some competition benchmarks.
 
Actually, from what I heard, AMD sent the chips to Asus, MSI, etc. early and told them they didn't have to launch with the reference design; they could pretty much do what they wanted.

Now, whether or not they decide to do this is up to them, but I don't think you will see that many reference designs this release.
I think you will see the opposite of this actually...

Normally, reference designs hit the store shelves first anyway. Second, I think that, at least for this launch, quite the opposite happened. I'm not sure board partners got much more lead time this go-around, if any (I also heard one company didn't receive the reference cards until AFTER the select group of reviewers got them).

As far as LATE next year goes, I would imagine we will see these come out shortly after release. "Late next year" means Q3 to me, and it's a bit silly to think that's when the first non-reference boards will come out. I would expect them in February to March, personally.
 
I am wondering if anyone has a pair of them to test scaling in xfire. It would be very interesting to see what two of these brutes would do in some competition benchmarks.

That will yield some pretty hefty benchmarks, I'm sure. AMD has really invested a lot of effort in maximizing CF performance from the 6800 series onward, much more than their green competitor has with SLI.
 
Hardwareheaven did CrossFire tests, and it crushed stuff. A good read, and they also have pretty artwork, though it looks like their ad sales department is owned by AMD, lol:
http://www.hardwareheaven.com/revie...rossfire-performance-review-introduction.html

We also have more than one at this point, if I'm not mistaken, so we may be doing our own results if we can get them in the same location.

As for PCIe 2.0 vs. PCIe 3.0 - stay tuned for further results in those departments. Some reviews are reporting no benefit, others are reporting benefits; it depends on how it's tested, and possibly also on the configuration of the test system. Some say a benefit is only seen in bandwidth-heavy GPU compute situations; others saw no difference at all in their tests.
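For anyone who wants to poke at this on their own rig: here is a minimal CUDA sketch (my own illustration, not from any of the reviews) of the kind of bandwidth-bound transfer that PCIe 3.0 could actually speed up. Time large host-to-device copies with the card in a 2.0 slot and then a 3.0 slot and compare GB/s; the buffer size and iteration count are arbitrary choices.

```
// Minimal sketch of a PCIe bandwidth test (assumes the CUDA toolkit).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256u << 20;  // 256 MiB per transfer (arbitrary)
    const int iters = 20;

    void *host = nullptr, *dev = nullptr;
    cudaMallocHost(&host, bytes);     // pinned host memory, needed to reach full PCIe speed
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // the bandwidth-bound part
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host->Device: %.2f GB/s\n", (double)bytes * iters / (ms / 1000.0) / 1e9);

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```

Games rarely stream anywhere near this much data per frame, which would explain why most gaming benchmarks see no difference between the two.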
 
I think they are a little AMD-biased. They pride themselves on "benchmarking practice", yet they turned on PhysX for the Batman testing. What better way to kill framerates on the Nvidia side?
 
I think they are a little AMD-biased. They pride themselves on "benchmarking practice", yet they turned on PhysX for the Batman testing. What better way to kill framerates on the Nvidia side?

Wow, I didn't look at that part. That is pretty bad testing methodology - if they wanted to show those results, they should have also presented the results without PhysX.
 
I think they are a little AMD-biased. They pride themselves on "benchmarking practice", yet they turned on PhysX for the Batman testing. What better way to kill framerates on the Nvidia side?

:D Anyway, about the point they make: do you even need PhysX nowadays, when Intel is delivering CPUs with alien power? In the end the result is what counts, and if Nvidia wants to hurt itself by overloading its own GPU... well, I guess they just wanted to make a point. My view is that the GPU in next-gen graphics has so much rendering to handle that it would be happy to hand the physics job over to the CPU, and CPUs really can do it. I just don't believe Nvidia when they tell us only a GPU can do it. In games the CPU usually handles AI and physics; that's what it's there for, in theory. Many years ago (C2D and older), CPUs had a much bigger impact on gaming performance because they had more work to do; nowadays it's pretty much GPU-limited (especially at high resolutions).
 
:D Anyway, about the point they make: do you even need PhysX nowadays, when Intel is delivering CPUs with alien power? In the end the result is what counts, and if Nvidia wants to hurt itself by overloading its own GPU... well, I guess they just wanted to make a point. My view is that the GPU in next-gen graphics has so much rendering to handle that it would be happy to hand the physics job over to the CPU, and CPUs really can do it. I just don't believe Nvidia when they tell us only a GPU can do it. In games the CPU usually handles AI and physics; that's what it's there for, in theory. Many years ago (C2D and older), CPUs had a much bigger impact on gaming performance because they had more work to do; nowadays it's pretty much GPU-limited (especially at high resolutions).

An Nvidia GPU with PhysX is way better than a CPU at physics-type work. Have you played a game that supports PhysX on an Nvidia card? It totally blows away anything the CPU can bring. Yes, it's a load on the GPU, but it's also something AMD doesn't even offer, so having it on while comparing to an AMD GPU is not fair and, IMO, takes away credibility. Because if they did that there, where else did they give the card an unfair advantage over the SLI GTX 580s?


(This is in regards to Hardware Secrets and is in no way implying anything at all towards the Overclockers.com review of the 7970.)
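Just to make the CPU-vs-GPU physics question concrete, here is a toy sketch (my own, nothing to do with PhysX's actual implementation): the same explicit-Euler particle update, written once as a plain CPU loop and once as a CUDA kernel. The function names and numbers are illustrative only.

```
// Toy comparison of CPU and GPU "effects physics" (debris, cloth points).
#include <cuda_runtime.h>

struct Particle { float3 pos, vel; };

// CPU version: a few thousand particles per frame is no problem.
void step_cpu(Particle *p, int n, float dt) {
    for (int i = 0; i < n; ++i) {
        p[i].vel.y -= 9.81f * dt;        // gravity
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }
}

// GPU version: one thread per particle. The win only shows up at the
// hundreds-of-thousands scale that GPU-accelerated effects aim for.
__global__ void step_gpu(Particle *p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        p[i].vel.y -= 9.81f * dt;        // same math, massively parallel
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }
}
```

Per-particle updates are independent, which is why this kind of work scales so well on a GPU once the particle count gets huge, while a CPU keeps up fine at modest counts. Neither side is "impossible" on the CPU; the particle counts and the per-frame budget decide which one makes sense.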
 
Havok should be able to do the same thing; spare CPU headroom could just as well be spent on Havok, although I'm truly no expert on this stuff.

They really should make something non-proprietary. It took a long time before Nvidia even partially gave up its proprietary hold on every single piece of it, which, as far as I know, kept devs from implementing anything like it on Radeon. AMD/ATI's statement was always pretty clear, though: they don't feel an urgent need for something like that, so they usually gave pretty little support to devs trying to implement something similar. I remember Nvidia also had other licensed software, and when MS moved from an Nvidia GPU to an ATI GPU in the Xbox, they had to pay license fees just to be able to run that kind of software on a Radeon. Nvidia really was mean with stuff like this; I'm glad they finally opened things up a bit and gave devs more access to implement stuff on Radeon.

I also don't know what you mean by "blow away" in particular; do tell, I'm interested. A game like Shogun Wars, with a pretty heavy physics load (an insane number of units), can still run successfully on a CPU. What kind of physics do you have in mind that can't be run on any CPU?

In the end I want the better hardware to win, not the one that's a wonder of software implementations. Besides, devs are always free to create open-source alternatives for Radeon, but apparently hardly anyone ever does, and I don't know why. Maybe because devs nowadays want everything shoved into their mouths, already warmed up and served on a silver platter. Seriously, where is the true dev skill? Are we now totally ruled and bound by the harsh rules of money and economics? Is that why we keep drifting away from PCs, because consoles simply bring in more bucks? And so we lose a lot of quality, because many devs don't even try to tune a game for good PC hardware anymore? Well... who knows.

It's certainly good that Nvidia is trying to pull devs toward its side, but it really has to work together with devs, and somehow with AMD's Radeon as well, not just promote itself. Even between competitors, working together in many areas brings more power to software tuning.
 
Havok should be able to do the same thing; spare CPU headroom could just as well be spent on Havok, although I'm truly no expert on this stuff.

They really should make something non-proprietary. It took a long time before Nvidia even partially gave up its proprietary hold on every single piece of it, which, as far as I know, kept devs from implementing anything like it on Radeon. AMD/ATI's statement was always pretty clear, though: they don't feel an urgent need for something like that, so they usually gave pretty little support to devs trying to implement something similar. I remember Nvidia also had other licensed software, and when MS moved from an Nvidia GPU to an ATI GPU in the Xbox, they had to pay license fees just to be able to run that kind of software on a Radeon. Nvidia really was mean with stuff like this; I'm glad they finally opened things up a bit and gave devs more access to implement stuff on Radeon.

I also don't know what you mean by "blow away" in particular; do tell, I'm interested. A game like Shogun Wars, with a pretty heavy physics load (an insane number of units), can still run successfully on a CPU. What kind of physics do you have in mind that can't be run on any CPU?

This will be my last post that has anything to do with this subject, so as not to derail the thread too much :chair:

In Mafia II, the physics (glass breaking, cars blowing up, clothes blowing, pretty much everything) with PhysX enabled were, IMO, so much more realistic than with just the CPU handling physics. I have two GTX 580s and only run one 1920x1080 monitor, so fps was not an issue with PhysX enabled; I tried it both with PhysX on the GPU and with just the CPU. GPUs blow away any CPU in these types of situations.
 
It surely is an interesting topic, but maybe we can continue it somewhere else, since we're basically focused on 7000 series stuff now ;). I wouldn't generally say that a CPU can't handle it, but there are exceptions; there is no rule without an exception.
 
Havok should be able to do the same thing; spare CPU headroom could just as well be spent on Havok, although I'm truly no expert on this stuff.

They really should make something non-proprietary. It took a long time before Nvidia even partially gave up its proprietary hold on every single piece of it, which, as far as I know, kept devs from implementing anything like it on Radeon. AMD/ATI's statement was always pretty clear, though: they don't feel an urgent need for something like that, so they usually gave pretty little support to devs trying to implement something similar. I remember Nvidia also had other licensed software, and when MS moved from an Nvidia GPU to an ATI GPU in the Xbox, they had to pay license fees just to be able to run that kind of software on a Radeon. Nvidia really was mean with stuff like this; I'm glad they finally opened things up a bit and gave devs more access to implement stuff on Radeon.

I also don't know what you mean by "blow away" in particular; do tell, I'm interested. A game like Shogun Wars, with a pretty heavy physics load (an insane number of units), can still run successfully on a CPU. What kind of physics do you have in mind that can't be run on any CPU?

In the end I want the better hardware to win, not the one that's a wonder of software implementations. Besides, devs are always free to create open-source alternatives for Radeon, but apparently hardly anyone ever does, and I don't know why. Maybe because devs nowadays want everything shoved into their mouths, already warmed up and served on a silver platter. Seriously, where is the true dev skill? Are we now totally ruled and bound by the harsh rules of money and economics? Is that why we keep drifting away from PCs, because consoles simply bring in more bucks? And so we lose a lot of quality, because many devs don't even try to tune a game for good PC hardware anymore? Well... who knows.

It's certainly good that Nvidia is trying to pull devs toward its side, but it really has to work together with devs, and somehow with AMD's Radeon as well, not just promote itself. Even between competitors, working together in many areas brings more power to software tuning.


That was uncalled for, don't you think? -hokie



I can't wait to see some tri/quad-fire benchmarks.
 