AMD Radeon HD 7970 Graphics Card Review

Welcome to another AMD-releases-new-architecture review. Hopefully this one will go a bit better than the last one. Bulldozer was a disappointment to many. While I still hold it’s a solid CPU for the price point AMD set (but not current over-inflated retail prices), it definitely wasn’t the jump over Thuban everyone wanted it to be.

Enter Southern Islands and Graphics Core Next (GCN), a fundamental change to AMD’s approach to graphics processing. Will it struggle like AMD’s last fundamental change or will it rise above and be crowned the new king of graphics cores? Let’s find out, shall we?

Specifications, Features, Cooling & Power Consumption

This will be a bit shorter than you’re used to. We have the full info going over the architecture in detail and I’d love nothing more than to pontificate on its virtues, but unfortunately we’ve had very little lead time for this review. Being right before Christmas and, you know, having a family doesn’t lead to excessively detailed reviews when you only have a piece of hardware and info for six days (including one to type the review).

Getting right to it, there are going to be three iterations of the Southern Islands product line. All of them use GCN. The GPU we’ll be looking at today is code-named “Tahiti”, the highest-end Southern Islands part. The two to come in the future are “Pitcairn” and “Cape Verde”.

Southern Islands Line

Graphics Core Next, or GCN, is AMD's latest and greatest GPU architecture. It is rated for PCI-e Gen3 (which comes with its own set of issues, as you'll see later), is produced on a 28 nm process and has advanced power management. For a superb comparison of GCN against AMD's former VLIW4 architecture – one that, frankly, I just didn't have time to write between getting this info and now – check out this piece at Anandtech.

Eyefinity 2.0 comes with a new monitor orientation option that's not necessarily practical but interesting to have (how many people have room for 5 x 1 landscape monitors?). New bezel compensation is coming down the road (ETA February 2012) that will support compensation across different-sized monitors.

They’re also introducing Eyefinity with HD3D. You’ll probably need some insane GPU power to produce 3D at very high resolutions, but they’re laying the foundation for the future.

They are also pushing the computing ability of this GPU, which should be quite a bit stronger than their former VLIW4 Cayman architecture. With such a short lead time, we’ve chosen to focus on how this thing will bench and run games, but down the road we’ll look at how it computes, especially with Folding@Home.

Southern Islands Features

Here we have a quick specs and features overview. The HD 7970 has 2048 stream processors. AMD calculates this GPU as having over 3.5 TeraFlops of computing power, which eclipses its HD 6970 predecessor. It also comes with 3 GB of GDDR5 RAM on a 384-bit bus – and the ability to drive up to six monitors, which you'll need to take advantage of all that video RAM.

For features, it comes native with four outputs – one DVI, one HDMI and two mini-DisplayPort. Yes, that’s only four outputs, but AMD is partnering with manufacturers and will be coming out with a splitter that plugs into one of those mini-DP outputs and will allow you full use of your six-monitor potential.

Like the HD 6970 before it, the HD 7970 also has a dual BIOS switch, with one protected stock BIOS and another for you to flash as you see fit. I was on pins and needles when I flashed a GTX 580 to get some more voltage control because there was no coming back if I messed it up. This makes that process worry-free. Now all we need is something to control voltage, which doesn’t exist yet!

The cooler is reported as having an improved fan design – larger, which will produce higher CFM and be quieter. Larger? Check. Higher CFM? Probably. Quieter? Um…it’s a squirrel cage fan in a GPU. They can only get so quiet. While CCC does keep the fan under control and you can barely hear it when benchmarking and gaming (wait until that voltage control I mentioned…), when you crank it up you’ll know it’s there. So will your family and your dog if they’re in the same room.

HD 7970 Quick Specs

7970 Features Overview

The heatsink definitely does a good job though, even when running quietly under CCC control. As with previous reviews, for temperature testing the fan was set at 75%, which is moderately loud, but not enough to drive you out of the room; it’s reasonable for gaming use. Temperatures were measured and normalized to 25 °C ambient temperature.

GPU:        6970    6990    X580    7970
Temp Idle:  33 °C   38 °C   29 °C   31 °C
Temp Load:  51 °C   69 °C   57 °C   61 °C
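For the curious, normalizing to a fixed ambient is simple arithmetic. Here is a minimal sketch in Python, assuming a plain linear offset (the usual approach for charts like this; not necessarily the exact method used here):

```python
# Minimal sketch of ambient normalization. Assumes a simple linear
# offset; not necessarily the exact method used for these charts.
REFERENCE_AMBIENT_C = 25.0

def normalize_temp(measured_c: float, ambient_c: float) -> float:
    """Shift a measured temperature to the 25 °C reference ambient."""
    return measured_c - (ambient_c - REFERENCE_AMBIENT_C)

# Example: a 63 °C load reading taken in a 27 °C room normalizes to 61 °C.
print(normalize_temp(63.0, 27.0))  # 61.0
```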

It's not the coolest GPU on the block, but it does manage to stay close to the Sparkle X580, which has an aftermarket Arctic cooler affixed from the factory. While it doesn't run as cool as the HD 6970, it's also much more powerful, so you're sacrificing a few degrees in exchange for performance – an acceptable trade-off for most overclockers.

As far as fan noise goes, it's reasonable. Squirrel cage fans are always going to be loud when you ramp them up to four thousand or so RPM. I'll never understand the people that always complain about loud squirrel cage fans on GPUs. Aren't you used to it yet? Do you think physics is going to change all of a sudden?

The thing is, when running games and benchmarks, even overclocked, the fan barely had to spin up to keep temps reasonable. So while it is definitely loud if you manually set it to 75 or 100%, if you let CCC control it you might hear it, but it won't be anywhere near loud.

AMD introduced PowerTune with the HD 6900 series; it lets you allow the GPU up to 20% more board power than stock, giving it some breathing room without actually being 'voltage' control. You can also turn it down 20% if you feel the urge.

PowerTune served us well for this GPU, for without it there would have been no added current capacity at all. Hopefully MSI (Afterburner) or Sapphire (TRIXX) will release updated versions of their software soon so we can show you what this thing can do with a little extra juice.

Power Tune
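Conceptually, PowerTune is a board power cap: the driver estimates power draw and steps the GPU down through its P-states whenever the estimate exceeds the limit, and the slider simply scales that limit. A rough illustration of the idea (this is not AMD's actual algorithm; the P-state table and linear power model are invented for the example):

```python
# Rough illustration of a PowerTune-style power cap. NOT AMD's actual
# algorithm; the P-state clocks and linear power scaling are made up.
BASE_TDP_W = 250.0
P_STATE_CLOCKS_MHZ = [925, 800, 600, 300]  # hypothetical, highest first

def allowed_clock(estimated_power_w: float, slider_pct: int) -> int:
    """Return the highest clock whose estimated power fits under the cap."""
    limit_w = BASE_TDP_W * (1 + slider_pct / 100.0)  # +20% -> 300 W
    for clock in P_STATE_CLOCKS_MHZ:
        # crude assumption: power scales linearly with clock
        if estimated_power_w * clock / P_STATE_CLOCKS_MHZ[0] <= limit_w:
            return clock
    return P_STATE_CLOCKS_MHZ[-1]

print(allowed_clock(295.0, 0))   # 600 -> throttled under the stock 250 W cap
print(allowed_clock(295.0, 20))  # 925 -> +20% raises the cap to 300 W
```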

AMD is pushing power efficiency heavily, from sipping power at idle to loaded power consumption. Their GPUs have progressively dropped toward using almost zero power at idle. This GPU will remain in the 'normal' idle range when it engages your screensaver (mine is 'blank'), but will slip into 'Long Idle' when it powers the monitor down, reducing the GPU's draw to a mere 3 W.

Zero Core Tech

Lower Long Idle Power

Multi-GPU Zero Core

So, what does that translate to in real-world use? Idle wattage to make even the greenest gamer proud. These numbers are total system wattage as read via a Kill-a-Watt at the wall, as we don't have the capability of measuring GPU wattage directly.

GPU:             6970    X580    6990    7970
Idle:            97 W    118 W   112 W   89 W
GPU Load:        315 W   443 W   496 W   352 W
GPU & CPU Load:  371 W   496 W   557 W   408 W

When the GPU drops into Long Idle, the HD 7970's idle wattage goes down ~6 W, from 89 W to 83 W. That isn't reflected in the chart because I didn't test the other cards there to see if they had any improvement when they shut the monitor off. What's even better about this Long Idle feature is when using CrossFireX. Until they are needed, your extra GPUs will stay powered down in the Long Idle state, sipping a mere 3 W each. So until you engage a load that needs them, you can have three more HD 7970s in your box pulling only nine additional watts. That's quite impressive.
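Since these are wall readings, the GPU's contribution is a delta between system states rather than a direct measurement, and the ZeroCore CrossFireX math is as simple as it sounds. A quick back-of-envelope using the figures above:

```python
# Back-of-envelope math from the Kill-a-Watt figures above. These are
# system-level deltas, not direct GPU measurements.
idle_w, gpu_load_w = 89, 352
print(gpu_load_w - idle_w)   # 263 W -> rough cost of loading the HD 7970

# ZeroCore: each additional long-idle card sips ~3 W
long_idle_per_card_w = 3
extra_cards = 3
print(extra_cards * long_idle_per_card_w)  # 9 W for three idle CrossFireX cards
```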

When the GPU is loaded it comes in above the HD 6970, but almost a full 100 W below the GTX 580, which isn’t too shabby at all. Considering the performance, this is one efficient GPU. 28 nm shrouded in strong power management looks good on AMD.

AMD is also practically begging you to overclock this GPU (though overclocking still isn't covered under warranty, of course). Stock speed on these is 925 MHz. They seem to think 1 GHz is not going to be too hard. We'll find out soon enough – the new version of CCC goes up to 1125 MHz, if the GPU can make it that far.

Overclocking Headroom
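Some quick arithmetic on what those clock targets mean in percentage terms (my math, not AMD's numbers):

```python
# Headroom math on the quoted clocks (my arithmetic, not AMD's numbers).
stock_mhz, amd_easy_mhz, ccc_max_mhz = 925, 1000, 1125
print((amd_easy_mhz - stock_mhz) / stock_mhz * 100)  # ~8.1%  -> AMD's "easy" 1 GHz
print((ccc_max_mhz - stock_mhz) / stock_mhz * 100)   # ~21.6% -> CCC's slider ceiling
```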

The comparison between the HD 6970 and HD 7970 is not really much of a comparison. The HD 6970 wasn't top dog when it came out, failing to beat the already-king-of-the-hill GTX 580. It is a strong GPU, priced perfectly for its target market, and it scales very well in CrossFireX, easily reaching 85-90% improvement over a single card. It was not, however, the best single GPU. That title resided with NVIDIA. We'll see later whether AMD has stolen the champion's belt.

The specifications have been improved significantly. The previous slide said the GPU is calculated as having over 3.5 TeraFlops of computing power, and this one shows just what 'over' means, stating 3.79 TFLOPS. The HD 7970 comes with the same number of ROPs, 32 more texture units and 512 additional stream processors.

6970 vs. 7970
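That 3.79 TFLOPS figure falls straight out of the shader count and clock, assuming the standard one fused multiply-add (two FLOPs) per stream processor per cycle:

```python
# Single-precision peak: SPs x 2 FLOPs (one fused multiply-add) x clock.
stream_processors = 2048
flops_per_sp_per_clock = 2
core_clock_ghz = 0.925
peak_tflops = stream_processors * flops_per_sp_per_clock * core_clock_ghz / 1000
print(peak_tflops)  # 3.7888 -> the "3.79 TFLOPS" AMD quotes
```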

It clocks in at 925 MHz core speed, 45 MHz more than its predecessor. Memory clock speed remains the same, but there is more bandwidth available thanks to the 384-bit memory interface (over the previous generation's 256-bit).
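Assuming the stock 1375 MHz (5.5 Gbps effective) GDDR5 both cards ship with, the wider bus alone accounts for the bandwidth jump:

```python
# Peak memory bandwidth = bus width in bytes x effective data rate.
def bandwidth_gb_s(bus_width_bits: int, effective_gbps: float) -> float:
    return bus_width_bits / 8 * effective_gbps

print(bandwidth_gb_s(384, 5.5))  # 264.0 GB/s -> HD 7970
print(bandwidth_gb_s(256, 5.5))  # 176.0 GB/s -> HD 6970
```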

Stock GPUz

As mentioned, it is produced on a 28 nm process and weighs in at a very hefty 4.31 billion transistors. We'll have to assume that's accurate and not edited down by, oh, about 0.8 billion or so like Bulldozer. {Cough.} It also does all of this with a 250 W power rating identical to the HD 6970's, which is impressive in itself and due in no small part to said 28 nm process.

The Card

AMD has updated the look of their GPUs again and these look even better than last generation. They’re glossy red and black, so it’s a minor pain to photograph, but the look is quite stunning. See for yourself!

AMD HD 7970

By now you know once it starts taking photos, my camera doesn’t like to stop. If you want to see a bunch more photos, feel free to click through.

The PCB is black (contrary to some leaked photos I've seen out there with a red PCB), cleanly manufactured with neat soldering throughout.

Back of 7970 PCB

Along with the identical 250 W TDP come identical power connectors – one 8-pin and one 6-pin PCI-e connector. On the right you can see the dual BIOS switch. It's small and slightly recessed, so the chances of accidentally tripping it are very slim. There are also dual CrossFire connectors for hooking up other GPUs.

Power Connectors

7970 Dual BIOS Switch

Of course, it's always fun to compare generations. The PCB is very nearly the same length as the HD 6970's. The card in the left photo looks shorter only because the cooler slopes at the end (as you can see above on the left). Turning them over shows they are very close to the same length. Surprisingly, they did away with the backplate. I actually liked that feature, as it kept components on the back of the PCB nice and cool.

6970 vs. 7970

I liked the prior generation's looks, but they've done better this time around with a more streamlined, less boxy appearance. Though it's not easy to photograph, the HD 7970's glossy sheen looks good and I actually prefer it to the matte black reference HD 6970 design.

Please accept my apologies for the lack of bare card photos. We were barred from removing the cooler prior to testing due to the proprietary thermal interface material and the slim post-testing time was spent compiling results and writing.

Test Systems

There are, unfortunately, two test setups involved in testing this GPU. Our editor splat went to the AMD tech briefing on this GPU in Austin, TX earlier this month, and AMD understandably (if mistakenly) shipped the card to him. The stock benches were run on the same setup as all other benches in the charts you'll see below (consisting of an i7 2600K at stock and DDR3-2133 memory). The overclocked HD 7970 results were not.

Unfortunately, splat's motherboard doesn't like him any more and refuses to overclock, so the GPU was shipped to me (plus I have three monitors for Eyefinity). The drawback is that the HD 7970 does not function in my P8P67 WS Revolution, the only socket 1155 motherboard in my possession. You can see our dilemma.

Because we had six days to work on this, we did the best we could for our readers. I took the Sandy Bridge-E socket 2011 system and hampered it to conform to the test parameters, running with four cores + HT active and dual-channel DDR3-2133 RAM at stock i7 2600K speeds. Clock-for-clock, the two systems should be as nearly identical as you can make them. After testing, benchmarks between them came very close to one another. There was one exception to the rule – 3DMark 11 just loves SNB-E for some reason. Aside from that, they were all basically within the margin of error.

This is the PCI-e Gen 3 specification problem I mentioned above, and I'm hoping ASUS will be able to address it with a UEFI update. They are working on it and, assuming it's addressed, I'll either verify the numbers are the same or correct them if necessary. For now, just note that we did the best we could in the time allotted; the difference, if any, will be minor.

Overclocking

This card clocks like a champ. The only sort of voltage ‘control’ available to us was AMD’s PowerTune, which added +20% of…something. What that is I’m still not certain; doubtful it’s 20% core voltage, as that would get really hot, really quickly.

Author’s Note: Just to clarify, that is a tongue-in-cheek response to not knowing precisely how PowerTune affects voltages on the card. I do fully understand PowerTune adjusts the card’s P-states to allow it to operate at max TDP constantly. The question is how that translates to voltage adjustment, which we can’t know for sure.

Regardless, that 20% allowed us to take this GPU all the way to the maximum available clocks in Catalyst Control Center. Interestingly, where memory clocking wasn't the 6970's strongest area, the HD 7970 seems to enjoy overclocking there too.

Overclock Test - CCC Maxed Out

Without batting an eye, the HD 7970 clocked right up to a completely stable 1125 MHz core speed and 1575 MHz memory speed. I can’t wait to get voltage control on these things. If it does that at stock, imagine what we can do with some more voltage available!

As an interesting side note, with a reduced-capability SNB-E system and only the GPU overclocked, that score would be third place on HWBot for 3DMark 11 – Extreme. It's not popular like the Performance test, so there isn't a lot of competition there, but it's amusing how easily the HD 7970 beats out the other results. If this is how it's going to perform in our benchmarks and gaming tests, we're in for a treat.

Performance

We’ll measure two categories of performance – 3D Benchmarking and Gaming.

3DMark Benchmarks

3DMark03 is old. Other members of our staff practically beg me to get rid of it, but despite its obsolescence this old thing still does a great job of measuring GPU horsepower. It scales as it should with GPU power and even multiple GPUs, assuming CPU overclocks (or lack thereof) remain the same.

3DMark03

The stock HD 7970 just edges out an overclocked GTX 580. When overclocked, it even takes out a stock HD 6990. We're off to a great start.

Up next is 3DMark06; it's a newer benchmark that is strongly bound by CPU clocks but does show some scaling.

3DMark06

Very interesting. 06 goes the other way, with the GTX 580 winning out. Overclocking the cards yields very little improvement thanks to the aforementioned CPU-bound nature. It seems the GTX 580 is a bit stronger in DirectX 9. The HD 7970 definitely shows a fair bit of improvement over the previous generation. I'm honestly not sure what happened with the HD 6970 IceQ here, as I didn't review that one.

Moving up another DirectX level, we have DX10-based 3DMark Vantage. This one loves some CPU as well but still scales well with GPU power.

3DMark Vantage

Back on top here. The stock HD 7970 beat the stock GTX 580 and they maintained a similar margin overclocked. Again, when overclocked the HD 7970 outdoes its dual-GPU cousin, the HD 6990.

3DMark 11

Focus more on the stock results here. We see the stock HD 7970 beats the stock GTX 580 by almost 5%. The overclocked number is heavily skewed by the LGA 2011 platform we mentioned earlier, hence the ** by the result. Regardless of platform, scoring 9123 in 3DMark 11 with a single GPU is quite impressive.

Last in the benchmarking section is HWBot's Heaven benchmark. This is more of a game test than the others in this section, since it runs on an actual game engine, but it's scored like a benchmark.

HWBot Heaven Benchmark

Again the HD 7970 struts its stuff, with the stock result beating the overclocked GTX 580. Overclocked, it doesn't catch the HD 6990, but it does completely separate itself from the rest of the field.

Gaming Performance

Gaming is, of course, why a lot of overclockers become overclockers in the first place. You need every bit of extra FPS you can get, right??

The first game we’ll see is HAWX 2. This one is getting a bit long in the tooth, but we have comparison numbers so we might as well have a look, no?

HAWX 2

This one was an interesting way to start, with the HD 7970 showing no improvement over its predecessor and getting beaten handily by the GTX 580. I actually had this one rerun to ensure the numbers were accurate, and indeed they are. Truthfully though, who needs 200 FPS when 160 is just fine?

Dirt 2

The GTX 580 shows it has some life left in this game. Only the HD 6990 beat it. The HD 7970 definitely improved over the HD 6970, but it didn't take the top spot in Dirt 2. Same deal as HAWX 2 though; these games obviously don't take a ton of horsepower to run, so what's 124 FPS when you're already at 110?

Stalker takes a bit more GPU intestinal fortitude than Dirt 2, making use of strong MSAA and Tessellation. We’ve chosen the most difficult of the four tests to present.

Stalker: Call of Pripyat (Sunshafts)

Compute-heavy workloads are where this card really struts its stuff. Stock results come in very close to an overclocked GTX 580 and the HD 6990. Overclocked, it beats them both!

Up next are two runs of the Aliens vs. Predator DirectX 11 Benchmark, one with the default benchmark settings and the other with everything cranked up as high as it will go.

Aliens vs. Predator DX11 Benchmark - Default

Only the HD 6990 beats the HD 7970 here. Everything else falls by the wayside. Overclocking just increases the distance by which it beats the competition.

Aliens vs. Predator DX11 Benchmark - High Quality

The GTX 580 and HD 7970 are a little closer here, with more power needed to compete. Even so, the stock HD 7970 beats the overclocked GTX 580, then runs away when overclocked itself.

Last up is the newest addition to our game lineup – Battlefield 3. It was run with the testing procedure outlined in our Battlefield 3 GPU Performance and Eyefinity Experience article.

Battlefield 3

Battlefield 3 is a strong test of GPU ability. As you can see it makes a single HD 6970 cry a little bit, but a GTX 580 can handle it nicely. What it can’t do is handle it as nicely as the HD 7970 can. The HD 7970 handily beats an overclocked GTX 580 and trounces it when overclocked. It even approaches SLI GTX 580s and the HD 6990. This is certainly one powerful new GPU!

Eyefinity Testing

With a single-GPU card able to run six monitors, Eyefinity is obviously a strong focus for AMD with the HD 7970. Thus, we connected three monitors and tested all of the games in Eyefinity for you. This was a tri-monitor Eyefinity setup with three 1080p displays in landscape, for a total resolution of 5760 x 1080.
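Three 1080p panels triple the pixels the card has to push every frame, which is why Eyefinity is the real stress test:

```python
# Pixel-count math for the Eyefinity setup used here.
single_1080p = 1920 * 1080        # 2,073,600 pixels
eyefinity = 5760 * 1080           # 6,220,800 pixels
print(eyefinity / single_1080p)   # 3.0 -> triple the per-frame rendering load
```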

Stalker, HAWX 2 & Dirt 2 are all graphed together.

Stalker, HAWX 2 & Dirt 2 Eyefinity

These titles definitely look great on the HD 7970. It beat the HD 6990 at stock and even more so overclocked.

Aliens vs. Predator DX 11 Benchmark Eyefinity

In Aliens vs. Predator the HD 6990 managed to stay on top, but only barely once the HD 7970 was overclocked. This thing is a beast, coming very close to the previous generation's dual-GPU card.

Battlefield 3 Eyefinity

Another close matchup here, with the overclocked HD 7970 actually beating out the stock HD 6990. It even gets very close to matching the overclocked HD 6990, with a higher minimum FPS.

This GPU is amazing in its ability to keep up with the HD 6990 at such high resolutions. Frankly, before testing I didn't think it would be able to keep up at all, but the HD 7970 even surpassed it when overclocked!

Putting Some Horsepower Behind The 7970

Of course, what fun is an Overclockers review without putting some strong CPU power behind the latest and greatest GPU? In this case, I restored the i7 3960X to its 6-core/12-thread, quad-channel RAM goodness and clocked it to 5050 MHz. Then I cranked the GPU as far as it would go and ran 3DMark 06, Vantage and 11.

3DMark06 - 37735

This managed to beat my previous personal best on a Sparkle X580. To get there before, I had flashed the X580's BIOS to allow greater voltage control and ran with the LoD (Level of Detail) turned down to max out the score. This just took overclocking with CCC. A definitely strong showing.

3DMark Vantage - 42247

Now, Vantage is a completely different story. This score is a new best for our team, beating Miahallen’s very strong score by over 3000 points. He was using liquid nitrogen on the CPU and GPU too; this is with water on the CPU and stock cooling on the GPU.

3DMark 11 - 9679

This 3DMark 11 score – not too far short of 10,000 on a single stock-cooled GPU – is also a new team best, beating, coincidentally, another of Miahallen's scores (but only by 12 points)…I sense a crosshair on my back!

With stellar results like this, these cards are going to spark quite a ferocious 3D benchmarking competition!

Final Thoughts & Conclusion

This review really surprised me. I expected the HD 7970 to be better than the HD 6970, but not to the level of the HD 6990. I had hoped it would beat the GTX 580, but I didn't have a clue by how much it would do so.

Instead, we see a GPU at virtual parity with the dual-GPU beast HD 6990, one that, for the most part, leaves the GTX 580 squarely in its dust. When it comes to the strong computing power required for MSAA and Tessellation, it separates itself even further (look at the Stalker and Battlefield 3 results!).

It seems to falter slightly in excessively high FPS games (such as Dirt 2 and HAWX 2…do you really need over 100 FPS? …over 200?) and, apparently, in DirectX 9, as we see via 3DMark06. How many people currently play DirectX 9 games again? I'll go with slim to none. Those that do will just have to make do with 150 FPS instead of 200 FPS. Think you can handle that?

Back to heavy-duty graphics computations, go look at those Eyefinity results again. If the HD 7970 isn’t beating the HD 6990 at stock, it’s getting pretty darn close when overclocked. That was the one thing that surprised me the most. I figured at 1080p we’d see a close battle between the two, but when you put three monitors on the HD 7970 it would have to show some weakness. I figured wrong.

Of course, a main concern will be how much this GPU costs. Unless I'm mistaken, it is the first publicly available PCI-e 3.0 GPU, so there's one thing to make it more expensive. Then you have the fact that it beats out the GTX 580, making it the most powerful single GPU on the planet. Any guesses where this will end up?

Ok, enough suspense. AMD is putting this card at an MSRP of $549, with expected availability on January 9, 2012. That's about $50 higher than your average GTX 580 with 1.5 GB of RAM. What's really interesting is that if you narrow Newegg results to show only 3 GB models of the GTX 580, not one of them is cheaper than $549. Zotac comes close with an out-of-stock model at $555, but the others are EVGA and start at $590, going up from there.

Retailers will surely tack on an early-adopter premium, but if they somehow miraculously release at MSRP, the GTX 580 is going to need a price drop before long. Of course, NVIDIA is supposed to release its new Kepler architecture in 2012. AMD has beaten them to the punch and upped the ante. In some areas, even an overclocked GTX 580 can't match the new HD 7970 beast at stock. 2012 just got very interesting for GPUs.

AMD has taken back the hill. It’s your move NVIDIA.

Jeremy Vaughan (hokiealumnus)


112 Comments:

Brolloks's Avatar
Very impressive. Not as strong as I had hoped given the price tag, but still a great offering from AMD. Now the wait begins to get my hands on one.

Great review Hokie as always, looks like you had loads of fun
hokiealumnus's Avatar
Thanks! It was a wild six days. The only thing that would have been better is if there were voltage control.
Brolloks's Avatar
The OC on that card at stock volts is really impressive; with a few extra volts it should easily get to 1200-1300 on the core. Very promising indeed, and that is with initial drivers as well!!
zitnik's Avatar
Good review, Hokie. That card is a monster. If I hadn't bought these 570s I'd probably be going for that beast. I can't believe how badly it just wrecks the 580. What I thought was most impressive is how close its performance is to a 6990 – not too far off at all.

Amazing looking card.
Bobnova's Avatar
It's not as good as I hoped, but better than I expected.

I can't wait for voltage control OCing results


Too bad it's about $400 too expensive for me.
Ivy's Avatar
Being able to reach such high frequencies surely is totally unique in GPU history, and I'd guess many OCers will have fun trying to get the max out possible and bragging about achieved GPU clocks. The full potential is surely not revealed yet; I'd say it was a nice Christmas present and I'm surely fine with it – good news. It can always be stronger, but I never expected more than this. I did not even expect a release this year (even a very limited one). Considering all the issues AMD has had recently, it's surely a nice surprise to show a beating of the mighty GTX 580 a short time before X-mas.

From the raw overview I currently have, at DX9/10 it might be 5-10% stronger overall, but not much more than that. However, on DX11 it's clearly stronger – about 20% in many cases. It seems the GPU is simply optimized very strongly for DX11 performance, which seems fine as long as it can stay equal or even a small bit above on other DX levels at least, and that's surely possible. After all, we still know far too little; guess we will have an X-mas full of testing.
kskwerl's Avatar
Awesome review, can't wait to get my hands on one of these!
Ivy's Avatar
Well, they will also have to optimize drivers (and devs can optimize software), which surely aren't fully optimized yet. I don't think the last word has been spoken; testing will keep people stunned and tuned.
ssjwizard's Avatar
With the heavily revamped, compute-oriented design they put into these new GPUs, I bet they are F@H monsters.
Ivy's Avatar
I think four of those (quadfire) will soon become the new "world's fastest PC", combined with a dual-CPU board. Such compute power has surely never been seen before from a single chip – about 3.8 TeraFlops per GPU. It's totally crazy and will take a while till we see a counterattack.
pwnmachine's Avatar
I agree, I just hope we can sort out the quadfire woes. Nonetheless, a trifire setup with 3 of these bad boys =
Ivy's Avatar
Well, even trifire is probably comparable on the DX11 extreme preset (which is their strong spot); we don't know how much those chips can still be overclocked. Overclockers, from what I know, haven't used voltage mods yet. I truly wonder what will happen when we squeeze them out like a tomato... I wish I could have the fun...
If 3 of them beat (or at least equal) 4 GTX 580s, it's very remarkable, because much less power is needed – that's advancement.

The only thing which is a bit sad is that it was a "paper launch", since we can't get them in stores right now (if so, only a handful). However, I do have understanding on that matter, since AMD simply wanted to spread the news as fast as possible. Some users might take it badly, but most should see it as something positive, since it surely was able to sweeten their Xmas, with or without a physical card. I hope people can actually get them in the first two weeks of next year!
Bassplayer's Avatar
Hokie... any ideas on subzero performance? Is that coming in the near future?
I.M.O.G.'s Avatar
Hokie doesn't have a GPU pot, but I believe we have some other samples in the works as well as a loaner GPU pot floating around - so I'd call that a maybe? I'd be interested in helping make that happen.
David's Avatar
Nice review.

I only had time to skim it - are there any full SNB-E results? Unleash all the SNB-E cores, overclocked, with an overclocked 7970 then an overclocked 580. It should net you some pretty high scores, and give you an idea how the top (single card) set-up these days should perform?

I suspect of course that most people with SNB-E cash will not stop at one GPU
Ironsmack's Avatar
Looks like a great upgrade for me when IB hits retail shelves, plus 2 of these sexy cards
pan1cattack's Avatar
Great review Hokie!
Ivy's Avatar
Did they truly achieve 1125 MHz only by using CCC and without any volt mods? I mean, that's shocking to hear – forbidden stuff. 144 GTexel/s per core? So in CF it's close to 300 GTexel/s? When I think about it, AMD's Radeon landed a hard hit on NVIDIA, I must say. It may not always look as good on paper, but I say again, we have to take into account that many games and drivers simply are not optimized for this GPU yet; the same goes for the systems used. Well, Merry Xmas and thanks to AMD for sharing. For me it's too soon to buy another GPU – maybe on my next Ivy Bridge PC – so all I wanted was to hear enjoyable news. But I'm pretty sure the people who want to buy will be provided for in a few weeks; just have some patience!

Thanks for the review, great stuff. I truly had to gaze at it for some hours; I usually don't do that. But it was kind of enjoyable to read.
pwnmachine's Avatar
My thoughts exactly; as always, ATI sets the standard.
watchthisspace's Avatar
Thanks for the great review!! I will be buying one of these bad boys when Ivy Bridge is out
Badbonji's Avatar
I am thinking of doing the same, providing I have the funds! (Might have to get 2!)
dejo's Avatar
Very informative review, Jeremy. I love reading most of the ocforums future product reviews, and this was no different.

I think the MSI Afterburner 2.2 beta is supposed to be updated for this card. I don't have the card to test, so you can either try it yourself or send me the card to try.
Woomack's Avatar
Nice card, nice review ... but I'm already sad about the price, as I was counting on buying a new card soon – just not for so much ...
mxthunder's Avatar
So, someone reviewed this card, but didn't even know what the PowerTune feature of CCC increases?
Hard to take it seriously when I read things like that.

Also, lots of people play DX9 games. COD MW3, Skyrim, The Witcher 2, etc....

Turning down detail levels to raise a benchmark score? What is going on here? I am astonished at the things I have read in this review.
Zarck's Avatar
For the GPGPU GRID, is a test with the Radeon 7970 and Folding@Home possible?

https://fah-web.stanford.edu/project...ki/BetaRelease

Robert17's Avatar
Very nice review. Like you said, what a great way to GPU into 2012!

Thanks and MERRY CHRISTMAS!
Angry's Avatar
You jumped the gun a bit there, man...
When Hokie said he turned down the detail level, he's talking about when he benched his Sparkle GTX 580 originally for submitting to HWBot for our bench team. Everyone does this for the bot.
But not for reviews – Hokie did everything at defaults as far as I can tell, and made an excellent review.


Damn Im 'ing over here....!! Tax return? maybe seeing it has some badass Eyefinity power, and I can buy the monitors one at a time.
hokiealumnus's Avatar
Not yet. Without voltage control it's pretty much moot. Hopefully we'll get either solid voltage control via software or hard-mod info soon. If either of those happens, I'll be requesting that loaner pot IMOG mentioned.

Thanks. There were full runs, yes; check out the "Putting Some Horsepower Behind The 7970" section for full SNB-E + 7970 results.

Yep, easy as that. Put Power Tune on the +20% level and crank it up to CCC's max. On auto, it barely even spun up the fan speed during testing.

Thanks! I tried the most recent version I could find (failed miserably) and then we asked MSI for any in-house betas. They don't have one that functions yet but are working on it. I have some (slim) hope that they'll have a working beta by the time partners launch on January 9th.

Sigh. PowerTune increases "board power". What it really does is peg the p-states as high as they can go to force the board to operate at its max TDP at all times. What I wrote was a tongue-in-cheek response to not having a clue how that actually affects voltage. Can you tell me precisely how much voltage that setting increases on what components from what's available at 0% to put it at max TDP?

All detail levels were set as high as they could go for every test (except for AvP: Default, as noted in the review...AvP high-as-it-would-go is right below that result).

If you're complaining about my mention of using LoD when I was running 3DMark06 on a different card (not this one), you are obviously not a benchmarker. That is common practice among anybody who has a clue what they're doing on HWBot. That was referring to competitive benchmarking and had nothing to do with the results when comparing it to other cards. If you had actually read the paragraph, you would have also seen that I did that on the Sparkle X580 trying to get a better score but did NOT do it on this card and it still beat out the X580.
MattNo5ss's Avatar
Good review, especially for the limited time available...

The HD 7970 seems to handle high res, high AA, tessellation, and DX11 very well, judging by the Heaven, STALKER, AvP, and BF3 results. I'm surprised that in 3DMark software we only get a less-than-10% improvement over the GTX 580, and that the GTX 580 performed better in HAWX 2 and Dirt 2.

This card desperately needs some voltage control. 1125 MHz on stock volts is nuts!
hokiealumnus's Avatar
I was surprised about HAWX 2 & Dirt 2 too. It's almost as if there is a limit to its low-power-computing ability. I know that makes no sense, it's just how the results 'feel'. When you crank the detail, MSAA and tesselation up though, it powers through like (literally) no other.

Benchmarks didn't surprise me per se, but the relationship is interesting. The HD 6970-to-GTX 580 improvement is remarkably similar to the GTX 580-to-HD 7970 improvement.

What did surprise me is this card's ability to get an insane vantage score. That score (42247) is #41 in the single-GPU rankings. On stock cooling!
MattNo5ss's Avatar
From looking at results around yours, the Vantage GPU sub-score with the HD7970 at 1125/1500 MHz is equivalent to a GTX580 at 1350-1400/1250-1300 MHz
notJUSTguitar's Avatar
I'm gonna have to sell my 6970 to get one of these!
It really sucks that I bought it last month...but I NEED a 7970.
bmwbaxter's Avatar
Another great review.

I would like to point out that the 580 Matrix in the graphs has a factory overclock, albeit a small one. So the results would look even better compared to a reference 580.

Time to sell my cards and buy one.
mxthunder's Avatar

Not questioning your knowledge, it just seems like a funny thing to say in a review.
Nope, I'm not a "benchmarker" – I run these tests at defaults so I can get a true, relevant score against other people.
PowerTune does not change the voltage at all; it just allows for a higher ceiling for when you do crank it up, so that it doesn't throttle back when it finds it's going way over TDP.
Janus67's Avatar
Fantastic review, as always, Jeremy!

That card does look phenomenal. Now to see if the rumors of a 580 price drop come to fruition. I don't have room for another (I'm happy with both of mine ) but that 7970 is definitely a solid performer.

Can't wait to see some LN2 results
RedDragon1260's Avatar
I like it, although I am not a fan of reference card styling, so I think I will stick with the MSI 6970 I want until I can see a better-priced non-reference card available – which I know will not be till some time late next year.
Ivy's Avatar
I would say, welcome to the power of the seven: i7, Radeon 7, 79X, Win 7. It's rapidly growing into a serious marketing number... a simple number, but apparently it brings luck. Maybe there is more than just a mistletoe.
I.M.O.G.'s Avatar
I'm not sure that is accurate. On Bulldozer, a change in P-state directly correlates to a change in frequency as well as power. This can be readily observed by running PSCheck. There isn't a publicly available tool like PSCheck for this card; however, I would expect that a change in P-state via PowerTune would correlate with a change in base voltage. I don't know, obviously – just wondering on the topic.
Ivy's Avatar
From what I know, the 7000 series does indeed contain that ZeroCore ability. It seems to work as follows: people have to set "turn monitor off" in the Windows energy-saving settings. At that moment the GPU will almost completely disable itself and the fan will stop; it essentially reaches a hibernate mode, but as soon as the screen is turned on (simply move the mouse) the GPU instantly comes alive. In that condition it seems to use only 3 W of power. Seems like that's a world's first by Radeon.

However, the usual idle mode, with screensaver and such, needs approximately 15 W, which is also an improvement, because previously a GPU needed above 24 W. In standard idle mode it clocks itself down by a very high margin – several times lower clocks – and that's how it's possible to use so little power.

GPUs nowadays surely are extremely high-tech and pretty fascinating. AMD's Radeon has usually played a forerunner role on most of this stuff. Of course, the GPUs at high load are still burning a lot of energy; however, when we consider their insane computing power, it's something which can be tolerated to a certain extent.

Dry ice isn't possible yet, since overvolt options are a must-have in order to execute. It has been stated that the 7970 was able to reach 41st rank against that kind of dry-ice-cooled stuff. So it means it's probably in the upper range of an absolutely super-tuned GTX 580, and the 7970 was still air cooled, not overvolted, and not on dry ice... so the gain can still be massive, and it surely will beat any GTX 580 by a clear margin as soon as they are on "dry ice".

Besides (just found out lol):
ZeroCore means the 3 W "super idle mode".
Sub-zero: OCer jargon for "dry ice cooling".
I.M.O.G.'s Avatar
Ivy, he was asking about dry ice or liquid nitrogen cooling on it... He wants to know what sort of numbers it can put up if it's super-cooled.
Ivy's Avatar
Yeah, I noticed myself a short time after; I kind of mistook it for that other ability, but then I had in mind... why is he asking about performance, since that doesn't matter for that technology... and then... "ah... it must be dry ice" (and I'm no extreme OCer, so it's not very familiar to me). Well, I simply explained both things; some people might be interested and not fully sure about them.
Evilsizer's Avatar
Thanks for the review, hokie!

Not to take away from his great review, HS has an LGA1366 vs. SB-E comparison.
http://www.hardwaresecrets.com/artic...-Review/1458/1

It is also funny that even with the newest video card on SB-E with PCIe 3.0, there is no improvement in performance. Even this latest card doesn't fully saturate PCIe 2.0 x16.
Ivy's Avatar
Hmm, the CPU scaling seems to be close to none in many cases – for games, that is. I think even as a super enthusiast I wouldn't switch from Nehalem to SB, because it's just not worth it, not when you mainly need the performance for games and no other special stuff. For most media center functions even a baby CPU still runs fine, and the GPU also helps with decoding; it's simply not an issue anymore. As for PCIe Gen 3, it also has close to no effect by now, no matter what CPU is used – that I'm pretty sure is true. So, it looks good on paper, but that's it. In the end, the most important thing is still to have a powerful chip architecture and good software; that's truly like 90% of the cookie.

Looking at his great tests of the Nehalem flagship vs. the SB flagship, in most cases there is only a 20% difference, and for games close to no difference at all. http://www.hardwaresecrets.com/artic...Review/1429/16

Surely, for updating a PC, a single generation is not even worth a look. Anyway, I usually only update every 4 years; however, I'm now in the worse situation that my old PC died, so I have no replacement anymore and no "test PC", which is important to me in case there are issues. So I guess I'll simply wait for Ivy Bridge and put a 7000 series Radeon inside at that time. Having two working PCs is critical to me; otherwise, as soon as I get an issue I can't test, and I could be stuck with an unstable system and no way to pin the issue down effectively. That's everyone's true horror, and that's why a backup is such a big gain. On the other hand, giving the PC to a PC tech can cost several hundred dollars in my country; with that cash I could just buy a new Radeon GPU, and I'm good enough to handle the stuff myself, provided I have a second PC. Besides, my friends and family all have either a laptop or a Mac Mini, so I can't test on their systems. I'm the only "strong hardware" user in my whole family. Not that I am "conventional" – I don't even know that word... I deal with SFF, or I'll go build a super high-end tower someday, but I'm truly not interested in "standard"; it's just not interesting to me, and I like to study and make impossible stuff possible... that's why we are OCers, or "impossible stuff builders"... at least many of us.

To come back to the main topic: yes, the 7000 series surely is something every OCer should feel very eager to get their hands on. I'm no different, and I see it as a new opportunity for a new system.
EarthDog's Avatar
Looks like a monster card... cheers to AMD for making it interesting in the GPU segment.

And as usual, a top-notch review and in record time. GJ Hokie!
SupaMonkey's Avatar
Are there any tests on its performance in 3D? Playing in 3D kills my card.
Angry's Avatar
Actually, from what I heard, AMD sent the chips to ASUS, MSI, etc. early and told them they didn't have to launch with the reference design – that they could pretty much do what they wanted.

Now, whether or not they decide to do this is up to them, but I don't think you will see that many reference designs this release.
Salmon91's Avatar
Will the cards be available on the 9th or will I be able to pre-order on the 9th?

Or could we have the possibility of pre-ordering the cards before the 9th? Maybe even this year?
dejo's Avatar
I am wondering if anyone has a pair of them to test scaling in CrossFire. It would be very interesting to see what two of these brutes would do in some competition benchmarks.
EarthDog's Avatar
I think you will see the opposite of this, actually...

Normally, reference designs hit the store shelves first anyway. Second, I think, at least for this launch, quite the opposite happened. I'm not sure board partners got much more lead time, if any, this go around (I also heard one company didn't receive the reference cards until AFTER the select group of reviewers got them).

As far as LATE next year, I would imagine we will see these come out shortly after release. "Late next year" means Q3 to me, and it's a bit silly to think that is when the first non-reference boards will come out. I would expect them in February to March, personally.
Brolloks's Avatar
That will yield some pretty hefty benchmarks, I'm sure. AMD has really invested a lot of effort in maximizing CF performance from the 6800 series onward – much more than their green competitor has with SLI.
I.M.O.G.'s Avatar
Hardwareheaven did CrossFire tests, and it crushed stuff. Good read, and they also have pretty artwork, though it looks like their ad sales department is owned by AMD lol:
http://www.hardwareheaven.com/review...roduction.html

We also have more than one card at this point if I'm not mistaken, so we may be doing our own results if we can get them in the same location.

As for PCIe 2.0 vs. PCIe 3.0 – stay tuned for further results in those departments. Some reviews are reporting no benefits, others are reporting benefits; it depends on how it's tested, and possibly also on the configuration of the test system. Some say the benefit is only seen in GPU compute situations which are bandwidth-heavy; others say there was no difference in their tests.
mxthunder's Avatar
I think they are a little AMD-biased. They pride themselves on "benchmarking practice", yet they turned on PhysX for the Batman testing. What better way to kill framerates on the NVIDIA side?
I.M.O.G.'s Avatar
Wow, I didn't look at that part. That is pretty bad testing methodology – if they wanted to show those results, they should have also presented the results without PhysX.
Ivy's Avatar
Anyway, the point they make... do you need PhysX nowadays, when Intel is delivering us CPUs with alien power? In the end the result counts, and if NVIDIA wants to crush itself by overusing its GPU... well yeah, I guess they just wanted to make a point. My view is, the GPU in next-gen graphics has so much rendering to handle that it truly would be happy to hand the physics job over to a CPU, and CPUs truly can do that. It's just not true when NVIDIA tells us that only a GPU can do it; I don't believe it. For games, a CPU usually handles AI and physics; that's what it's there for, in theory. Many years ago (C2D and older), CPUs surely had a much higher impact on gaming performance because they had more work to do; nowadays it's pretty much GPU-limited (especially at high res).
bmwbaxter's Avatar
An NVIDIA GPU with PhysX is way better than a CPU at physics-type work. Have you played a game that supports PhysX on an NVIDIA card? It totally blows away anything the CPU can bring. Yes, it is a load on the GPU, but it is also something that AMD does not even offer, so having it on while comparing to an AMD GPU is not fair and IMO takes away credibility. Because if they did that there, where else did they give the card an unfair advantage over SLI GTX 580s?


(this is in regards to hardware secrets and is in no way implying anything at all towards the overclockers.com review of the 7970)
Ivy's Avatar
Havok should be able to do the same thing; if the CPU can handle it, it can be used with Havok as well, although I'm truly no expert on this stuff.

They truly should make something non-proprietary; it took a long time till NVIDIA even partially gave up their proprietary rights to own every single piece of it, which made devs unable to implement anything like that on Radeon, as far as I know. However, AMD/ATI's statement was always pretty clear that they don't feel an urgent need for something like that, so they usually gave pretty low support to devs trying to implement something. I remember NVIDIA also had other licensed software, and when MS moved from an NVIDIA to an ATI GPU on Xbox, they had to pay a license fee to even be able to run that kind of software on a Radeon. I mean, NVIDIA truly was mean with stuff like this; I'm glad they finally tried to open things up a bit, giving devs more access to implement stuff on Radeon.

I also dunno what you mean by "blow away" in particular; you might tell me, I'm interested. In a game like Shogun Wars with pretty high physics needs (an insane number of units), a CPU can still run it successfully. Now, what kind of physics do you have in mind that can't be run on any CPU?

In the end I want the better hardware to win, not the one that's a wonder of software implementations. Besides, devs are always free to create open source for Radeon, but apparently hardly anyone is ever going to do it, and I dunno why. Maybe because devs nowadays just want everything shoved in their mouths, already warmed up and served on a silver platter. Seriously, where is the true dev skill? Are we nowadays totally ruled and bound by the harsh rules of money and economics? So we also move away more and more from PCs, because consoles simply mean more bucks? And so we lose a lot of quality, because many of them don't even try to tune a game for good PC hardware anymore..? Well yeah.. who knows.

It's surely good that NVIDIA is trying to drag devs toward their side, but they truly have to work together with devs, and somehow with AMD's Radeon as well, not only trying to boost themselves. Even as competitors, working together in many aspects brings more power regarding software tuning.
bmwbaxter's Avatar
This will be my last post that has anything to do with this subject, so as not to derail the thread too much.

Mafia II: the physics in that game (glass breaking, cars blowing up, clothes blowing, pretty much everything) with PhysX enabled were IMO so much more realistic than with just the CPU handling physics. I have 2 GTX 580s and only run one 1920x1080 monitor, so FPS was not an issue with PhysX enabled; I tried it both with PhysX enabled and with just the CPU. GPUs blow away any CPU in these types of situations.
Ivy's Avatar
It surely is an interesting topic, but maybe we can continue somewhere else, since we're basically focused on 7000 series stuff now. I surely wouldn't generally say that a CPU can't handle it, but there are exceptions; there is no rule without exception.
pwnmachine's Avatar

That was uncalled for, don't you think? -hokie



I can't wait to see some tri/quad-fire benchmarks.
pwnmachine's Avatar
I agree 100%; PhysX, when used to its full potential, is quite amazing. The only reason it is not used to its full capacity is simply the fact that there is no install base. Therefore developers see no benefit in adding the extra effort to implement it in any meaningful way.

Look at Cryostasis; IMO one of the best survival horror games released this generation. Amazing tech and a flawless PhysX implementation; however, its sales were poor. That is all that really matters to developers.
Ivy's Avatar
Why do you think I say that we finally should get rid of proprietary stuff and try to set a certain "shared standard"? Btw: agreement isn't the issue, respect is.
I.M.O.G.'s Avatar
Hardwareheaven (not hardwaresecrets) said they ran Batman with PhysX on because everyone with an NVIDIA card would run the game with it on, so it was more of a real-life result.

I disagree with their approach and presentation; however, they did make it clear how the test was run, and that is important.
Shelnutt2's Avatar
Hokie, I just wanted to drop in and say thanks for thinking about F@H again!! I know you didn't have time to run it in the short lead time, but you thought about it, and I really think if we become known for always throwing in some F@H benches on hardware we might just get a few extra page views. So here's looking forward to the results (and hoping AMD is catching up to NVIDIA in F@H!)
Bobnova's Avatar
The thing is, turning PhysX off doesn't show you the max the CPU can do; it shows you the max NVIDIA allowed the game designer to run on the CPU.
A modern top-end CPU can run a far better physics model than it gets credit for, because NVIDIA seems to demand that all the PhysX-type eye candy be disabled if PhysX is turned off, even the stuff that could run happily on the CPU.
hokiealumnus's Avatar
Just sorry I couldn't run it in time. I haven't run a GPU client in years, not since an 8800 GTX. If there are any tips or tricks, definitely PM them. I'll try to get some numbers in the coming week for you.
MIAHALLEN's Avatar
I know, I know....I'm slow

Good read Jeremy, nice work taking down my scores....those were leading the team for far too long
Ivy's Avatar
Thank you!
I mean, a CPU could do more than that, but NVIDIA kind of disables it on purpose. They are telling us that anything as good as PhysX is impossible on a CPU, and we could get a one-million-core supercomputer and they would still say the same. People should stay tuned for future CPUs – Ivy Bridge with new transistors and whatever else... CPUs aren't sleeping. It's true, however, that in a few games PhysX still may own, but it's truly just a handful, and for the majority barely worth it, especially when they have no interest in such games (I am an RPG gamer, so I have little interest in shooters). I am also not sure how efficiently the PhysX programming is really executed, but one thing I'm sure about: CPUs can do more than what they currently do; that's why CPUs have almost no impact on games anymore. Only their clocks truly matter; architecture is almost wasted, meaning most parts of the CPU aren't used. I'm sure Intel will build some true monster CPUs while AMD builds a monster Radeon, I guess... it kind of looks like it. Strong CPUs are currently underused in most games... overkill to even own them. Any other program will see more gain than a game.

I'm not truly a fan of either, but in the current situation I rather support AMD's view, because CPU/GPU was always a team in the past, and NVIDIA is slowly trying to break up a highly efficient and powerful team in order to make their GPU look superior. But no matter what, they have to work together on a shared standard which can easily be implemented by devs and which uses CPUs more effectively; the CPU is not a useless part.

Performance-wise, in scientific terms, a current SB flagship can handle about 120 GFLOPS in double precision; a 7970 will handle around 950 GFLOPS in double precision (the current strongest single chip). However, GPUs used for PhysX aren't usually high-end, and if we try to run it on a single GPU, then the GPU will lose ~15-30%**, because that's the load the CPU is able to take away (**much higher than 15% when the GPU is weaker than the current Radeon flagship). I get the feeling that this will increase in the near future, because it's Intel. As long as physics can run on roughly 100 GFLOPS, it may run without the GPU, and the GPU gets more power for rendering. Even more powerful is to share the physics so the CPU is fully utilized and the rest can still be taken over by the GPU; there could simply be a slider to adjust how much physics load we want to hand over to the CPU, 1 to 100% (if the CPU is overused it will simply destroy performance, same as an overused GPU). Of course, the CPU still has the advantage of being the "jack of all trades", while a GPU is always very hard to adapt to something else, so the main focus has to be on how to get the two GPUs (Radeon/GeForce) to a shared standard on such terms. Master software is still not on this planet; it's somewhere on an unknown planet.

Why isn't a CPU stronger than that? A huge number of transistors are used for cache (billions!). But we'll soon come to a point where we don't need so much of it anymore and can instead increase raw computing performance. But well, an 8-core Ivy Bridge with 3D transistors – I wonder about its computing performance. Perhaps 150-200 GFLOPS. Some might get a dual-CPU board: 300-400 GFLOPS? Especially for such people, an engine which can hand duties over to the CPU is critical. Who knows, but it's not weak, and it's much more adaptable to everything else. But they will take their time and slowly walk up the ladder, and why? Because they can – no competition.

Because this is going too much into OT (not truly topic-related; too much CPU/gaming talk), I will continue the specifics at: http://www.overclockers.com/forums/s...94#post7056694
hokiealumnus's Avatar
Awesome. Look at those memory clocks too. That's beating the next-highest GTX 580 result while giving up almost 400 MHz of core clock (1180 MHz on the 7970 vs. the next-highest 580 at 1560 MHz). These things are DX11 beasts!
Ivy's Avatar
Too bad phil's processor didn't clock higher, or it would be completely devastating on DX11. A 1610 MHz memory clock - OK, not a world record, but extremely high, and the 7970 with its new 384-bit bus can make big use of it. I mean, that thing is on stock cooling; the core and memory can still go up!

Only a short time after release, with just a few 7970s ever tested, and a superclocked 7970 already beats it by a large margin - I'm impressed. Also, the Radeon seems to work in quad CrossFire? The highest result currently is a 6970 quad setup; however, that will be beaten soon.

Granted, Unigine Heaven is a pretty demanding engine that makes heavy use of tessellation, but I still didn't expect the GTX 580 to be beaten this soon by those superclocked cards. Only a handful of 7970s have ever been tested.

Quote:
No more competition with the HD 7970... the GTX 580 era is finished...
LawL_Vengeance's Avatar
I would love to own a card like this; money isn't the issue, I can save up for it, no problem. However, I would need to remake my entire computer to handle the dang thing. Stats: Phenom II 1055T 2.8GHz X6
ASRock N68 VS3 UCC MOBO
AMD HD 6670 1GB Graphics Card
2x 4GB DDR3 1333MHz RAM
450W Power Supply
Nvidia Geforce 7025/630a Chipset

Yeah, I'm good on RAM, though maybe I need some with better MHz? I need a better case and power supply too. PS: hint - it's a lot harder to swap out parts in a prebuilt gaming comp than to just build your own from the case up.
hokiealumnus's Avatar
If you're serious that money isn't an issue, you might want to sell the prebuilt unit whole and do just that: build from the case up.
Janus67's Avatar
To be fair, when people with ATI cards run Heaven they disable tessellation in their drivers to get a much higher score.
EarthDog's Avatar


Absolutely. That's a HUGE difference, since you can adjust the level of tessellation in the benchmark.
hokiealumnus's Avatar
Another big +1. I just got a new personal best in 3DMark 11 using that method plus some extra MHz on the GPU (still air-cooled with the stock cooler).



(HWBot link.)
Ivy's Avatar
I don't know what it is you wanted to tell me?

After all, they've still got a little pride left, really!

The 7000 series can indeed take a hard punch on tessellation; they are strong at that. The 6000 series, however, will surely drop to its knees at some point. The Radeon 7000 isn't tuned for pointless performance ratings; it uses an architecture that handles demanding settings much better, leaving its most powerful parts unused when that stuff isn't running. Sure, the rating still goes up, but not nearly as much as on a GeForce card, so it's not really an advantage. In some of those performance ratings they may get several thousand FPS, and then they tell me their CPU is limiting - I mean, yes, that's indeed sad.

Whatever.
Janus67's Avatar
You mentioned that the card did well with Heaven's tessellation. What I'm telling you is that when people benchmark Heaven with an ATI card, they turn off tessellation in their drivers (nVidia doesn't have this 'feature'), thus giving the ATI card a much higher score, since it doesn't have to render the benchmark's tessellation but nVidia does.
hokiealumnus's Avatar
To be clear, Janus is talking about competitive benchmarking. When I run Heaven in a review, CCC is set at its default settings, with no artificial tessellation manipulation.
Ivy's Avatar
I know what he means; I understood it from the beginning, but it's cheating, that's what I call it. I said I was unable to understand because I didn't expect such benchers to use dirty tricks, and I'll simply leave it at that. In the end, the correct thing is to leave it at default, meaning it should use tessellation at the same level as Nvidia does. It's not a bad option to be able to do such fine-tuning (an immense gain in real situations, when you want to get the most out of the card), but sadly it does allow for unfair behavior.

However, with or without tessellation, the 7000 series will clearly beat the GTX 580 in the long term, because neither the games nor the drivers are fully optimized yet; the hardware is simply too young at this point. It's already winning by about 20% overall in real situations, but the gap will grow even wider, and surely no "cheating" is needed; there's no need for it, since its tessellation is superior to the GTX 580's. Even allowing so much fine-tuning is a clear sign of its superiority. AMD/ATI was always the forerunner in tessellation. Only with the Fermi architecture did Nvidia truly pump massive effort into it, and it even stomped the 6000 series on raw performance. The 7000 series, however, will take the throne back, continuing where AMD/ATI started with its very tessellation-capable consumer cards.
Janus67's Avatar
It isn't cheating if you're just changing settings that are on the company-supplied driver page. Now, if people were opening up .dlls and making it so the card couldn't render tessellation when it should be, then that would be breaking the rules.

It just sucks that nVidia doesn't have the same capability, especially for people who are benchers.
hokiealumnus's Avatar
I don't appreciate being called a cheater, or your claim that I (along with any benchmarker worth their salt) am using dirty tricks. When the person who runs HWBot (Massman) asks whether my initial score used that tweak, then comments on how the score will improve if it didn't, I'd say it's pretty well documented that the tweak is not cheating.

If I were using that setting to artificially inflate comparison results in any review, I'd wholeheartedly agree that would be crooked and dishonest. We do not do that here. As I already said, all comparison benchmarks are run at default settings, except overclocked comparison benches, where the only things changed are clock speeds and, if available, voltage. Even my pushing-the-envelope benchmarks for reviews (the screenshots toward the end of the review) are run at default settings. Only after a review is complete do I work on additional tweaking to see how scores can be improved.

You are free to disagree, and it's no skin off my back if you think a legal, known and readily admitted tweak shouldn't be allowed; but do not come here and accuse me of cheating. That, sir, is one thing I do not, have not and will not do.
hokiealumnus's Avatar
A couple of improved results after a quick 30-minute livestream this afternoon.

3DMark Vantage

Old: 42247
  • 41st 1x GPU global score
  • Boints: UGP - 32.3, UHP - 2.0, GTPP - 74.0, HTPP - 5.7

New result from livestream: 42895
  • 33rd 1x GPU global score
  • Boints: UGP - 34.4, UHP - 2.0, GTPP - 77.0, HTPP - 5.7



HWBot Heaven Xtreme

Old: 2156.68
  • 89th 1x GPU global score
  • Boints: UGP - 18.6, UHP - 1.0, GTPP - 30.2, HTPP - 3.3

New result from livestream: 2660.23
  • 8th 1x GPU global score
  • Boints: UGP - 54.5, UHP - 1.5, GTPP - 52.4, HTPP - 4.4



Only two results, but they're both much better than the originals!
Seebs's Avatar
Come on Jeremy... You're only 58 marks away from the WR for Heaven... Go get them..
bmwbaxter's Avatar
+1

Do it, Do it Nao!
Janus67's Avatar
I think it feels the need to be a little chilly
hokiealumnus's Avatar
Done...by 150 marks.

Result from earlier livestream: 2660.23
  • 8th 1x GPU global score
  • Boints: UGP - 54.5, UHP - 1.5, GTPP - 52.4, HTPP - 4.4

New world record result: 2869.539
  • 1st 1x GPU global score
  • Boints: UGP - 159.1, UHP - 2.0, GTPP - 185.6, HTPP - 5.5

bmwbaxter's Avatar
Nice! You da man.
notJUSTguitar's Avatar
Wow, awesome!

If you ran some benches with an AMD CPU, would the results be way lower?
And would a 2600K/2700K be able to get close to that?

And I'm bummed I missed the livestream.
hokiealumnus's Avatar
Not positive about Heaven, but Vantage and 3DMark 11 would be way lower on AMD.

I'm going to try and stream again tonight or tomorrow. Watch the benching section.
I.M.O.G.'s Avatar
Awesome work Hokie, you the man! I'll keep an eye out for the stream, was busy this afternoon and missed it.

AMD CPUs would do poorly, though it may not be a huge difference - Heaven is one of the better 3D benchmarks for isolating GPU performance. Other 3D benchmarks, 3DMark03 especially, are well known for being very dependent on CPU performance - my Bulldozer at 7 GHz with a 5870 at 1300/1200 put up a score comparable to an i7 920 on a moderate overclock with the 5870 at 1050/1200.

A 2600K or 2700K would put up about the same scores, possibly better, as many of those clock higher than 5 GHz. Again though, I don't think Heaven is that sensitive to CPU clock speed.
notJUSTguitar's Avatar
OK, thanks. I kinda meant benchmarks in general with the 7970, but AMD vs. Intel scores.

I might switch to Intel later this year & water cool if I have the money.

OK, I'll watch the benching section.
Janus67's Avatar
Congrats Jeremy, that is fantastic. Really makes me consider selling my 580s... oh dear.
Needitcooler's Avatar
Oh man. The 9th can't come fast enough.

Excellent work!!!
I.M.O.G.'s Avatar
For competitive benching, like on hwbot.org, no one would run Bulldozer with a 7970. The scores would generally be smashed by any Sandy Bridge platform, even if the AMD system were clocked through the roof - a moderate OC on Sandy Bridge on air would beat a Bulldozer on LN2. For the 2600K you mentioned, 3D benches would be very similar, except the ones that include multithreaded CPU tests - SB-E would rule those rankings.

For non-competitive benchmarking, we'd have to see someone actually run the tests to know how big the difference is.

windwithme did a review recently comparing Bulldozer in 3D to an 1100T. He was going to include a Sandy Bridge, but didn't, as it would clobber both in most benches. The 1100T beats Bulldozer handily in most 3D benches.
Ivy's Avatar
Don't get me wrong, I said it was a cheat on HWBot, not in the review. My comments were wholly directed at HWBot. My view is, if they just want to win the race, then why not delete every .dll that affects settings, or whatever else? It's simply not honest to post benches using tweaked settings that give one GPU a software advantage over another. In the end it's very hard for users to see the truth behind that data, which renders it useless. If he hadn't told me, I would still have no clue that they actually tweak software-related settings.

I was never suggesting, in any way, that any of the reviews posted on Overclockers were "unclean". Just to get that right.

Now I'm finished with it.
I.M.O.G.'s Avatar
You are not familiar with HWBot. HWBot is an overclocking competition platform that defines a set of rules for everyone in the public to participate by - everyone competing in its rankings has the ability to use the same tweaks, if they are knowledgeable of them. They allow almost any tweak possible in their rankings, except those specifically disallowed in the rules. Mostly, everyone taking part there understands the rules and interprets the rankings appropriately. Keep in mind, if tweaking were not allowed, there would be no reason to compete - if everyone ran at default settings, everyone would get pretty much the same score. The knowledge of how to get the best scores on a given benchmark is ultimately what separates the average from the greats.

It is their site, they make the rules, everyone who takes part plays by them. It isn't a cheat if they say it is allowed - they allow almost anything, so long as the benchmark software itself is not modified.

Keep in mind, different cards have different capabilities, and drivers have different defaults. In reviews, things are commonly run at default, or at well-informed settings that make the comparison as fair and accurate as possible. On a site like HWBot, people spend tons of money on hardware, LN2 and exotic voltmods to get the best scores possible - it is more of an "anything goes to get the highest score possible" sort of thing there.

In the end, when a person builds a site and creates an audience, that person gets to make the rules, and people choose whether it makes sense to play within those rules. A lot of people think it makes sense to play by HWBot's rules. People can disagree with those rules and choose not to take part - they could even call the rules cheating, but that isn't really fair to them.

By the way, I don't intend to argue with you and can respect your perspective. I'm just sharing my perspective on behalf of the site. Sorry to distract further from the 7970 and its performance - this isn't really a thread about hwbot.
Ivy's Avatar
You can't trust that what people tell you is true, but in the end it's his site and his rules, sure... I'll keep out of it because I wouldn't fully trust it.

Let's continue with the main topic, I guess.


Well, the GTX 580 is now beaten on Unigine Xtreme as well, by Aristidis. Considering there are only four 7000-series cards on HWBot so far, I still find that great, no matter what settings were actually used. He was already hitting the 1900 MHz memory mark; I'd rather not guess how much more is possible once there are more than 100 of these on HWBot. I don't take the results as fully accurate, but they can still give a view of the card's raw potential. What I found fun: he used the Windows Classic desktop theme so the GPU wouldn't lose any power to the interface. Those people are definitely perfectionists. Also fun to see that Hokie's SB-E CPU at a 5 GHz clock is way more powerful than Aristidis' 2600K at 5.3 GHz, but for these benchmarks it doesn't matter, because they're GPU-limited.

Hmm.. juicy: http://www.techpowerup.com/157741/MS...eX-Tested.html
Zarck's Avatar
A test with Folding@home and the Radeon 7970? Possible?

https://fah-web.stanford.edu/project...ki/BetaRelease

hokiealumnus's Avatar
Yes indeed. I was busy torturing it over the holiday but I'll give that a try this week!
Zarck's Avatar
Thank you for your favorable answer, I look forward to the results.

Zarck
Paris Thursday 5 January 2012 01:01 AM
France.

@+
*_*

https://fah-web.stanford.edu/project...ent/ticket/778
hokiealumnus's Avatar
I thought v7 was for Nvidia only... is that a new beta?

EDIT - Downloading now. We'll see!

EDIT II - Didn't work, post here.
prpz's Avatar
hokie... you lucky guy... Congrats on a stellar review, a temp record, and all the hard, dedicated work you've done covering this card. I'm getting my hands on one of these babies within the next 48 hours. Anybody got a water block for these? I have a spare loop set up I'd love to toss one under... ideas? I checked the usual suspects with no luck yet... frzcpu... sw... jab... dazm... if you know of a preorder or anything, PM please.
Zarck's Avatar
For the Radeon 7970, you need to download the private beta Folding client, v7.1.42.

https://fah-web.stanford.edu/project...ent/ticket/778

You need to join here ->

http://foldingforum.org/viewtopic.php?f=16&t=8

hokiealumnus's Avatar
Yeah, I'm not eligible, unfortunately.
I spent many an hour there back in the day, but always searching for answers; we had our own folding forum to actually ask and answer questions. I used to fold extensively for the abit folding team (RIP abit) and had 1.75 million points back when points were much harder to get than they are today. Currently, however, it seems I'm not eligible to be a beta tester. Sorry to disappoint. Hopefully they'll bring 7.1.42 to a public beta sometime soon.
skoreanime's Avatar
I can't seem to find a solid answer, but am I correct in thinking all initial launch 7970s are based on the reference design? I plan on water-cooling a pair, so I just want to make sure before picking them up on Monday.
MattNo5ss's Avatar
Most likely, yes. Just look for the AMD logo screen-printed on the PCB at the PCIe x16 connector.
repilce's Avatar
I gotta say, Hokie, it seemed a little like you wanted to bully Nvidia in this review



hokiealumnus's Avatar
I'm sorry it seemed that way. NVIDIA was king of the hill for a long time; AMD took that crown back with gusto, that's all.
diaz's Avatar
I think it's great that people get to cheer for AMD, if only for a very short period of time...
repilce's Avatar
Hah, I loved the review, and I'm glad AMD is at least on top of something once again.

Just noticed you pouring it on in good fashion, lol.