
Which 970 Should I Get?

The power limit the card sees is not the card's maximum wattage. It's weird, but you really have to set a much higher power limit to keep these cards stable: something like ~250W to stay stable at about 200W max under load without throttling. It's related to the ASIC and the power/voltage tables, and it's a bit different on every card.

A 6-pin connector equals an 8-pin in maximum wattage if you modify the rails. It also requires a PSU with a single 12V rail.

The same rules apply to the whole GTX 900 series, whether it's a GTX 960 or a GTX 980 Ti. The only difference is each card's behaviour at higher voltage and higher clocks. There are also differences in scaling above stock voltage because of the voltage controllers used. In general, the lower-end cards overclock better at the same voltage, but that may be related to internal chip temperature.

Overvolting also barely helps on the GTX 970. Without mods it runs ~1.12-1.21V in 3D mode, and boost is then ~1.27V max. Software overvolting always ends at 1.27V, so if you have a card whose software offers +60mV, then 3D mode is probably 1.21V; when it offers +100mV, 3D mode is ~1.17V. The boost voltage is never stable and can only be measured with a multimeter. The BIOS values don't match it after overvolting, as it depends on other factors.

The difference between 1.27V and 1.30V is barely visible, and above that most cards lose stability. On all the cards I tested, the OC gain after overvolting was about 30-50MHz max. The max clock depends more on the card's temperature than on voltage. At 1.27V you can reach 1700MHz+ only when you drop temps: on the GTX 970 that's below 0°C, on the GTX 960 above 0°C. On the GTX 980 it depends heavily on the card's version, as the ASUS requires much higher voltage and so better cooling, but its max clock is about as good as on any other card.
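Just to put those rules of thumb into something runnable: the ~1.25x headroom factor below is my own reading of the "~250W limit for ~200W draw" example, and the 1.27V ceiling math is from the post above. None of this is from an official tool, it's a sketch only.

```python
# Sketch of the rules of thumb above. The ~1.25x headroom factor is
# inferred from the "~250W limit for ~200W max draw" example, and the
# 1.27V software ceiling is from the post; nothing here is official.

def suggested_power_limit(expected_peak_draw_w, headroom=1.25):
    """BIOS power limit needed to avoid throttling at a given real draw."""
    return expected_peak_draw_w * headroom

def base_3d_voltage(software_offset_mv, boost_cap_v=1.27):
    """Software overvolting tops out around 1.27V, so the advertised
    offset implies the card's base 3D voltage: base = cap - offset."""
    return boost_cap_v - software_offset_mv / 1000.0

print(f"{suggested_power_limit(200):.0f} W")   # 250 W, matching the example
print(f"{base_3d_voltage(60):.2f} V")          # +60mV card  -> 1.21V in 3D
print(f"{base_3d_voltage(100):.2f} V")         # +100mV card -> 1.17V in 3D
```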
 
I'm not an electrical engineer, so I'm an idiot when it comes to power consumption from the wall, how much power something uses in conjunction to the Y axis of the moon's gravitational pull on Jupiter with a storm trooper on a free return trajectory to the sun, some other gibberish.....

What I CAN tell you is this: At the clocks and voltage I'm running, with my TDP set to 330 watts, GPUz tells me I'm using like 88.4% of it. AND, if I set my power limits in the bios to less than 290 watts (it's actually a little bit higher than that), I hit a power limit perf cap using those voltages and settings. I think the testing I had done on it initially was at 1.275v, 1605 on the core and 2050 on the memory. And that was with the 80.8% ASIC quality card. The 73.3% ASIC quality card was measurably worse....which is why I went ahead and set the power limits and TDP to 330 watts, instead of just 300....to make sure I had the extra wiggle room. I like wiggle room.....

lol
Me either...but I know at the wall, you need to take into account the PSU's efficiency, which is why I used the .9 (90%) value and multiplied by it.

Anyway, I'm a betting man, and I'd bet good money that at 1.275v and ~1550MHz clocks, there is no way you are pulling ~292W from the card alone. My entire SYSTEM pulls that much, not just the GPU, with similar clocks but a hair less voltage.
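For anyone following along, the wall-meter math in question is just this. The numbers below are made up for illustration, not measurements from this thread:

```python
# Converting a wall reading to DC power delivered inside the case.
# Example values are illustrative assumptions only.

wall_draw_w = 325.0     # what a Kill A Watt style meter shows
psu_efficiency = 0.90   # ~90% for a good unit in its sweet spot

dc_power_w = wall_draw_w * psu_efficiency  # power actually delivered
print(f"System DC draw: {dc_power_w:.0f} W")  # ~292 W for the whole system

# Isolating the GPU also means subtracting the rest of the system
# (CPU, board, drives, fans), so a wall number alone can't prove
# what the card by itself is pulling.
rest_of_system_w = 120.0  # assumed; varies a lot per build
print(f"GPU estimate: {dc_power_w - rest_of_system_w:.0f} W")
```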


EDIT: Thanks Woomack. :)
 
That's a shame...because it's electrical engineering applying the laws of physics that comes up with this stuff! Last time I checked, Ohm's Law was still valid! :)

Do you have your PC running through a UPS that displays power sourced?
 
Interesting. The max I've been able to get out of my 970s was on a really cool morning with the ambient temp in the room at 19°C, the coolant a nice and chilly 22°C. That run was 1633 on the core, and I ran it 3 times. Previously, I had only been able to do 1625. I knew the cooler you kept them, the higher you could run them, but I wasn't sure where the line was where it stopped helping and started being a hindrance. This considering I've never been able to make anything above 1.275v do ANYTHING for my cards at all....it doesn't even raise the temps....

Good info

- - - Updated - - -

No...though I've thought about it. The grid out where I live, in the sticks, isn't all that reliable. I don't trust those "Kill A Watt" things either. Some guy tried to tell me the other day he runs his 4790k and two 980 Tis and only pulls like 400 watts from the wall according to his "Kill A Watt" reading. I told him to go ahead and buy a 400 watt power supply and tell me how that worked out for him. lol

- - - Updated - - -


Then what is GPUz telling me? 88% of 330 is....290 watts.
 
They are accurate enough to know that I am not pulling 292W from a GTX 970 alone at those clocks + voltages, LOL!

That guy running SLI, I would expect average use on a system like that to be around 550-600W, even though the cards are 250W each... Or maybe he didn't have SLI enabled or............. too many variables to list to even address that point at this time...
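A quick back-of-envelope budget shows why a 400W wall reading smells wrong there. Every number below is a rough assumption, just for illustration:

```python
# Rough load budget for a 4790K + 2x 980 Ti system. All values are
# ballpark assumptions, not measurements.
budget_w = {
    "2x 980 Ti (250W TDP each)": 2 * 250,
    "4790K under load": 90,
    "board / RAM / drives / fans": 50,
}
print(sum(budget_w.values()))  # ~640 W DC at full tilt; average gaming
# draw sits lower (the 550-600W ballpark), but nowhere near 400 W.
```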

Read what Woomack posted about the power limits. :)
 
I did...it doesn't explain away 330 x .88 = 290.

So, you're reading off of a thing on the wall that says your system is pulling X amount of watts from the wall, and then dividing it out in your head somehow to come up with, "this guy doesn't know what he's talking about". o_O Hmmm..... If I were a lesser man, I might be offended.
 
Lol, the system wattage is from that device, yes. The number I am multiplying by is a KNOWN efficiency value of my PSU in that wattage range.


Grab a Kill A Watt... or test with the right equipment and report back. The onus is on you, since what you are saying goes against conventional wisdom. My Kill A Watt shows strikingly similar results compared to those testing with the right tools. I certainly could be wrong, but at the same time am not convinced with the evidence presented. :)
You shouldn't feel insulted...lesser man or not. :)
 
Well...the most I can do atm is run the tests tonight and show that at 100% on the power limit slider, I'll hit a perf cap at 1605 / 2100 with the voltage at 1.275v and the power limit set to 290 at 100%. And then run the test again, same settings, with the power limit slider at 114%, or 330 watts, and it won't hit a power limit perf cap any more.

It'll also show the TDP usage in the GPUz sensors tab at 88%. The TDP in the bios is set to 330 watts. Now, outside of some magical, mystical electrical engineer, physics professor super secret formula, that = 290. lol
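(For what it's worth, that "formula" really is just a multiply; GPU-Z's TDP % is read against whatever power target the bios reports:)

```python
# GPU-Z's "TDP %" is a percentage of the BIOS power target, so the same
# absolute draw reads as a different percentage if you change the target.
bios_power_target_w = 330.0
gpuz_tdp_percent = 88.4

print(f"{bios_power_target_w * gpuz_tdp_percent / 100.0:.1f} W")
# ~291.7 W, i.e. the ~290 W figure being argued about above
```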

I'll be home from work in a couple hours and will run it all again. = )

Something just hit me....you said a known efficiency rating from your PSU. I run an EVGA 1000P2. Platinum certified. That's what...90% (.9)? More parts for the mystical formula? lol

Honestly...it doesn't matter to me what it's pulling from the wall. It matters to me what it needs in the bios in order to run properly. I don't care how many trees / fish / jackrabbits / astronauts it's killing.... lol
 
I believe you...don't bother reposting anything, as your results aren't in question so much as interpreting them. Look at what Woomack posted, then look around at reviews that show power consumption.

TPU measures at the card... look at this MSI... in Furmark, the thing pulled 213W actual when overclocked; in gaming, 168W. This is higher than reference because the card is factory overclocked. It just seems impossible to have another 125W on top of that to reach 290+.

https://www.techpowerup.com/mobile/reviews/MSI/GTX_970_Gaming/25.html

As far as where YOU read your power from: if it's not at the wall, you don't factor in the PSU's efficiency.
 
Yeah....we're not talking about a 1400MHz factory overclock here either. Most of these guys are pushing above 1600 on the core and 2100 on the memory, as I am.

Regardless...I just ran through and all the testing I did the other day seems to have evaporated, because today I'm getting a power limit perf cap at 1605 / 2050 with the power limit set at 330 watts. I have more work to do. Which just baffles me, because yesterday I ran Firestrike through at 1633 / 2176 and didn't hit the perf cap. /boggle
 
I get the clocks you are at. What I am not explaining well enough is that, AFAIK, it isn't possible to add ~135W with only .055v more than that link shows... even including a 200MHz difference in clocks. Clocks don't raise power use much, particularly when compared to voltage.
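A rough first-order sanity check of that point: CMOS dynamic power scales about linearly with clock and with the square of voltage. The ~1400MHz @ ~1.22V baseline for the TPU card below is my assumption, and leakage is ignored, so treat it as ballpark only:

```python
# First-order CMOS scaling: P ~ f * V^2 (leakage and fan power ignored).
# The ~1400 MHz @ ~1.22V baseline for the factory-OC card is an assumption.

def scale_power(p_base_w, f_base_mhz, v_base, f_new_mhz, v_new):
    return p_base_w * (f_new_mhz / f_base_mhz) * (v_new / v_base) ** 2

gaming_w, furmark_w = 168.0, 213.0   # TPU's measurements from the link

print(f"{scale_power(gaming_w, 1400, 1.22, 1605, 1.275):.0f} W")   # ~210 W
print(f"{scale_power(furmark_w, 1400, 1.22, 1605, 1.275):.0f} W")  # ~267 W,
# still short of 290+ even from the Furmark worst case
```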

Btw, 2176 on the ram is insane... did you add voltage there?
 
I know...and it's Elpida too. It REALLY likes the cold.... No extra voltage there though...not even sure how you'd do that. And trust me...if I could figure out how to raise the voltage on the memory to get it more stable on the Unigine benchmarks at those clocks, I'd do it. Unigine is hard on memory...it loves the high clocks, but it'll only run them at 2100 without starting to artifact.

And that's only on the 80% ASIC card too...same memory on the other one, Elpida, but it'll only run 2100 in FS, and around 2088 in Unigine without doin the funky chicken on the screen. It really limits my SLI runs.

- - - Updated - - -

Oh, I was talking to a few of the guys on the OCN forums. They led me to this. Looks like Maxwell pulls a LOT more power on the microsecond scale than advertised.

 
Great link!!! That, for all intents and purposes, makes me do an about face!

Here is my question... if those results are true across the board, why do so many cards (most) NOT throttle at stock speeds? Do they all have a raised TDP in the bios? With all the 970s we reviewed here, none of us (Dino and I) were close to the power limit. We didn't see a perf cap/power limit/anything in our reviews. A few of them, when overclocked with added voltage... you would play the balancing game with voltage, clocks, and hitting that TDP. It stands to reason that NVIDIA has loosened things up a bit for the AIBs: it seems that, to get around some of these microsecond breaches of the listed TDP, they have raised their wattage to, in some cases, A LOT over the 145W rating.

EDIT: So I looked up the BIOS on the GTX 970 Extreme and found that its default TDP here is 300W @ 100% and 366W @ 122%!!! My apologies for not believing that Giga was 260. I surely believe it now, LOL!
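(The slider percentage is just a straight multiplier on the default target, which is how 122% lands on 366W:)

```python
# The power-limit slider multiplies the default BIOS power target.
default_target_w = 300.0

for slider_pct in (100, 122):
    print(f"{slider_pct}% -> {default_target_w * slider_pct / 100:.0f} W")
# 100% -> 300 W
# 122% -> 366 W  (the max slider setting on that Extreme BIOS)
```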
 
The TDP table is different than the power limit table. The TDP is the top table, and the power limit table is the 3rd from the bottom. At least, on the GM204 bios. The GM200 bios may be a tad different. I don't mess with them....

The PWR perf cap reason would kick in on the ACX 2.0 cards at just a little above the stock clocks, because the power limit in the power table was only set to 187 watts at 110%. The ACX 2.0+ cards are set to 230 watts in the power limit table, iirc, and can overclock a bit more without hitting them. The G1 and a few others are set higher yet. They still need more on the power limit table to keep them from hitting the PWR perf cap at higher clocks though. Usually around 1550 or so is where you need to start watching it really closely. On the ACX 2.0 cards, it was only about 1490-1500. At least on mine, that was the case. There's a guy right now in the 970 group on OCN that's battling with the power limit issue, and he can only get up to about 1480 before hitting it at the stock bios levels, and he has the SSC ACX 2.0+....his ASIC quality is VERY poor though. I think it's like 61%...so it's chewing through voltage pretty fast.

I finally got my power limits sorted out again. I gave up trying to keep them as low as possible after this latest run-in with them, so I set the max in the power limit table to 379, pulling 150 from each 6 pin and 82 from the PCIe slot. Cleared it up again....at least for now. We'll see how that pans out in the days to come. I'll mess around with it more over the long weekend.

Still confused as to how a day ago, I get that awesome run in Firestrike, and with the exact same bios today, I get a power limit at clocks that are every day stable (1605 / 2100). I'm baffled on that.....

I think NVIDIA wanted so badly for Maxwell to be "low power" that they hampered it with artificially low power limits. I mean, at stock clocks they run perfectly fine, and even competitive, but.....if they hadn't been so sold on "low power", this idea that the 390 is equal to, and in most cases superior to, the 970 wouldn't even be a conversation that came to people's minds. /shrug

Those same restrictions were the reason for the EVGA revisions...the ACX 2.0+ happened because of it. That's my thoughts anyway.

Proud of this one....it took a lot of playing to get that done. = )

[Attached screenshot: nn0BlSN.jpg]
 
OK Vellinious...that article TOTALLY satisfied the geek in me!

The average power draw is not 300 W+, the instantaneous power draw is. That makes sense. Apologies.

I would not have expected the graphics card to draw such large current spikes off the power supply rails. This is generally a bad design philosophy...a good design philosophy would have the card filter its own power draw spikes to avoid putting that kind of chaos onto the power supply (similar to the way modern power supplies filter their own current spikes against the AC line). This chaos translates into excess EMI noise that will be radiated through the 12 V power supply connector cables.

It is possible that even though you are cooling the card well, you are also getting temperature increases on the microsecond scale as well. It all depends on how NVIDIA is measuring the temperature (multiple ways of doing this.) Is there an adjustment in the bios to increase for temperature spikes like the power spikes (i.e. a higher "spike" temperature)? If there is not, your only other options would be to:

1. Reseat the heat sink on the GPU
2. Add more thermal mass to the GPU heat sink (i.e. a bigger hunk of copper)
 

My card peaked at 32°C at 1.275v, on that 1633 / 2176 run. I don't think temps are an issue. For air cooling, though, I could see the microsecond jumps in temp causing a lot of instability with higher overclocks. Maybe....maybe that's why Maxwell seems to run SOOO much better the cooler you keep it? I mean, all electronics are that way, but it seems that Maxwell has taken that and exaggerated it to ridiculous proportions....
 

Yeah - the temperature you are measuring is the average temperature. As Woomack stated above, the cards like the cold for heavy overclocks.

The power spikes will cause temperature spikes...they go hand in hand. You would have to go to elaborate means to measure the milli to microsecond temperature spikes...just as the guys did for the power draw in the link you provided.

(If you're a major geek and have an IR camera, you could spot the hot spots and apply the moving air to them! I'll have to take a picture of my 970s to see where the hot spots are.)

Your cooling will keep your average temperature low, not sure what it will do to the higher speed temperature changes. It all depends on how well the chip is laid out to get the high heat generation areas to the main package. Like on my 5820K, the core temperatures move way quicker than the die/package temperature. The die is a larger thermal mass than the core, so that makes sense.
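If you want to see why the smaller thermal mass moves quicker, a toy lumped-mass model makes it obvious. All values below are invented for illustration, not measured from any chip:

```python
# Toy first-order thermal model: dT/dt = (P - (T - T_amb)/R) / C.
# A smaller heat capacity C reacts faster to the same power step.
# All values are invented for illustration.

def temp_after(c_j_per_k, seconds=2.0, r_k_per_w=0.5, power_w=50.0,
               t_amb=25.0, dt=0.01):
    temp = t_amb
    for _ in range(int(seconds / dt)):
        temp += (power_w - (temp - t_amb) / r_k_per_w) / c_j_per_k * dt
    return temp

print(f"small mass (core): {temp_after(2.0):.1f} C")   # ~46.6 C
print(f"large mass (die):  {temp_after(20.0):.1f} C")  # ~29.5 C
# Same 2-second load step: the small mass is already near its 50 C
# steady state, while the big one has barely started to move.
```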

There are also many things within the board design that could affect stability at various temperatures. For example:

1. Capacitors on the board: the capacitance value is a function of temperature. For low ESR electrolytic capacitors (used in power sections and VRMs), the ESR is also a function of temperature. High current spikes (i.e. power) through these devices will heat them up quickly.

2. FETs on the board: the FET channel resistance is a function of temperature and of the current through the FET (there's a quick sketch of this one after the list).

3. Inductors/Coils/Chokes: these devices will have their values change when you start to hit the maximum current through them (due to magnetic field saturation). Additionally, as you run high frequency energy through these, the core material will heat up (for both ceramic and regular metal cores.) As they heat up, they become less effective at passing the magnetic energy.

There are more, but those are probably the primary driving effects.

A long-winded way of saying that there is more to pushing the envelope than just keeping the GPU and memory cool.
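To put one rough number on item 2: silicon MOSFET on-resistance typically climbs somewhere around 0.3-0.5% per °C, so conduction losses feed on themselves as the part heats up. The coefficient and current below are typical assumed values, not from any particular card:

```python
# Rough look at FET conduction loss vs temperature. The ~0.4%/C
# coefficient and the current are typical assumed values only.

def rds_on(r_at_25c_ohm, temp_c, tc_per_c=0.004):
    """Linear approximation of on-resistance rise with temperature."""
    return r_at_25c_ohm * (1 + tc_per_c * (temp_c - 25.0))

def conduction_loss_w(current_a, r_at_25c_ohm, temp_c):
    return current_a ** 2 * rds_on(r_at_25c_ohm, temp_c)

for t in (25, 75, 125):
    print(f"{t} C: {conduction_loss_w(30.0, 0.005, t):.2f} W per FET")
# 25 C: 4.50 W, 75 C: 5.40 W, 125 C: 6.30 W. Hotter FETs waste more
# power, which heats them further; extra airflow breaks that loop.
```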

You could try experimenting with placing a fan or two blowing directly at different areas of the graphics card and see if it improves stability...the extra air flow will lower temperatures in the areas I mentioned above.
 

lol, I did that already...I mounted 2 140mm fans right above the cards. My motherboard is mounted horizontally, so with the fans in the top of the case blowing air down onto the cards, it helped keep the memory cooler. Thus the 2176 memory clock on the Firestrike runs. It didn't help the core clocks though.

I do need to get a laser thermometer though...I'd like to see how well the water block is doing on the VRMs. That's always worried me a little bit: a 4+2 power phase with 1.275v going through it. I don't think it's that big a deal, but..... eh. I've thought about getting the really good thermal pads for placement between the water block and the memory / VRM, but I'm just not sure if it'd help all that much. /shrug

I may try to hang a fan down and blow it on the back of the PCB....see if it helps a bit. I'd just love to see 1650 on the core for a firestrike run or 3. = P
 