Does Furmark really kill video cards?


SMOKEU

Member
Joined
Nov 7, 2010
Location
NZ
I've heard that running Kombustor with the "Xtreme burn-in" box ticked, or Furmark may cause permanent damage to video cards by drawing more power than the card was designed to handle. Is this really true or is it just scaremongering? I don't see how you can brick a card just by benching it unless the volts or temps are excessive.
 
I've heard that running Kombustor with the "Xtreme burn-in" box ticked, or Furmark may cause permanent damage to video cards by drawing more power than the card was designed to handle.

That's FUD. That's like saying that Prime95's gonna damage a stock CPU!

That cannot be true unless you're OC'ing and overvolting. Or crappy heatsink..... Or crappy TIM.
 
If it's not true, why do both AMD and Nvidia have measures built into their cards to throttle those applications? ;)
 
That's true, Nvidia and AMD do throttle benching programs now.

They do it by voltage and speed.
 
Power draw kills 570s and 590s.
Furmark kills cards, it's a proven fact.

Unlike a CPU, a GPU is not designed to be 100% utilized. The thing to keep in mind is that a game that shows 100% utilization is not the same as a math problem that actually fits into the couple of KB of L2 cache in each shader; most of that 100% utilization in games is spent waiting for data to show up.
That's what makes Furmark (and OCCT GPU) so nasty: the cores aren't waiting for data, they're all doing something.

It's much like how Linpack/IBT gives higher temps than Prime does; Linpack is designed to minimize wait states.

In any case, it is very much not FUD, nor is it entirely heat related, except in that a saturated mosfet or trace generates far more heat than that same mosfet loaded just 0.5% less. We're talking a difference along the lines of the mosfet keeping 5w versus the mosfet keeping 25w here. Very large.
That's what happens to gtx570 and gtx590 mosfets: the core(s) demand more current than the mosfets can cough up, and the extra demand turns into massive, massive heat in the mosfet, much more than can be removed via a standard heatsink. That, in turn, equals BOOM!
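To put rough numbers on that 5w vs 25w claim, here's a quick back-of-the-envelope Python sketch. The voltage, currents, and efficiency figures are assumed purely for illustration (they aren't specs for any real part), but they show how a small step past the rating turns into a huge jump in waste heat:

# Back-of-the-envelope check of the "5w vs 25w" idea above.
# All voltages, currents and efficiencies are assumed for illustration only.
volts = 1.0
cases = [
    ("just inside its rating", 100, 0.95),   # ~95% efficient under rated load
    ("just past its rating",   105, 0.76),   # efficiency collapses once saturated
]
for label, amps, eff in cases:
    through = volts * amps                   # watts flowing through the mosfet
    wasted = through * (1.0 - eff)           # watts the mosfet itself must shed
    print(f"{label}: {through:.0f} W through it, ~{wasted:.0f} W turned into heat")

A roughly 5% bump in load, but the heat the part itself has to get rid of jumps from about 5 W to about 25 W.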
 
Power draw kills 570s and 590s.
Furmark kills cards, it's a proven fact.

So it's heat and power draw? :p

I see it as much like the discussion of voltage killing CPUs (in this case voltage = Furmark for the analogy). It isn't the voltage that kills the CPU, it's the heat generated in the small pockets/components that are not being directly cooled; in other words, the effects caused by the voltage rather than the voltage itself. Much like you say with the 570/590, the mosfets/VRMs do not support that kind of wattage and eventually snap/melt/whatever you want to call it because of it.

I don't mean to get into the voltage discussion, but typically voltage can cause problems only when dealing with voltages in the kilovolt-plus range, where vdrop is a serious issue that can cause electromigration. I.e. 5000 V to 1000 V or 1000 V to 500 V will cause electromigration. Anything in the 1-5 V range is not going to harm your CPU. Hence why 7 GHz+ is achievable on LN2 or liquid helium.


Good info, thanks!
 
I'm upset that Nvidia don't just turn the default clock down to what Nvidia plans to throttle it to anyway. I think Nvidia knows they should be using lower clocks, but just turns them up to make gamers happy.

Then Furmark can't F it up.

I wish Nvidia would just sell them with lower clocks. :mad: :bang head
Then the problem's solved.
 
I'm upset that Nvidia don't just turn the default clock down to what Nvidia plans to throttle it to anyway. I think Nvidia knows they should be using lower clocks, but just turns them up to make gamers happy.

Then Furmark can't F it up.

I wish Nvidia would just sell them with lower clocks. :mad: :bang head
Then the problem's solved.

So you want less performance?!
 
I'm upset that Nvidia don't just turn the default clock down to what Nvidia plans to throttle it to anyway. I think Nvidia knows they should be using lower clocks, but just turns them up to make gamers happy.

Then Furmark can't F it up.

I wish Nvidia would just sell them with lower clocks. :mad: :bang head
Then the problem's solved.

ATI also throttles furmark, and has been doing so for longer than Nvidia. You make it seem like a one colored issue here.

Anyway, as long as the cards aren't throttling during actual use, and they aren't, it's really a non-issue.
 
ATI also throttles furmark, and has been doing so for longer than Nvidia. You make it seem like a one colored issue here.

Anyway, as long as the cards aren't throttling during actual use, and they aren't, it's really a non-issue.

Yeah, that!

If I recall correctly, ATI started throttling Furmark around the time of the 48xx cards, largely for the same reason Nvidia is doing it now: Furmark's obscene power consumption, way beyond anything a real 3d load pulls, is a safety hazard.

On temp vs amps:
First, a disclaimer: the specific numbers in this example are made up on the spot.
The effect and situation described are real.

Technically, it's always heat that causes the death. Always.
The thing is, if you exceed the maximum amount of amperage that a mosfet can deal with, it suddenly goes from something like 99% efficient (only spewing out 1% of the power going through it as heat) to 90%, 80%, 50%, or less.
So this poor little mosfet that is designed to switch 50 amps at 1.3 volts (65w going through it) is used to having to get rid of maybe .65w of heat. Not really that much, but enough that a tiny component should probably have a heatsink.

Now let's say that the absolute maximum amperage that this mosfet can sustain is 55a; that gives us a 10% safety margin, not bad at all.
Now some user finds the voltage slider and the OC slider and cranks it up to 1.4v, and due to that and the OCing this poor unfortunate mosfet is now switching 60a at 1.4v. There's 84w going through it now, but rather than putting out 1% of that (.84w, still not that much), efficiency has fallen through the floor and now it's putting out 30% as heat. That is 25 watts.
The component can't even get 25w to its surface without overheating, let alone through the TIM and into the heatsink.
It quickly goes past its ~125°C rated maximum temp; the hotter it gets, the worse the efficiency gets. It's a positive feedback loop.
Like all positive feedback loops, it ends with destruction. Eventually the mosfet loses it internally, and odds are it starts arcing inside.
So it goes POP! And then most likely it gives your GPU core 12v to think about.

Alternatively, your card may be designed such that in 3d loads, even nasty hardcore 3d loads, it's at 50a of the 55a allowable, but Furmark's math load puts it up to 56a. Pop!
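Continuing the made-up numbers from above, here's a quick Python sketch of that positive feedback loop. The efficiency curve, thermal resistance, and ambient temperature are all invented for illustration, just like the example's own figures; the point is the shape of the runaway, not the exact values.

def efficiency(amps, temp_c, max_amps=55.0):
    # Toy curve: ~99% efficient inside the rated envelope, collapsing once the
    # current exceeds the rating and degrading further as the part overheats.
    eff = 0.99
    if amps > max_amps:
        eff -= 0.05 * (amps - max_amps)
    if temp_c > 125.0:                      # the ~125°C rating from the example
        eff -= 0.004 * (temp_c - 125.0)
    return max(eff, 0.50)

def simulate(volts, amps, ambient=45.0, c_per_watt=4.0, steps=12):
    temp = ambient
    for step in range(steps):
        dissipated = volts * amps * (1.0 - efficiency(amps, temp))
        new_temp = ambient + dissipated * c_per_watt
        print(f"  step {step}: shedding {dissipated:5.1f} W as heat, ~{new_temp:5.1f} °C")
        if new_temp > 175.0:
            print("  ...and somewhere around here the magic smoke gets out")
            break
        if abs(new_temp - temp) < 0.5:
            print("  ...steady state, the cooling can cope")
            break
        temp = new_temp

print("In spec, 50a at 1.3v:")
simulate(1.3, 50)
print("Overclocked and overvolted, 60a at 1.4v:")
simulate(1.4, 60)

The in-spec case settles at well under a watt of waste heat; the overloaded one spirals from roughly 22 W up past the rated temperature and to destruction in a handful of steps.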
 
Mosfets are very simple, they're either on or off, that's how they adjust voltage, and when they are under-designed they just blow and you always get some smoke. :shock:

Video cards die with Furmark from the GPU chips overheating with voltage and load. :burn:
 
They aren't always just on or off; spend some time reading up on mosfet design.
 
No offense guys, but based on my own experience it is... I had my HD 4650 running for 4 years before I decided to throw it out of my rig. One day I got curious about Furmark and ran it for more or less an hour. The next day I ran it again and had no issues so far, but on the third day of doing this, the next time I turned on my rig the system froze. I suspected my RAM, but when I removed the HD 4650 my rig was back to normal, so the GPU is broken (I think the memory on the HD 4650 is fried!). Correct me if I'm mistaken! :rain:
 
I don't know about ATI, but Kepler's power capping should make workarounds (throttling for specific applications) unnecessary, since the GPU now monitors its own current draw and slows itself down if it goes above the design limit. I believe Intel is doing the same thing now, too.

So if a game is as demanding as Furmark (extremely unlikely), it will be throttled, too. It just doesn't make sense for NVIDIA/ATI (and partners) to spend $20 more on the heatsink of a $50 card just so it can run Furmark at full speed, when no "real world" application can load a card like that.
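The capping logic is conceptually simple. Here's a rough Python sketch of the kind of loop involved; the power limit, the clock bins, and the function name are all invented for illustration, since the real thing lives in the card's firmware and driver:

# Invented sketch of a board-power cap: sample the draw each tick, drop a clock
# bin when it's over the limit, climb back up when there's headroom.
# The limit, the clock steps, and the sample values are all made up.
POWER_LIMIT_W = 195
CLOCK_BINS_MHZ = [1006, 967, 928, 889, 850]

def power_cap_tick(measured_watts, bin_idx):
    if measured_watts > POWER_LIMIT_W and bin_idx < len(CLOCK_BINS_MHZ) - 1:
        return bin_idx + 1      # over the cap: slow down one step
    if measured_watts < POWER_LIMIT_W * 0.9 and bin_idx > 0:
        return bin_idx - 1      # plenty of headroom: speed back up
    return bin_idx

# A Furmark-like load that would pull ~240 W at full clocks gets walked down:
bin_idx = 0
for sample in [240, 231, 222, 213, 204, 196, 188]:
    bin_idx = power_cap_tick(sample, bin_idx)
    print(f"measured {sample} W -> running at {CLOCK_BINS_MHZ[bin_idx]} MHz")

The upshot is the same as described above: the cap doesn't care whether the load is Furmark or a game; anything over the limit gets slowed, and everything else runs at full speed.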
 