  1. #1
    Member
    Join Date
    Nov 2010
    Location
    NZ

    Does Furmark really kill video cards?

    I've heard that running Kombustor with the "Xtreme burn-in" box ticked, or Furmark, may cause permanent damage to video cards by drawing more power than the card was designed to handle. Is this really true, or is it just scaremongering? I don't see how you can brick a card just by benching it unless the volts or temps are excessive.
    MY RIG:
    HP 8200 Elite USDT
    i5 2400S
    4GB DDR3 SODIMM
    Intel HD2000 graphics
    135W external PSU
    160GB 2.5" HDD
    Debian 7 KDE

  2. #2
    Member
    10 Year Badge
    Join Date
    May 2004
    Location
    USA (Springfield, Vermont)
    Quote Originally Posted by SMOKEU View Post
    I've heard that running Kombustor with the "Xtreme burn-in" box ticked, or Furmark, may cause permanent damage to video cards by drawing more power than the card was designed to handle.
    That's FUD. That's like saying that Prime95's gonna damage a stock CPU!

    That cannot be true unless you're OC'ing and overvolting. Or a crappy heatsink... or crappy TIM.
    Asus Maximus II Gene- Core 2 Quad Q6600 SLACR @ 3.3 Ghz (367x9.0)

    Where I come from "Z97" is a radio station
    -ICH10R
    - eVGA GeForce GT 640-Antec VP-450


    " holy cow!! you find a rat in there too!?!?!? " -turbohans
    "Reinstall winders." -jivetrky
    "I think I am going to need another coke before I start this up." -cadman420
    "Soon Windows will be 50 gb! lololol" -Tokae
    "NOT FOR SALE IN CALIFORNIA."

  3. #3
    Member BenF's Avatar
    Join Date
    Feb 2005
    Location
    Hell, Michigan
    I remember reading about problems with certain Nvidia cards and Furmark. I don't recall exactly, but I believe Nvidia cards now detect if Furmark or OCCT is running and throttle themselves to protect the card. I know there are a couple of threads about this; I dug up two real quick that might be worth reading if you want to know more.

    http://www.overclockers.com/forums/s...d.php?t=659870
    http://www.overclockers.com/forums/s...d.php?t=660327
    Desktop:
    -Gigabyte z68x-ud3h-b3 - 2500k @4.5 - XFX 7970 @ 1125/1575 - 8Gb ddr3 1333 - 3x samsung spinpoint f1 1tb - intel g2 ssd-
    Laptops:
    Fujitsu Lifebook T Series
    -M540 - 4gb ram - 300gb hdd
    Lenovo T61 (out of commission)
    -t7300-Nvidia nvs130-2gb ram-100gb hdd-14" widescreen
    "Education is an admirable thing, but it is well to remember from time to time that nothing that is worth knowing can be taught." -Oscar Wilde

  4. #4
    Member
    Join Date
    May 2011
    Furmark doesn't kill cards. Heat does.

  5. #5
    Vacationing to find my sanity Mutterator
    Overclockers.com Editor
    First Responders

    EarthDog's Avatar
    Join Date
    Dec 2008
    Location
    Stuck in Maryland...
    Author Profile Benching Profile Folding Profile Heatware Profile
    If it's not true, why do both AMD and Nvidia have measures built into their cards to throttle those applications?

    "We have more information and more ways of accessing it than ever, yet seem increasingly less inclined to do so."- Michael Wilbon

  6. #6
    That's true, Nvidia and AMD do throttle benching programs now.

    They do it by cutting voltage and clock speed.
    i5 2500K
    Motherboard Gigabyte Z68A-D3-B3
    G.SKILL RipjawsX X.M.P. 1600MHz
    EVGA SuperClocked GTX 570

  7. #7
    Member
    Join Date
    May 2011
    Quote Originally Posted by EarthDog View Post
    If it's not true, why do both AMD and Nvidia have measures built into their cards to throttle those applications?
    Because heat kills cards.

  8. #8
    Senior Member


    Bobnova's Avatar
    Join Date
    May 2009
    Author Profile Benching Profile Folding Profile Heatware Profile Rosetta Profile
    Power draw kills 570s and 590s.
    Furmark kills cards, it's a proven fact.

    Unlike a CPU, a GPU is not designed to be 100% utilized. The thing to keep in mind is that a game showing 100% utilization is not the same as a math problem that actually fits into the couple KB of L2 cache in each shader; most of that 100% utilization in games is spent waiting for data to show up.
    That's what makes Furmark (and OCCT GPU) so nasty: the cores aren't waiting for data, they're all doing something.

    It's much like how Linpack/IBT gives higher temps than Prime95 does; Linpack is designed to minimize wait states.

    In any case, it is very much not FUD, nor is it entirely heat related, except in that a saturated mosfet or trace generates far more heat than that same mosfet 0.5% less loaded. We're talking a difference along the lines of the mosfet keeping 5 W versus the mosfet keeping 25 W here. Very large.
    That's what happens to GTX 570 and GTX 590 mosfets: the core(s) demand more current than the mosfets can cough up, and the extra demand turns into massive, massive heat in the mosfet, much more than can be removed via a standard heatsink. That, in turn, equals BOOM!
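
    The arithmetic behind that 5 W versus 25 W difference, as a minimal sketch (the 100 W rail and the efficiency figures are invented for illustration, not taken from any datasheet):

    ```python
    # Heat a VRM mosfet itself has to shed, at two efficiency points.
    # Illustrative numbers only - not from any real card or datasheet.

    def mosfet_heat_w(volts: float, amps: float, efficiency: float) -> float:
        """Watts the mosfet keeps as heat while switching the given load."""
        load_w = volts * amps
        return load_w * (1.0 - efficiency)

    # Healthy operation: ~95% efficient on a hypothetical 100 W rail
    print(mosfet_heat_w(1.0, 100.0, 0.95))  # 5.0 W - a small sink copes

    # Saturated: efficiency collapses once the current limit is exceeded
    print(mosfet_heat_w(1.0, 100.0, 0.75))  # 25.0 W - far beyond the package
    ```
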
    "Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe." -- Einstein (maybe)

    Thinking about an Asus motherboard? Think again.

    How to check your PSU with a multimeter.

    17bXw5t51rEBXGavJFMJsC8g7HQgThUGc7

  9. Thanks!

    BenF (05-26-11), Domino (05-26-11), EarthDog (05-26-11), ZapTap (07-26-14)

  10. #9
    Member
    Join Date
    May 2011
    Quote Originally Posted by Bobnova View Post
    Power draw kills 570s and 590s.
    Furmark kills cards, it's a proven fact.
    So it's heat and power draw?

    I see it as much like the discussion of voltage killing CPUs (in this analogy, voltage = Furmark). It isn't the voltage that kills the CPU, it's the heat generated in the small pockets of components that are not being directly cooled; in other words, the effects caused by the voltage rather than the voltage itself. Much like you say with the 570/590, the mosfets/VRMs do not support that kind of wattage and eventually snap/melt/whatever you want to call it because of it.

    I don't mean to get into the voltage discussion, but typically voltage can cause problems only when dealing with voltages in the kilovolt+ range, where voltage drop is a serious issue that can cause electromigration, i.e. 5000 V to 1000 V or 1000 V to 500 V. Anything in the 1-5 V range is not going to harm your CPU. Hence why 7 GHz+ is achievable on LN2 or liquid helium.


    Good info, thanks!

  11. #10
    Member
    10 Year Badge
    Join Date
    May 2004
    Location
    USA (Springfield, Vermont)
    I'm upset that Nvidia doesn't just turn the default clock down to what they plan to throttle it to anyway. I think Nvidia knows they should be using lower clocks, but turns them up to make gamers happy.

    Then Furmark can't F it up.

    I wish Nvidia would just sell them with lower clocks.
    Then the problem's solved.
    Asus Maximus II Gene- Core 2 Quad Q6600 SLACR @ 3.3 Ghz (367x9.0)

    Where I come from "Z97" is a radio station
    -ICH10R
    - eVGA GeForce GT 640-Antec VP-450


    " holy cow!! you find a rat in there too!?!?!? " -turbohans
    "Reinstall winders." -jivetrky
    "I think I am going to need another coke before I start this up." -cadman420
    "Soon Windows will be 50 gb! lololol" -Tokae
    "NOT FOR SALE IN CALIFORNIA."

  12. #11
    Member
    Join Date
    Nov 2010
    Location
    NZ
    Quote Originally Posted by RJARRRPCGP View Post
    I'm upset that Nvidia doesn't just turn the default clock down to what they plan to throttle it to anyway. I think Nvidia knows they should be using lower clocks, but turns them up to make gamers happy.

    Then Furmark can't F it up.

    I wish Nvidia would just sell them with lower clocks.
    Then the problem's solved.
    So you want less performance?!
    MY RIG:
    HP 8200 Elite USDT
    i5 2400S
    4GB DDR3 SODIMM
    Intel HD2000 graphics
    135W external PSU
    160GB 2.5" HDD
    Debian 7 KDE

  13. #12
    Member ratbuddy's Avatar
    Join Date
    Aug 2007
    Location
    Hartford, CT
    Heatware Profile
    Quote Originally Posted by RJARRRPCGP View Post
    I'm upset that Nvidia doesn't just turn the default clock down to what they plan to throttle it to anyway. I think Nvidia knows they should be using lower clocks, but turns them up to make gamers happy.

    Then Furmark can't F it up.

    I wish Nvidia would just sell them with lower clocks.
    Then the problem's solved.
    ATI also throttles Furmark, and has been doing so for longer than Nvidia. You make it seem like a one-colored issue here.

    Anyway, as long as the cards aren't throttling during actual use, and they aren't, it's really a non-issue.
    HTPC - 2500k - 212+ - GA-Z68MX-UD2H-B3 - 2x4GB G.Skill DDR3-1600 - Crucial MX100 512GB, Spinpoint F3 1TB w/M4 64GB ISRT Cache
    MSI GTX 970 4GB - Silverstone LC10B-E - Corsair RM550

    -----
    Main - X3 450 - ASRock A790GMH/128M 790GX - 2x2GB G.Skill 4-4-4-12 - Crucial MX100 256GB, 2xWD Green 1TB
    Gigabyte GTX 460 1GB - Silverstone TJ08 - Corsair CX400W

    Nothin' up my sleeve..

  14. Thanks!

    Bobnova (06-03-11), EarthDog (05-26-11)

  15. #13
    Senior Member


    Bobnova's Avatar
    Join Date
    May 2009
    Author Profile Benching Profile Folding Profile Heatware Profile Rosetta Profile
    Quote Originally Posted by ratbuddy View Post
    ATI also throttles Furmark, and has been doing so for longer than Nvidia. You make it seem like a one-colored issue here.

    Anyway, as long as the cards aren't throttling during actual use, and they aren't, it's really a non-issue.
    Yeah, that!

    If I recall correctly, ATI started throttling Furmark around the time of the 48xx cards, largely for the same reason Nvidia is doing it now: Furmark's obscene power consumption is a safety hazard.





    On temp vs amps:
    First a disclaimer, the specific numbers in this example are made up on the spot.
    The effect and situation described is real.

    Technically, it's always heat that causes the death. Always.
    The thing is, if you exceed the maximum amperage a mosfet can deal with, it suddenly goes from something like 99% efficient (only spewing out 1% of the power going through it as heat) to 90%, 80%, 50%, or less.
    So this poor little mosfet that is designed to switch 50 amps at 1.3 volts (65 W going through it) is used to having to get rid of maybe 0.65 W of heat. Not really that much, but enough that a tiny component should probably have a heatsink.

    Now let's say that the absolute maximum amperage this mosfet can sustain is 55 A; that gives us a 10% safety margin, not bad at all.
    Now some user finds the voltage slider and the OC slider and cranks it up to 1.4 V, and due to that and the OCing, this poor unfortunate mosfet is now switching 60 A at 1.4 V. There's 84 W going through it now, but rather than putting out 1% of that as heat (0.84 W, still not that much), efficiency has fallen through the floor and now it's putting out 30%. That is 25 watts.
    The component can't even move 25 W to its own surface without overheating, let alone push it through the TIM and into the heatsink.
    It quickly goes past its ~125°C rated maximum temp, and the hotter it gets, the worse the efficiency gets; it's a positive feedback loop.
    Like all positive feedback loops, it ends with destruction: eventually the mosfet loses it internally, and odds are it starts arcing inside.
    So it goes POP! And then, most likely, it gives your GPU core 12 V to think about.

    Alternatively, your card may be designed such that in 3D loads, even nasty hardcore 3D loads, it's at 50 A of the 55 A allowable, but Furmark's math load puts it up to 56 A. Pop!
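
    To see why that positive feedback loop ends in a pop, here's a toy simulation using the invented numbers above plus a made-up temperature/efficiency coefficient (nothing here comes from a real part):

    ```python
    # Toy model of the thermal runaway described above: past the current
    # limit the mosfet's efficiency collapses, the waste heat raises its
    # temperature, and higher temperature degrades efficiency further.
    # The 55 A cliff matches the example; the coefficients are made up.

    def wasted_watts(amps: float, volts: float, temp_c: float) -> float:
        load_w = amps * volts
        base_loss = 0.01 if amps <= 55.0 else 0.30        # efficiency cliff past 55 A
        temp_penalty = max(0.0, (temp_c - 25.0) * 0.002)  # hotter = lossier
        return load_w * (base_loss + temp_penalty)

    temp_c = 25.0
    for step in range(12):
        heat_w = wasted_watts(60.0, 1.4, temp_c)  # 60 A at 1.4 V, as above
        temp_c += heat_w * 0.5 - 2.0              # crude heating minus what the sink removes
        print(f"step {step}: {heat_w:.1f} W wasted, mosfet at {temp_c:.0f} C")
        if temp_c > 125.0:
            print("past the ~125 C rated max -> POP")
            break
    ```
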
    "Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe." -- Einstein (maybe)

    Thinking about an Asus motherboard? Think again.

    How to check your PSU with a multimeter.

    17bXw5t51rEBXGavJFMJsC8g7HQgThUGc7

  16. #14
    Mosfets are very simple: they're either on or off, and that's how they adjust voltage. When they're underdesigned they just blow, and you always get some smoke.

    Video cards die with Furmark from the GPU chips overheating under voltage and load.
    i5 2500K
    Motherboard Gigabyte Z68A-D3-B3
    G.SKILL RipjawsX X.M.P. 1600MHz
    EVGA SuperClocked GTX 570

  17. #15
    Senior Member


    Bobnova's Avatar
    Join Date
    May 2009
    Author Profile Benching Profile Folding Profile Heatware Profile Rosetta Profile
    They aren't always just on or off; spend some time reading up on mosfet design.
    "Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe." -- Einstein (maybe)

    Thinking about an Asus motherboard? Think again.

    How to check your PSU with a multimeter.

    17bXw5t51rEBXGavJFMJsC8g7HQgThUGc7

  18. #16
    Glorious Leader I.M.O.G.'s Avatar
    10 Year Badge
    Join Date
    Nov 2002
    Location
    Rootstown, OH
    Author Profile Benching Profile Folding Profile Heatware Profile
    Nice explanation Bob, thanks.

  19. #17
    Mobo Cooking Member
    Join Date
    Jan 2008
    Ditto Bobnova, that helps a lot.
    Q6600 @ 3.6 Ghz 1.45V TRUE
    2x2 GB @ 1066 Mhz GSkill 5-5-5-15 @ 1.9 V
    Radeon 6870
    Maximus Formula II

    Q6600 @ 3.2 Ghz 1.35V AC freezer pro 7
    4 GB Patriot 800 Mhz DDR2 5-5-5-16 1.92V
    ATI x1950 256MB
    D975xbx2
    My heatware

  20. #18
    New Member
    Join Date
    Jun 2012


    No offense, guys! But based on my own experience, it is true. My HD 4650 ran for 4 years before I decided to pull it from my rig. One day I got curious about Furmark and ran it for more or less an hour. The next day I ran it again and had no issue, but after the third day of doing this, the next time I turned on my rig the system froze. I suspected my RAM, but when I removed the HD 4650 my rig went back to normal, so my GPU was broken (I think the memory on the HD 4650 was fried!). Correct me if I'm mistaken!

  21. #19
    Member
    Join Date
    May 2008
    Location
    Vancouver, BC
    Heatware Profile
    I don't know about ATI, but Kepler's power capping should make per-application workarounds (throttling for specific applications) unnecessary, since the GPU now monitors its own current draw and slows itself down if it goes above the design limit. I believe Intel is doing the same thing now, too.

    So if a game is ever as demanding as Furmark (extremely unlikely), it will be throttled too. It just doesn't make sense for NVIDIA/ATI (and their partners) to spend $20 more on the heatsink of a $50 card just so it can run Furmark at full speed, when no "real world" application can load a card like that.
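
    The capping idea boils down to a simple control loop. A hypothetical sketch of the behavior (invented limit, clocks, and samples - real cards do this in firmware/hardware, not software):

    ```python
    # Toy model of hardware power capping: the card samples its own
    # current draw and backs the clock off whenever it exceeds the
    # design limit. All values and the interface are invented.

    LIMIT_AMPS = 55.0
    STOCK_MHZ = 800.0

    def next_clock(draw_amps: float, clock_mhz: float) -> float:
        if draw_amps > LIMIT_AMPS:
            return clock_mhz * 0.95               # over the cap: throttle 5%
        return min(clock_mhz * 1.01, STOCK_MHZ)   # under it: creep back to stock

    clock = STOCK_MHZ
    for amps in (40, 50, 58, 60, 57, 52, 45):     # sampled current draw
        clock = next_clock(amps, clock)
        print(f"{amps} A -> {clock:.0f} MHz")
    ```
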
    Heatware

    Disclaimer: I was an NVIDIA employee, so whenever I post about GPUs, I am probably biased.

    System:
    CPU - Intel i5 2500K @ 4.7GHz, cooled by Hyper 212+
    Mobo - MSI Z68A-G43 (G3)
    RAM - 2*4GB, DDR3-1600
    Video cards - 2x GTX 560 Ti SLI
    Storage - 128GB Crucial C300 SSD
    PSU - 600W Corsair builder series

  22. #20
    Disabled
    Join Date
    Jan 2009
    Location
    Clearwater FL
    Folding Profile
    Hmm, really? I'd never even tried it out till a few days ago, but it ran fine here. I'll dump it, I guess.

