Let's set up a theoretical Bitcoin mining rig, which will also be used as a semi-primary gaming computer. Said theoretical machine does have a little relevance to me personally, but mostly I'm just curious, so on to the build. (For those who don't know what Bitcoin mining is, think Folding@Home-type stress on your GPU.) We all know that heat creates the perfect environment for chip degradation, but does that still hold when the chip is operating within its design limits?
Intel Core i7 2600K
4GB DDR3 2133 RAM
1200 W Corsair PSU
Radeon HD 5970 @ 735/1010, stock voltage, stock cooler
[Crossfire]
Radeon HD 5970 @ 735/1010, stock voltage, stock cooler
Now we have our system above. We do not overclock the GPUs. This load, without user-defined or manual fan speeds, puts you close to 85C in a 25C environment. These Cypress chips are known for their heat output, so this is perfectly normal unless you're on water, but in this example we are sticking with the stock cooler and bumping the fan up to 60% to get around 66-70C max temps under this stressful load.
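Since the whole point is running those fans at a fixed speed 24/7, here's a minimal watchdog sketch of how I'd script that on a Linux box with the Catalyst-era aticonfig tool. Everything in it is an assumption to verify against your own driver: the --odgt output format, the pplib fan-speed command, the adapter numbering (each 5970 is a dual-GPU card, so Crossfired 5970s typically show up as four adapters), and the poclbm process name, which is just one example miner of the period.

[code]
# Hypothetical temp/fan watchdog for a Catalyst-era Linux mining rig.
# Assumes the old `aticonfig` CLI; flags and output format are from the
# 2011 drivers and should be verified before use.
import re
import subprocess
import time

FAN_PERCENT = 60     # the manual fan speed discussed above
TEMP_LIMIT = 80.0    # panic threshold, a bit above the 66-70C target

def set_fan(adapter: int, percent: int) -> None:
    # pplib fan control, the incantation miners used at the time (assumed)
    subprocess.run(
        ["aticonfig", "--pplib-cmd", f"set fanspeed {adapter} {percent}"],
        check=True,
    )

def read_temps() -> list[float]:
    # `--odgt` printed lines like "Sensor 0: Temperature - 73.50 C"
    out = subprocess.run(
        ["aticonfig", "--adapter=all", "--odgt"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(t) for t in re.findall(r"Temperature - ([\d.]+)", out)]

if __name__ == "__main__":
    for adapter in range(4):  # two dual-GPU 5970s = four adapters (assumed)
        set_fan(adapter, FAN_PERCENT)
    while True:
        temps = read_temps()
        print("GPU temps:", temps)
        if any(t > TEMP_LIMIT for t in temps):
            # kill the miner rather than cook the cards; "poclbm" is
            # just a placeholder for whatever miner you actually run
            subprocess.run(["pkill", "-f", "poclbm"])
        time.sleep(30)
[/code]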
With temps considered and our setup displayed in full detail here, the question I wonder about is: does this stress over a long period of time (say 1-2 years of 24/7 on-time) degrade these chips? Or rather, is there any proof or speculation about this anywhere?
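There's not much hard data out there, but one way to frame it is the Arrhenius-style acceleration model reliability engineers use: wear-out mechanisms in silicon speed up roughly as exp(-Ea/kT), so you can estimate how much harder 85C is on the chip than 70C. Strictly back-of-the-envelope: the 0.7 eV activation energy below is a generic textbook ballpark, not anything AMD publishes for Cypress.

[code]
# Back-of-the-envelope Arrhenius comparison: relative aging rate at 85C
# (auto fan) versus 70C (manual 60% fan). Ea is an assumed generic value.
import math

K_BOLTZMANN = 8.617e-5  # Boltzmann constant in eV/K
EA = 0.7                # eV, common ballpark for silicon wear-out mechanisms

def acceleration_factor(t_hot_c: float, t_cool_c: float, ea: float = EA) -> float:
    """How many times faster the chip ages at t_hot_c than at t_cool_c."""
    t_hot = t_hot_c + 273.15   # Celsius -> Kelvin
    t_cool = t_cool_c + 273.15
    return math.exp((ea / K_BOLTZMANN) * (1.0 / t_cool - 1.0 / t_hot))

print(f"{acceleration_factor(85.0, 70.0):.1f}x")  # roughly 2.7x
[/code]

By that rough model the 60% fan bump more than halves the aging rate of the silicon itself, though it says nothing about the fan bearings or VRMs, which are often what actually dies first on a 24/7 rig.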
My thoughts are:
1. It's hard to find a very good GPU that lasts several years, let alone longer.
2. It would be a shame to have invested $1000+ in video cards only to basically wear them out.
3. They are OEM cards with no warranty.
4. I do not have the money to replace these cards anytime soon.
I'm interested to see what everyone thinks about this, so feel free to leave any of your thoughts, speculation, and facts. One of the reasons I'm asking is that a lot of threads on the Bitcoin forums talk about systems dying within several months of 24/7 mining. Apparently heat was never the issue; the cards just ended up dying from being run 24/7 with 50-70% fan speeds, generally safe temperatures, and mild to no overclocking.