
GTX 970 vs 980 WUs


RipD (Member, joined Jul 8, 2004, Portland, OR)
I recently added a GTX 980 to my folder. I saw one with a rebate and got it for ~$400. Relative to adding another GTX 970, I figured the PPD/$ would be worth it. So far, not so. The issue seems to be WU assignment. In the last 20 hours or so, here's what I've seen:

970 WUs: 11402, 9441, 9161, 9658, 11703
980 WUs: 10486, 10494, 10490

Based on the WUs, my 970 is outperforming the 980. The cards are running at 1390 MHz (970) and 1316 MHz (980).

Are you guys seeing the same thing for 980 WU assignments, or is this an anomaly? If Stanford is distributing poorer-performing WUs to higher-end cards, I'll run another 970 rather than a 980.
 
Sounds like luck of the draw with that short a time span to me. I have two 970s in the same system, and they each have days where they get high- and low-PPD WUs.
 

I'll give it a few days and see what happens. My 970 averages 300K per day; some days are ~275K, some are ~325K+. So far my 980 is doing ~300K or less. If I can't average over 600K per day, I don't see any point in having a 980.

Interesting that the noise-to-points ratio for this 980 is also much higher. Apples and oranges since it's a completely different card with a different cooler, but still. My EVGA 970 is very quiet. Might have to go with a water cooler if I return this 980. Cost is higher, but I'd likely get less noise and a higher overclock.

Update: now working on a 9135 (970) and 11702 (980) with an estimate of 695K PPD. Looking better...
 
If you need help configuring the slot, just post up and we will all help. My 980 makes about 400K PPD as set up in the other thread.
 

Thanks. It seems to be doing better now with different WUs. After several low-performing WUs in a row, I was wondering if Stanford was targeting 980s. Now getting close to 400K PPD with an 11704 WU.

Tough call on PPD/$ for me with a 980. If I get over 375K per day consistently, the 980 is competitive. If not, I'm better off with another 970. Noise-wise this card is terrible. I didn't think a non-venting cooler mattered much - it does. I'd prefer one that exhausts heat.

[Attached screenshot: folding4.JPG]
 
I use an MSI Gaming 980 and never, ever hear the thing, and I run the fans at about 60%, but it's in a box by itself.
 
Not sure this helps, but they range for everyone. I have basically four 290Xs crunching, and they all get different WUs that I'd guess have different algorithms to complete them. Here's an idea of how much they vary even for the same card. Everyone loves pictures.

[Attached screenshot: Capture.JPG]

I wish I could get that one particular WU all the time, would really make me feel better about myself :clap:
 
I really wish the F@H team would standardize points across all WUs. The way it's done now leads to bad and unproductive behavior, such as people dumping lower-point WUs. It's not hard to measure CPU and GPU output: pick a metric and then multiply power by folding hours. Higher-performing processors still get rewarded, but the number and type of WUs would be irrelevant.
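
Something like this is all I mean - a minimal sketch, assuming a hypothetical one-time benchmark score per device (the names and numbers are made up for illustration):

Code:
def standardized_points(benchmark_score, hours_folded):
    # benchmark_score: one-time reference measurement for the CPU/GPU
    # (e.g., throughput on a fixed reference simulation)
    # hours_folded: wall-clock hours the device actually spent folding
    return benchmark_score * hours_folded

# Faster hardware still earns more per hour, but the WU mix no longer matters.
print(standardized_points(benchmark_score=120.0, hours_folded=24))  # -> 2880.0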
 
I think they have different computations for different WUs. I have been looking at all the WUs I've been completing and noticing there's no standardization in any of them. I mean, WU 1 downloads a 24MB file and when it's finished it uploads a 13MB file, then WU 2 downloads a 5MB file and uploads an 8MB file. Each separate project has its own set of instructions on how it wants something to be calculated; some of it is leftover junk and gets erased when it completes, and sometimes it has compiled more info and doubled in size. These were all on completed WUs that received a WORK_ACK upon completion, because the failed ones only send KBs - had some of those also.

I don't know why someone would drop a low-point WU; some of them are my best producers. I'd rather run through all the 20-40K WUs that give me the highest total output (300K+ for AMD cards) than have a single 100K WU in one shot. I like to game, and it's always inevitable that I've just started one of those monstrous WUs when I get an itch to scratch, and pausing it only hurts your score - got to let them run for best scores. It would be nice if pausing didn't hurt points for that particular WU, so long as it was completed in the time allotted.
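
For anyone wondering why pausing costs points: my understanding is that bonus-eligible WUs get a quick-return bonus that scales with how fast you return the unit, commonly described as base credit times sqrt(k * deadline / elapsed). A rough Python sketch with made-up numbers (the formula is the commonly documented one, not something pulled out of the client):

Code:
import math

def estimated_credit(base_credit, k, deadline_days, elapsed_days):
    # Quick-return bonus as commonly described for bonus-eligible F@H WUs:
    # credit = base * max(1, sqrt(k * deadline / elapsed)).
    # Pausing inflates elapsed_days, so the multiplier shrinks.
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_credit * max(1.0, bonus)

# Made-up numbers: same WU finished in 0.5 days vs. paused and finished in 1.0 day.
print(round(estimated_credit(10000, k=26.4, deadline_days=3.0, elapsed_days=0.5)))  # ~125857
print(round(estimated_credit(10000, k=26.4, deadline_days=3.0, elapsed_days=1.0)))  # ~88994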
 
I don't know why someone would drop a low point WU
Because points = bragging rights and human beings are competitive. Standardizing the points by processing power and time would also stop all the discussion about WUs. If I owned the tool, I likely would never show WUs - just points accumulated over time. There's no reason I can see for the client to show which WU is being worked on - just that work is getting done and the rate at which points are accumulating.

I was a software developer for a long time. One tenet in some s/w circles is "never let the back end show through to the UI." It generally doesn't add value and just leads to confusion and more support issues.

Edit: spelling
 
Just FYI, based on return estimates I'm averaging 295K PPD and 345K PPD on my 970 and 980 cards. The 970 is running at 1392 MHz and the 980 at 1370 MHz. I could likely get a few more percent improvement out of both. Given those numbers, the 970 is ahead on PPD/$.
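
Rough math behind that PPD/$ call - the 980 price is the ~$400 after rebate from my first post, the 970 price is just a placeholder I'm assuming for illustration:

Code:
# Rough PPD-per-dollar comparison; the 970 price is an assumed placeholder,
# the 980 price is the ~$400 after rebate mentioned above.
cards = {
    "GTX 970": {"ppd": 295_000, "price": 330},  # price assumed for illustration
    "GTX 980": {"ppd": 345_000, "price": 400},
}
for name, c in cards.items():
    print(name, round(c["ppd"] / c["price"]), "PPD per dollar")
# With these numbers: 970 ~894 PPD/$, 980 ~863 PPD/$, so the 970 stays ahead.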
 
Those sound low for a 980...depending on the work units, here is what I get with my cards:

GTX 960: 141 K to 146 K ppd
GTX 980: 364 K to 443 K ppd
GTX 980 Ti: 529 K to 589 K ppd

I have had days where the cards show lower production for the day, but then the next day a huge production...as the work unit finished just after the stroke of midnight.

Check out my "newest creation in progress" thread here...I wrote a program for Windows that will parse your log file to get true statistics for your rigs...not the random PPD number that bounces around in the FAH Client control. I break it down by day, by work unit...and give you "points per hour" that you can use to estimate a 24 hour production. You will see that you get different production rates for different types of cores.

The only issue is that if you shut down and restart F@H (like a system reboot), it blows away the log file... so just remember to make a copy of it if you want to parse the statistics.
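
If anyone wants to roll something similar by hand in the meantime, here's a rough Python sketch of the same idea. It assumes the v7 client's log format, where credit lines look roughly like "12:34:56:WU01:FS01:0x21:Final credit estimate, 85427.00 points" and the FSxx token tells you which slot (card) did the work; treat the regex as a starting point, not a spec:

Code:
import re
from collections import defaultdict

# Assumed v7 log line shape:
# "12:34:56:WU01:FS01:0x21:Final credit estimate, 85427.00 points"
# (FSxx identifies the folding slot, i.e. which card did the work).
CREDIT_RE = re.compile(r"(FS\d+).*Final credit estimate, ([\d.]+) points")

def credits_per_slot(log_path="log.txt"):
    totals = defaultdict(float)
    with open(log_path, errors="ignore") as f:
        for line in f:
            m = CREDIT_RE.search(line)
            if m:
                totals[m.group(1)] += float(m.group(2))
    return dict(totals)

for slot, points in credits_per_slot().items():
    print(slot, format(points, ",.0f"), "points in this log")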
 
Hmmm... I added up point estimates from my log file to get those totals. However, my total points for the last three days have been 653K, 732K, and 699K, so the log file estimates seem to be low. Is there a better way to determine points for each card without shutting one down?

how do you have the 980 slot configured?
No config other than indices. The 980 has a GPU index of 1 and a CUDA index of 0; the 970 has the opposite. Not sure it matters, but I didn't want FAH to be confused about which card is which.
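
For reference, in the v7 client that mapping lives in config.xml as per-slot options; it ends up looking roughly like this (slot ids and which card gets which index are illustrative here, check your own file):

Code:
<!-- Illustrative config.xml fragment (v7 client), not copied from my file:
     one GPU slot per card, with the gpu-index / cuda-index pairing above. -->
<slot id='1' type='GPU'>
  <gpu-index v='1'/>
  <cuda-index v='0'/>   <!-- the 980 in this example -->
</slot>
<slot id='2' type='GPU'>
  <gpu-index v='0'/>
  <cuda-index v='1'/>   <!-- the 970 -->
</slot>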

Will absolutely check out that log-parsing program. I was thinking over the weekend about building a new folding stats dashboard, including just what you said. Thanks for saving me from going down that path. I have to believe there's a way to fix that file deletion issue. Couple of stupid questions:

1. I assume you're not writing to the log file and that you have it opened read-only?
2. Any way you can make a copy and put it in a working folder before you start reading data?
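
For question 2, I was picturing something as simple as this - a minimal sketch, assuming the default Windows data directory (both paths are guesses; point them wherever your log.txt and working folder actually live):

Code:
import os, shutil, time

# Assumed default log location on Windows; adjust if your client lives elsewhere.
LOG = os.path.expandvars(r"%APPDATA%\FAHClient\log.txt")
WORK_DIR = r"C:\folding-stats"  # hypothetical working folder

def snapshot_log():
    # Copy the live log into a working folder before parsing, so a client
    # restart wiping log.txt doesn't cost you the history.
    os.makedirs(WORK_DIR, exist_ok=True)
    dest = os.path.join(WORK_DIR, time.strftime("log-%Y%m%d-%H%M%S.txt"))
    shutil.copy2(LOG, dest)  # copy2 keeps the original timestamp
    return dest

print("Saved snapshot to", snapshot_log())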
 
Yeah - it's still a work in progress...:D

It opens the log file for a concurrent read, so the F@H app can still write to it. When the log file changes, an event fires and the program consumes it to parse the new data.
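
My program is event-driven; a simple polling version of the same idea in Python would look something like this (just a sketch of the approach, not my actual code):

Code:
import time

def follow(path, poll_seconds=2.0):
    # Tail the log without locking it: open read-only each pass (the client
    # keeps writing), remember the offset, and yield only the new text.
    offset = 0
    while True:
        with open(path, errors="ignore") as f:
            f.seek(0, 2)                 # find the current end of file
            if f.tell() < offset:        # log was wiped/rotated; start over
                offset = 0
            f.seek(offset)
            chunk = f.read()
            offset = f.tell()
        if chunk:
            yield chunk                  # hand the new lines to your parser
        time.sleep(poll_seconds)

# for chunk in follow("log.txt"):
#     parse(chunk)  # parse() is whatever consumes the new lines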

I can't find a setting for the F@H client that will make it keep the log file.

Check out my program, and add suggestions to that thread...thanks!


 