
Only 100k ppd out of 1070???


AnubisOne

Member
Joined
Dec 3, 2012
Location
Cincinnati, OH
I just recently added a 1070 to my folding rig and it's only getting 100k ppd?? It ran at an estimated 600k ppd for one work unit, then after that it has been stable at an estimated 100k ppd.

Is this coming from the PCIe slot running in x4 mode? I have a 970 running over 300k ppd right now, and GPU-Z is telling me its PCIe slot is running in x1 mode.
 
Which version of the driver are you using? If you just downloaded it from Nvidia's page, it needs to be the 376.48 hotfix, which isn't in the standard list. Versions below that, down to about 373, have a problem with folding. You should be getting at least 500K with that card, up to 750K. Are the settings for the slot "client-type advanced" and "max-packet-size big"? Also, do you have a passkey entered in the client?
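For reference, those two slot options can also be set directly in FAHClient's config.xml. A minimal sketch, assuming a single GPU slot; the slot id here is an example and depends on your setup:

```xml
<config>
  <!-- GPU slot for the 1070; the id='1' is an example, match it to yours -->
  <slot id='1' type='GPU'>
    <client-type v='advanced'/>
    <max-packet-size v='big'/>
  </slot>
</config>
```

The same values can be entered in FAHControl as extra slot options; either way the client needs a restart (or the slot paused and resumed) for them to take effect.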
 
I am running the 376.48 drivers and I do have a passkey entered. I just added the slot settings, hoping to see a change in the next work units.
 
Good luck! Those are the most common issues, but it might be something else. And if it is running at x4, you might get less ppd than at x8 or above, but nowhere near as low as 100K.
 
If you entered the passkey for the first time on an account, it takes 10 successfully completed WUs to get the bonus.
 
So I just started the next work unit and it rose to 210k ppd, but no more. I'm thinking of moving it from the 3rd PCIe slot to the 1st slot. I feel like that shouldn't make any difference though...

Is it my hardware other than the GPUs? This was put together when I didn't have much funds to put toward it, so I used the cheapest parts I could find. It was first put together to run quad 750 Tis. Can the motherboard handle running three GPUs like these?
 
Are you folding on the CPU with this machine? If so, try removing the CPU slot and see how it does.
 
Some of my ASUS motherboards default to x1 speed on the 2nd x16 slot and have to be set in the BIOS to use x4 speed. Perhaps ASRock has a similar issue.
 
Are you folding on the CPU with this machine? If so, try removing the CPU slot and see how it does.
No, I removed the CPU slot, but I've noticed the CPU usage is still between 70-80%. All that is running is TeamViewer (which I use for remote access), Hardware Monitor, and F@H.

Some of my ASUS motherboards default to x1 speed on the 2nd x16 slot and have to be set in the BIOS to use x4 speed. Perhaps ASRock has a similar issue.
I'll have to check the BIOS next time I go to the house.

Edit: I think FAH is reading my cards wrong. According to it, one of my 970s is getting almost 500k ppd, but when I pause that 970, the utilization on my 1070 goes to 0%. So it looks like the 1070 is getting the ppd it should, but is being labeled wrong in FAH. Has anyone ever had this issue?
 
The FAH client has GPU index detection issues sometimes, particularly when GPUs are moved around. You can try reinstalling the client, or manually adjust the indexes for each slot using FAHControl. The default is -1 for all the indexes, which lets the client auto-detect. Try setting the OpenCL and CUDA indexes for the first slot to 1 and the second slot to 0, then manually start one slot and see if the WU and load are on the correct GPU. You need to pause the slots prior to making the changes, then resume folding. You can right-click a specific slot in FAHControl to manually pause/start it.
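As a sketch of what that swap looks like in config.xml (the slot ids and index values here are assumptions for a two-GPU box; swap them to match your machine):

```xml
<config>
  <!-- Pin the slot-to-GPU mapping instead of auto-detect (-1).
       For each slot, the OpenCL and CUDA indexes should point
       at the same physical card. -->
  <slot id='0' type='GPU'>
    <opencl-index v='1'/>
    <cuda-index v='1'/>
  </slot>
  <slot id='1' type='GPU'>
    <opencl-index v='0'/>
    <cuda-index v='0'/>
  </slot>
</config>
```

In FAHControl the same pairs can be entered as extra slot options; as noted above, pause the slots before changing them, then resume.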
 
I'm running into a problem with dual 1070s in the same system with no slot between them (I'm using a Lenovo D30 that only has two PCIe slots). The temperature of the upper card is really high (>90 C), and it's causing the card to downclock a bit so that it doesn't exceed the thermal limit. Has anyone else observed this? I used to run dual R9 280Xs in a similar configuration, and while they got really hot, there wasn't any thermal throttling. Is there any way, short of going to water cooling, to keep the temperature down?
 
I have a similar problem: the same 3 GPUs have been in the same slots, but sometimes the order of them changes. Slot 0 is not always the top card, but it seems to like placing slot 1 as my 3rd card, or the one in the lowest slot. I've deleted everything F@H and reinstalled with the GPUs unplugged so they weren't detected, and tried to assign each card to the proper slot; it worked once, until a restart. I figured it was only a problem if you used 3 identical cards, and an AMD issue, but it's nice to see Nvidia has the same issues. Makes me smile that Nvidia isn't perfect :escape:
 
Apologies for the continued thread-jacking, but I moved one of the overheating GTX 1070s from my dual GTX 1070 machine to another comp with the GTX 1080, since it has enough PCIe slots that the two cards aren't right next to each other.

Unfortunately this new machine is not liking the second card. The system runs OK for a bit, but then it just hangs with no BSOD or anything, and needs a hard restart. I disabled the second card's folding so my PPD isn't totally tanked, but I'm trying to figure out the problem. Has anyone else had this problem?

The system I'm running is a
Mobo: MSI SLI Plus X99
CPU: Xeon 2679v4 20 core 3.0 Ghz (200w TDP, a server OEM chip), HT disabled (the work programs I use don't benefit)
RAM: 8x8 GB DDR4 2400 MHz
GPU: MSI Armor GTX 1080, MSI Armor GTX 1070
PSU: 750w 80 Plus Gold (here's the review of the 850w version: http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story5&reid=206)
OS: Windows 7 SP1, fully updated
Drivers: NVIDIA 376.48 (Hotfix for folding)


The CPU is running SMP folding on 18 cores, with 2 cores reserved for feeding the GPUs.

I've narrowed it down to one of two problems.

1. The Xeon 2679v4 does not have proper support for a 2nd GPU in its PCIe controller. This was a chip custom-designed for a datacenter needing high clocks and high thread counts. A beast of a chip, but maybe not with full PCIe support if it wasn't needed by the specific customer. It's pretty niche, so there's not much documentation. There are some reports of it being used for SLI, but as we all know, folding can push things a lot harder.

2. Power supply too weak (200w CPU + 150w + 160w = ~510w, plus ~50w for RAM and such). It should still be able to handle this load, but maybe it's the sustained load, or the PSU has degraded/is faulty.
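Point 2 is easy to sanity-check with rough numbers. A quick sketch using the estimates above (all wattages are the ballpark figures from this post, not measurements):

```shell
# Rough PSU headroom check; every wattage here is an estimate.
cpu=200       # Xeon 2679v4 TDP
gpu1=150      # first GPU, estimated folding draw
gpu2=160      # second GPU, estimated folding draw
misc=50       # RAM, drives, fans
total=$((cpu + gpu1 + gpu2 + misc))
psu=750
echo "load ${total}W of ${psu}W (~$((100 * total / psu))% of capacity)"
# prints: load 560W of 750W (~74% of capacity)
```

~74% sustained load should be comfortably within spec for a healthy 750w Gold unit, which is why a degraded or faulty PSU is the more plausible version of this theory.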

Any insight?

Edit: Oh, there's also some slight BCLK overclocking (100 MHz -> 103.6 MHz). That's all you can get with any Xeon. Maybe that's enough to mess up the PCIe controller? I'll try reverting it tomorrow and testing without the OC. No GPU overclocking.

I've also established the crashing isn't due to overheating. All temps are reasonable (<70 deg C).
 
I got it in 2011, I believe, and have used it for several different systems. I'll test removing the overclock first and try some other PCIe slots. Then I'll look at the PSU.
 
^ +1 to this

It looks like I'm getting 450k ppd with my 1070, and 150k ppd each for my 970s. Is it just me or do all those numbers seem a little low?

All my hosts are running the Linux client, so I do not have any project benchmark data for the Windows client. For Linux clients with an overclocked GPU, my middle-of-range expectation is ~720,000 ppd for a GTX 1070 and ~330,000 ppd for a GTX 970. There is a good bit of variation in ppd between projects, so you need to look at the ppd for the specific project. I have been using HFM to monitor my clients for a long time and have extensive benchmark data and WU history for my GPUs on many different projects. If you tell me a specific project, I can let you know whether the ppd typically falls in the middle, low or high range.

I normally start each GPU slot individually and monitor the GPU frequency/temperature/load with a utility to ensure the project is running OK on the intended GPU.

The first thing to look at is the log files, to see if there are any issues. Search the log file for "bad". Failed WUs and "restarting from checkpoint" messages are usually indications of some system stability issue. It could be a GPU, CPU, memory or hard drive problem.
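For example, a one-liner covers that log search. The log lines below are made-up samples just to make this self-contained; point grep at your real FAHClient log instead (e.g. ~/fah/log.txt on Linux, or the log.txt in the FAHClient data directory on Windows):

```shell
# Create a small sample log to demonstrate the search
# (substitute the path to your actual FAHClient log).
cat > sample-log.txt <<'EOF'
12:00:01:WU01:FS00:0x21:Completed 2500000 out of 2500000 steps (100%)
12:05:42:WU02:FS01:0x21:Bad State detected
12:05:43:WU02:FS01:0x21:Restarting from checkpoint
EOF

# Case-insensitive count of the usual trouble markers
grep -icE 'bad|restarting from checkpoint' sample-log.txt
# prints 2
```

A count of zero on a real log means no obvious stability complaints; anything above that is worth reading in context with `grep -iE` (without -c) to see the surrounding WU and slot.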

Things which can affect ppd:
Thermal: 3x GPUs with tight spacing makes heat removal difficult without riser(s) or water.
Driver (some are faster than others).
Project (ppd variation).
CPU (need one thread/core per GPU).
System RAM (some GPU projects need up to 4GB of system RAM at initialization, so best to have 4GB of system RAM per GPU).
Screen activity (interferes with folding).
Two projects running at the same time on the same GPU (slot index issue).
 
I'd start with removing the overclock, but I think it's PSU-related. How old is that PSU?

Dropping the BCLK from 103.6 back down to 100 did the trick. Unsurprisingly, even pushing the BCLK a little is enough to mess up the PCIe controller. The system is now stable with both GPUs humming along; 1.5 mil PPD on this one system alone. I'm looking for another GPU for my D30 that runs a bit cooler or has a better cooling config, so that I can use it in the back-to-back PCIe slot configuration. Suggestions welcome. I'm thinking a GTX 1060.
 
If you go 1060, get the 6GB model; it has more cores and earns more PPD. I think they are good for 400k PPD.

If you are looking for coolers that work best when the cards have no space between them, you have 2 choices: a blower-style cooler, or a normal cooler with some fans mounted to the side of the case forcing air between the cards.
 