
GPU 2 Folding issue (Priority Issue?)


Norcalsteve (Member, joined Sep 19, 2009, Crestview, FL)
I have an issue with my #2 GPU (GTX 285). On a 353-point WU, I normally get 8,500-9,500 PPD, but now I am getting 4,000 PPD on it. I still get 8k-9k on my 1st GPU.

Recently I finished a -smp 7 VM bigadv WU with the -oneunit flag so I could change back to -smp 8, because, thanks to ChasR, I had my priorities straight. Now I have been gone all day... and that GPU has only folded 4 WUs, while my other one is up 9. Is there an environment variable I can set for my second GPU to give it a priority boost?

My setup now consists of:
VMware with -smp 8 (both VM priority grab/ungrab "Idle" commands entered in the VM client)
GPU1 is low priority ("slightly elevated")
GPU2 is low priority ("slightly elevated")
and I have the NV_FAH_CPU_AFFINITY environment variable set to a value of 15 in Windows.

Any thoughts? I know it's not the WUs, because history shows I have folded them before, and I should be getting 33 sec/frame; now it's up to ~2 min/frame on the same project. :-/
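For reference, the PPD numbers above follow directly from the frame times. A minimal sketch of the arithmetic (the 100 frames per WU is an assumption; GPU2 work units are typically 100 frames, but the thread does not state it):

```python
def ppd(points_per_wu, sec_per_frame, frames_per_wu=100):
    """Estimate points per day from the per-frame time.

    Assumes a fixed frame count per WU (100 is typical for GPU2
    work units, but it varies by project).
    """
    seconds_per_wu = frames_per_wu * sec_per_frame
    wus_per_day = 86400 / seconds_per_wu
    return points_per_wu * wus_per_day

# A 353-point WU at 33 sec/frame lands right in the 8.5k-9.5k range
print(round(ppd(353, 33)))   # -> 9242
# At ~2 min/frame, PPD collapses
print(round(ppd(353, 120)))  # -> 2542
```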
 

Send me those 285s; that will fix your issue.

ChasR should chime in.

I started my winsmp2 client and I've only made 2k points since last night, completing 5 work units (between the GPU and the CPU). I don't know if I like this winsmp2; I think I was completing more units and getting more points with my VMware Player.

Maybe I'm crazy, I don't know.

Steve, I hope you get it figured out.
 
In the config setup for the GPUs, did you set Advanced methods to yes and CPU affinity lock disabled to yes?

Then set the affinity for the GPUs to use all cores in Task Manager.
 
I've had this happen on a dual-GPU system. It seems the later drivers try to save power by dropping the clocks on the 2nd card. To fix this I forced 3D performance clocks with RivaTuner.
Start RT / Power User / expand RivaTuner\NVIDIA\Overclocking / right-click on EnablePerfLevelForcing / click on the light bulb and select OK. RT will close. Start it again. Main / Driver settings / Customize / System settings / select "performance 3d" in Force Constant Performance Level for each GPU. Reboot.
Another, easier solution is in the NVIDIA Control Panel / Manage 3D settings / Global settings: for some cards there is a selection to change from adaptive performance to performance. If you don't have these NVIDIA settings, use RT.
 

Wow, the affinities only used 1,2,3,4; I had to "Select all" to get 0,1,2,3,4,5,6,7... Also, I have to do it every time I open the GPU clients. Is there a setting to make the affinity default to all cores/threads?


OK, well I changed it in NVIDIA's global settings and I am still at high frame times, but when the workers complete I will download RivaTuner (since I use EVGA) and try to force it.

Thanks, guys.
 

I used Set Affinity II from Edgemeal Software to set the affinity of any program, including FAH, and it stays set when a WU changes, etc.
You can set it up for any running program.

Download link: http://edgemeal.110mb.com/SetAffinity/index.htm
 
I've been folding on my 2 GTX 285s for about a month and I have never gotten more than 12,000 points from both of them together. I just realized I should be getting nearly 11,000 points from each card! Not sure what's going on here, but it seems that when I restart the GPU clients the PPD will show about 9,000 each, but after a few minutes one GPU will drop to 7,000 and the other to 4,000.
 
Tell me how you are currently configured.

I followed Thid's guide http://www.overclockers.com/forums/showpost.php?p=6061758&postcount=4

I've got a monitor for each GPU, I've extended the desktop, and I've tried everything that Norcalsteve has done, with the same results. Something kind of strange, though: since I made that post, HFM has been reporting between 8,200 and 8,700 PPD for each card. Not sure what I did, but I have never seen both cards producing that much. Usually one GPU will hit about 7,000 and the other only 4,000. I'll keep my eye on them and see when the PPD drops.

Here's my config.......
GPU1
Code:
[settings]
username=dylskee
team=32
passkey=
asknet=no
machineid=3
bigpackets=big
extra_parms=
local=460

[http]
active=no
host=localhost
port=8080
usereg=no
proxy_name=
proxy_passwd=

[core]
priority=96
cpuusage=100
disableassembly=no
nocpulock=1
checkpoint=15

[power]
battery=no

GPU2
Code:
[settings]
username=dylskee
team=32
passkey=
asknet=no
machineid=2
bigpackets=big
extra_parms=-gpu 1
local=414

[http]
active=no
host=localhost
port=8080
usereg=no
proxy_name=
proxy_passwd=

[core]
priority=96
cpuusage=100
disableassembly=no
nocpulock=1
checkpoint=15

[power]
battery=no
 

If I remember right, you recently moved your GPUs' priority from 0 to 96 (idle to low).

Is it possible that the change in PPD took place around the same time?

If you really want to test, you could switch back for a bit, though I wouldn't suggest it. Running the GPUs on idle priority before sounds like it might be responsible for your prior performance.
 
Are you running in SLI?

Add -gpu 0 to extra_parms in the first config.

Do you have two monitors attached, one to each card, or a dummy plug on the 2nd card? If not, you have to run in SLI and won't make as many points.

You have the priority correct on the GPUs.

I presume you have the SMP client set at idle priority?
 

I changed the priority yesterday and just noticed the change in points tonight. But each GPU has already dropped to about 6700 PPD as of right now.


No, I'm not running in SLI, and yes, I have a monitor for each GPU. That was my problem to begin with: somehow I had SLI enabled, so I disabled it and restarted the clients, and that seemed to help for a couple of hours but then died off again. And yes, I have the SMP client set to idle priority. I will go ahead and add that -gpu 0 to the extra params, but I'm really confused as to why my PPD falls off so badly. By morning one of the cards will be down to 4,000 PPD.
 
Well, I have SLI disabled too, but no dummy plug. Windows 7 still lets me extend my desktop to a second monitor on that card, and the -gpu 1 (card #2) client sees that it's up too. Even though SLI is disabled and "unlinked" via the NV panel, should I physically unlink the cards as well? Anyone got a link for a "build-it-yourself dummy plug from old monitor cables"? lol, I will order one now.

Also, I guess the "NV_FAH_CPU_AFFINITY value 15" environment variable did not work. I will use an affinity program I have been reading about online and have it auto-set the affinity for me.
 

PM me your address, and how many plugs you need, and I'll mail them to you.
 

Found a MAJOR issue with that variable... a value of "15" on a Core i7 is cores 1,2,3,4 (that value is meant for dual cores up through Hyper-Threading), so it turns off affinity for cores 0, 5, 6, and 7. Anybody know the affinity value for a Core i7 to use all 8 cores?
 
AHHH HAAAA!!!!
OK, FOUND IT!!!

For all you i7 GPU folders:
Environment variable:
NV_FAH_CPU_AFFINITY, value 255

The 255 value on a Core i7 locks in all cores for use!

WHOOT!

Value 15 is for you quad-core guys.
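On Windows the variable can be set persistently from a command prompt instead of clicking through the Environment Variables dialog. A sketch using the built-in setx command (it writes to the user environment; already-running clients won't see it until restarted):

```shell
:: Persist NV_FAH_CPU_AFFINITY = 255 (all 8 logical cores on an i7)
:: for the current user; restart the GPU clients to pick it up.
setx NV_FAH_CPU_AFFINITY 255
```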
 

15 SHOULD be cores 0-3. Try NV_FAH_CPU_AFFINITY 255.

edit: and that's what I get for getting distracted while reading a thread. I guess I'll expand to make my post useful.

Go to http://faculty.plattsburgh.edu/albert.cordes/bindec.html and put in the cores that you want to use.

The cores are ordered 7 6 5 4 3 2 1 0.

If you only want to use cores 3-0, then put in 00001111.
To only use cores 7-4, use 11110000.
To use all cores, put in 11111111.

The decimal number you get back is the number to use with NV_FAH_CPU_AFFINITY.
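The binary-to-decimal conversion above can also be computed directly, since each core index just sets one bit of the mask. A minimal sketch (the function name is only for illustration):

```python
def affinity_value(cores):
    """Build the decimal NV_FAH_CPU_AFFINITY value from core indices.

    Each core index sets one bit: bit 0 = core 0, bit 7 = core 7,
    matching the 7 6 5 4 3 2 1 0 ordering described above.
    """
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

print(affinity_value(range(4)))      # cores 0-3  -> 15
print(affinity_value([4, 5, 6, 7]))  # cores 7-4  -> 240
print(affinity_value(range(8)))      # all 8 logical cores -> 255
```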
 

The i7 is a quad core... So are you saying to use the value 255 if you're folding an i7 and GPUs, and the value 15 if you're just folding your CPU? I noticed in Task Manager that cores 0-3 were ticked for the GPUs; they should all be ticked, right?
 