
GPU Performance...


Norcalsteve

Member
Joined
Sep 19, 2009
Location
Crestview, FL
Emphasis: High Quality or High Performance for folding? What recommended settings do you dual-GPU client people use, specifically around the GTX 285 or thereabouts?

EDIT: Missed a whole darn paragraph!

Anywho... my second GPU is still being a pain. Sometimes it's right up there with my first GPU, but ANYTIME I Ctrl+C to do something and start it back up, my second GPU loses half its speed: from 8k PPD down to 4k PPD (for you TPF'ers, roughly 35 sec/frame to 1:10/frame). What is going on? The only way I've found to fix it, about 80% of the time, is to shut down ALL clients and restart the computer. I restart once every 24 hours anyway... I only asked the question above because I was curious about global settings for the cards...
 
It doesn't sound like a settings issue really, but are both clients set to the same priority (idle/low)? Are you running any CPU clients? If so, what flags are you using when starting the client, and what is its priority?

Are you running updated drivers? If not, try something from the 195 series. Also, you may want to finish off these WUs by adding the -oneunit flag; then, once they are done and have been sent back, delete both GPU clients, download a fresh copy, and set them up again. Some of the cores have seen updates without the version numbers changing, and I know one update involved how the cores act in multi-GPU systems.
 
Yeah, Zerix01 is right; I was having the exact same issue with my two GTXs on the newer drivers. I'm back to 191.07 for Windows 7, and both cards are doing between 7,900-9,000 PPD with a CPU SMP -bigadv client running as well. As soon as I update the drivers, one of the GPUs drops to 4,000 PPD.
 
OK, that must be it then; I'm running the typical new NV driver, 195... I will try backing it off one. Isn't there a 196 beta driver out too? Has anyone tried it out and had any success folding with it?
 
Wow, I can't read!
I even replied to that thread... I thought he meant 195 when he wrote it and misread it.

/facepalm

OK, back to driver 191 it is then.
 
Some of the cores have seen updates without the version numbers changing, and I know one update involved how the cores act in multi-GPU systems.

How long ago did this happen? Do they ever list the changes when it does, and where? If it matters, I have been folding since Jan 17th... I started GPU folding on Jan 20th, thereabouts...

Also, I'm running low priority on the GPUs and idle on the VM/Linux/bigadv client...
 
I don't really remember. I normally see updates here when that happens. I think if you started in January then you should be up to date.
 
1.31 allows cards with different shader counts to be used without loss of performance in one or more of the cards.
 
Yeah, well, I think it's OK for now...
I just got a dummy plug in from torin3 (thanks, buddy!), so I'm able to use the NV control panel for multiple monitors now instead of the Windows client, which NV would break if I opened the control panel...

So, as it is now (and when it semi-worked before), I've noticed that when only one card is running, I get 9k-9.5k PPD and nice TPF on the 353s (0:31-0:33)... when both cards are running, they trade off speed: the first one fired up with the client does 8k PPD and the second 7-7.5k PPD, then as they finish WUs and start new ones, they swap performance... basically switching between 7-7.5k and 8k PPD whenever one finishes a WU while the other is still completing one (sounds confusing, I know).

Basically, I think my system is loaded down, with bigadv running in the VM and two GTX 285s with overclocked shaders. 7-8k PPD per card (on your average WU) is normal for my setup; I was expecting 9k per card alongside the VM/bigadv.

As for my overall software setup: I'm running the 196.31 betas now (they fixed the issue that broke RivaTuner), and I'm also using RivaTuner with ChasR's recommended tweaks from the last time I was having issues, instead of EVGA Precision. Now everything seems normal... so far, both cards are getting 7-8k PPD (while chewing through bigadvs too), with a total of 30 WUs completed since last night, and no performance drop on the second card... here's to hoping it sticks!!

Just wish I could speed the cards up!

Does that seem normal to you guys for my setup?
 
To ChasR:

7. The proper start string is -bigadv -smp X -verbosity 9 (order doesn't matter), either in the extra parameters of the advanced configuration or in the start string. X = number of FAH processes to run. You can set this to (number of CPU cores - 1) to allow CPU cycles for multiple GPU clients.

This being from your guide... a while back you told me -smp 8 is sufficient for my setup and should have no impact on my two GTX 285s, and that the -smp 7 flag is more for ATI cards (something along those lines)... so the big question is: should I dedicate a core to my GPUs now that there's a possible performance hit, or stay at -smp 8? What would I net gain or lose with -smp 7 vs. -smp 8? The way I see it, -smp 7 lets both GPUs fold faster and possibly makes up for the PPD lost on bigadv from giving up a core and the TPF increase, whereas -smp 8 gives a better TPF but slower cards...

ahhh, the choices!
 
I'm not folding with GPUs on the i7 I'm working on. On C2Qs with two high-end GPUs, with the SMP or SMP2 client at idle priority, the GPUs at low priority, and GPU affinity spread across all cores, GPU production is unaffected by the SMP(2) client. Turn the SMP(2) client off and there is zero change in GPU production. On the other hand, SMP production is drastically affected by the GPUs, especially when they're running 353-point WUs (5:07 TPF vs. 6:18 TPF on p6025 on a Q6600 @ 3.4).

I don't know why an i7 would be any different; a low-priority process should never yield CPU cycles to an idle process. In your case it is as if all clients are running at the same priority, which, IIRC, you've made certain isn't the case. Just for jollies, please check one more time by searching for client.cfg, to be certain you're not checking GPU priority in a file that's not actually in use (reason: if you install the systray client and then the console client, you'll have client.cfg in two locations). Also note that the VM will run at normal priority if open and grabbed, and that will slow GPU production. You can keep it minimized at all times to be sure it's not grabbed, or you can try adding priority.grabbed="idle". The latter is reported by someone else to work, but it shouldn't, and I've not bothered to test it. If you use Task Manager, the act of checking the VM's priority ungrabs it, so it will report low priority (really idle, but low is as low as TM goes) no matter what.

So, assuming your check shows priority is indeed correct, we can either deal with the way things should be, or the way they are. You can create sticky affinity for the VM by adding these lines to the .vmx file:
processorX.use = "TRUE"
Do this for processors 0 through 6, then add the line:
processor7.use = "FALSE"
Then remove the affinity lock on the GPUs if set, and set the nVidia environment variable NV_FAH_CPU_AFFINITY to 128 to lock the GPUs to core 7.
That'll do it.
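For anyone wondering where the 128 comes from: NV_FAH_CPU_AFFINITY is a per-core bitmask, so 128 is just core 7 expressed in binary. A quick sketch (Python, helper name hypothetical, not part of the original posts):

```python
# Hypothetical helper illustrating CPU affinity bitmasks:
# bit N set means the process may run on logical core N.
def affinity_mask(cores):
    """Build an affinity bitmask from an iterable of core indices."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

print(affinity_mask([7]))       # core 7 only -> 128 (binary 1000 0000)
print(affinity_mask(range(8)))  # all 8 cores -> 255 (binary 1111 1111)
```

So setting the variable to 255 would spread the GPU clients across all eight cores, while 128 pins them to core 7 only.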
 
Wow,
I had no clue that having the VM "open" (i.e. displaying Ubuntu on the desktop) raises the priority... I opened Task Manager, checked, and saw that the priority was at normal; now minimized, it's low, and Task Manager showed that. I will play with it a little more. Also... should I have my GPUs' "Disable CPU affinity lock" set to no or yes? I have always had it disabled [yes], leaving them free to hop from core to core as needed... but I also have the NV_FAH environment variable set to all cores... I'm testing with [no] now...

I would rather leave -smp 8, since my GPUs are not TOO bad on production, just a little under the norm for your average WU.
 
You should get more PPD on the SMP client with the GPU affinity lock off and the GPUs running across all cores.
 
OK, so NOW I AM ANGRY! At my comp... not my kid... but my 20-month-old son got up on my computer desk and hit the reset key on my rig! (I know, good parenting, right?) What makes me mad is that a few weeks back, when I started messing with bigadv and noticed I had stability issues, my client would lock up, I would hard-restart, log back into the VM, and BAM, it picked up right where it was... and this happened multiple times. NOW everything is running SMOOTH, I have one hard reset, and my WU got corrupted 52% in! That's ONE whole day's work!!!

Anywho, since I had that screwup and am troubleshooting this... I tried the settings out and locked the GPUs' affinity to the 8th core... it seems to be running smoothly. As for processorX.use = "TRUE", I tried that: still ran the -smp 8 flag, but it took 43 minutes for the first frame... even with -smp 8, the VM only let it use 7 cores, and with the GPUs locked to the 8th core, that core was only seeing 15-20% load... so I made the last processorX.use = "TRUE" as well, and the GPUs are still pumping along well.

All that, and my GPUs are back at about 9,300 PPD, and the VM is running 32-33 min/frame.

Much better... EDIT: GPU1 is dropping slowly; it started at 9,300... now it's down to 9,000...

As for your last post, ChasR: what do you mean when you say "You should get more ppd on the SMP client with the gpu affinity lock off and gpus running across all cores"? That's how I have been running the GPUs... and you are talking about SMP with bigadv in the VM, right? Pardon my ignorance.
 
I thought you'd figured out your priority issue was leaving the VM open and grabbed, and were going to run -smp 8. Now you're halfway in between. If you're going to lock the GPUs to one core, you need to run the SMP client with the -smp 7 flag on the other 7 cores. If you're going to run -smp 8, you need the GPUs spread across all cores. The idea is to balance the SMP processes so that all threads don't wait on one or two that are slowed by a GPU process running at a higher priority.
 
Wow,...
I opened Task Manager, checked, and saw that the priority was at normal; now minimized, it's low, and Task Manager showed that.

Steve, I think you need to be sure you have the .vmx file edited to include priority.ungrabbed="idle". You can't open Task Manager to check on the VM's priority without the VM being ungrabbed. It should show as low in TM if set correctly in the .vmx, even when open, as long as mouse and keyboard context isn't in the VM (and it can't be if it's in TM). I believe VMware Player lacks the priority setting in its GUI because it automatically runs at idle when minimized.
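For anyone following this thread later, here is how the .vmx settings discussed above fit together: a sketch assuming an 8-core box with the GPU clients pinned to core 7 via NV_FAH_CPU_AFFINITY=128 (adjust the processor count to your CPU):

```
priority.grabbed = "idle"
priority.ungrabbed = "idle"
processor0.use = "TRUE"
processor1.use = "TRUE"
processor2.use = "TRUE"
processor3.use = "TRUE"
processor4.use = "TRUE"
processor5.use = "TRUE"
processor6.use = "TRUE"
processor7.use = "FALSE"
```

With the VM held to cores 0-6 at idle priority, the GPU clients get core 7 to themselves.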
 
The .vmx file has been edited (since initial setup) with priority.ungrabbed="idle" and priority.grabbed="idle", and it shows low in TM.

Well, I think I'm all done here... either way, I'm pumping nice PPD and already almost at 1 million, and I've only been at this since Jan 17th. Thank you for your input, ChasR; I think I'm done with the tweaks on this rig for now.



 
I know we've been over this before, and I think you've even posted up the .vmx. I keep bringing it up because several teammates continue to have obvious priority conflicts when it shouldn't be possible. With just priority.ungrabbed="idle" in the .vmx, it should never be possible to see the VM's priority at normal in Task Manager, even when it's open. At my peak VM usage I probably had 40 or 50 VMs running and never had a priority conflict. It's just my nature to try to find out why it doesn't work. In any event, minimizing the VM solves the problem.
 
I think I was overworking my computer with its GPUs. I went on vacation for the last two days and ran the VM bigadv client with -smp 7, and my GPUs NEVER dropped below 8,900 PPD... I know that's generic in terms of what WUs were being run... Also, my bigadv client's TPF only went up by 30 seconds (sitting at a 33:15 TPF average now).

I set up my VM to lock onto the first 7 cores and made the 8th core unavailable to it, then made the GPUs use only the 8th core, so there is absolutely NO conflict on CPUs or priorities, if there ever was one. I know that's not how you would do it, but I think it just depends on my system and how it's set up... I'm a newb, so it could be anything; I just don't see how, since I have set ALL the priorities in EVERY spot I know of, including where you told me.

Overall, I now have a constant 44-45k PPD, with the GPUs turning and burning like I want and an ever-so-slight loss on the VM (hardly a change)... instead of 38-41k PPD with my GPUs dropping out or slowing down on every new WU.

Thanks again ChasR
 