
Server did not assign work unit


Farwalker2u
Member, joined Mar 1, 2003, Georgia
Hey folks.
This morning I noticed that one of my NVIDIA GTX 780s had a status of "WAIT: X mins XX secs".
The wait times keep getting longer and longer.
So I tried deleting the work folder. No joy.
So I tried re-installing. Now both 780s have a "WAIT" status. Went from bad to worse.

Running client 7.3.4, Windows 7 Prof 64-bit. Using "client-type" "advanced".
Code:
*********************** Log Started 2014-05-21T16:37:56Z ***********************
16:40:14:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
16:40:14:WU01:FS00:News: Welcome to Folding@Home
16:40:14:WU01:FS00:Assigned to work server 171.64.65.93
16:40:14:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:GK110 [GeForce GTX 780] from 171.64.65.93
16:40:14:WU01:FS00:Connecting to 171.64.65.93:8080
16:40:15:ERROR:WU01:FS00:Exception: Server did not assign work unit
16:40:15:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
16:40:15:WU01:FS00:News: Welcome to Folding@Home
16:40:15:WU01:FS00:Assigned to work server 171.64.65.93
16:40:15:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:GK110 [GeForce GTX 780] from 171.64.65.93
16:40:15:WU01:FS00:Connecting to 171.64.65.93:8080
16:40:16:ERROR:WU01:FS00:Exception: Server did not assign work unit
16:41:15:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
16:41:15:WU01:FS00:News: Welcome to Folding@Home
16:41:15:WU01:FS00:Assigned to work server 171.64.65.93
16:41:15:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:GK110 [GeForce GTX 780] from 171.64.65.93
16:41:15:WU01:FS00:Connecting to 171.64.65.93:8080
16:41:16:ERROR:WU01:FS00:Exception: Server did not assign work unit
16:42:52:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
16:42:53:WU01:FS00:News: Welcome to Folding@Home
16:42:53:WU01:FS00:Assigned to work server 171.64.65.93
16:42:53:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:GK110 [GeForce GTX 780] from 171.64.65.93
16:42:53:WU01:FS00:Connecting to 171.64.65.93:8080
16:42:53:ERROR:WU01:FS00:Exception: Server did not assign work unit
16:45:29:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
16:45:30:WU01:FS00:News: Welcome to Folding@Home
16:45:30:WU01:FS00:Assigned to work server 171.64.65.93
16:45:30:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:GK110 [GeForce GTX 780] from 171.64.65.93
16:45:30:WU01:FS00:Connecting to 171.64.65.93:8080
16:45:31:ERROR:WU01:FS00:Exception: Server did not assign work unit

Suggestions to get these folding again?
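For reference, the "client-type" option mentioned above is set in FAHClient's config.xml (or via FAHControl's Expert tab). A minimal sketch of where it lives; the slot layout here is illustrative, not the poster's actual file:

```xml
<config>
  <!-- Ask the assignment server for advanced (late-beta) work units.
       Valid values are normal, advanced, and beta. -->
  <client-type v='advanced'/>

  <!-- Folding slots: one per GPU in this example -->
  <slot id='0' type='GPU'/>
  <slot id='1' type='GPU'/>
</config>
```

Removing the `<client-type .../>` line (or setting it to `normal`) reverts the client to the default work-unit pool.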
 
Did you set the CPU core count to -6? GPUs need some CPU overhead to work properly (my understanding).

Only other thought is to uninstall/reinstall the client.
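If anyone wants to try that suggestion, the CPU count is also a config.xml option. A sketch, assuming the semantics described above (a negative value reserving threads for the GPUs; -1 normally means auto-detect, so treat the -6 value as the poster's suggestion, not documented behavior):

```xml
<config>
  <!-- CPU thread count for CPU folding; negative per the suggestion
       above, to leave cores free to feed the GPU slots -->
  <cpus v='-6'/>

  <slot id='0' type='GPU'/>
  <slot id='1' type='GPU'/>
</config>
```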
 
I had the same problem. I think they are out of advanced WUs. I switched to normal and was good to go.
 
Janus67,
Removed the "advanced" tag stuff and the two GTX780s seem to be folding again.
Thank you.

One is folding a 9406 WU, the other folding a 9408 WU. I will have to wait and see how these new WUs produce PPD.
 

:thup:

I'm getting approximately the same estimated PPD from the machines since I dropped it.
 
About time you got back in business. I did not want an easy win; I came close to passing you. :)
 
I was not teasing you on purpose.
You still might pass me if I keep getting these 7620 and 7621 WUs. Instead of getting around 300,000 PPD on my two GTX 780s, I'm getting only 48,000 PPD.
 

Not if I keep getting core 17 stuff (2-3 days to complete) :mad::rofl:
 
Found this thread:
https://foldingforum.org/viewtopic.php?f=18&t=26361

It says that whoever was in charge of the assignment server messed up!

by msultan » Wed May 21, 2014 5:49 pm
Hi,
Thanks for all the feedback, and sorry about the mess-up. It was completely my fault. P9101 was ending and I was starting a new project, P9102, but forgot to inform the assignment server. The project has been released and should be assigning work now. Let me know if there are any further problems.

beta project link:
viewtopic.php?f=66&t=26363

Happy folding,
Muneeb

I'm going back to the "client-type" "advanced" setting on my two GTX 780s and will hopefully crank out some serious PPD! I am frustrated with getting only about 25% of what I used to get. :mad:

EDIT 12:30 P.M. Eastern 5/26/2014
Eureka! Got a 9102 WU! :thup: :clap:
 
Does this advanced stuff apply to older-gen AMD cards?
If so, sweet :D

I wonder if they are better than the 13000 WUs I get.
 

I am currently folding two 13001 WUs on my GTX 780s. I seem to be getting around 140K-185K PPD on each GPU. This is much better than the 76XX WUs, which yielded between 24K-48K PPD each.
 

Wait, the "client-type" "advanced" setting gets you 13000-series WUs?
...

I'm already getting those...
Unless 13001 WUs are advanced for Nvidia; then there might be another "advanced" WU for AMD. :shrug:
 
I think no flag still gets a mix of core 15 and core 17 work, while advanced is pretty much all core 17, at least if you have Kepler cards. Several weeks ago Dr. Pande mentioned trying to fix the assignment logic to give more core 17 work to Kepler cards since they're so much better at it than Fermi.

There are no vendor-specific projects with core 17.
 