
P10xxx series GPU WUs


Norcalsteve

Member
Joined
Sep 19, 2009
Location
Crestview, FL
Gawd, these WUs are the GPU equivalent of P6071 SMPs... slower than the others. Does this series of WUs make your cards run a lot hotter too? I'm running the cards in my sig, and they're about 5-10°C hotter than on the other GPU WUs. Plus, that's all I've been getting lately.
 
I am running P10946 and P10955; PPD is 18.5k for each card, at normal temps, and each takes 1 hr 24 min to complete.
This is with the GPU3 client, though, so it might be different for you.
I think these projects just have more steps.
 
Yeah... the question was more geared to us "lower card" guys, high-speed ;-)

Just seemed like that day's worth of WUs shot my temps up and slowed things down a touch... but I'm back to 18k PPD total for both cards with the normal WUs as of this morning.

Toss me one of your 480s!!!
 
Seems pretty normal to me:

Code:
Project ID: 10111
 Core: GROGPU2
 Credit: 494
 Frames: 100

 Name: E759 285 GPU1
 Path: \\GeneralMac-PC\gpu0\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:00:46 - 9,278.6 PPD
 Avg. Time / Frame : 00:00:48 - 8,892.0 PPD
 Cur. Time / Frame : 00:00:47 - 9,081.2 PPD
 R3F. Time / Frame : 00:00:47 - 9,081.2 PPD
 All  Time / Frame : 00:00:47 - 9,081.2 PPD
 Eff. Time / Frame : 00:00:47 - 9,081.2 PPD

Name: E757 250 GPU1
 Path: \\Htpc-pc\gpu\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:01:21 - 5,269.3 PPD
 Avg. Time / Frame : 00:01:23 - 5,142.4 PPD

Name: E758 285 GPU1
 Path: \\BIGADV\gpu\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:00:50 - 8,536.3 PPD
 Avg. Time / Frame : 00:00:50 - 8,536.3 PPD
 Cur. Time / Frame : 00:00:51 - 8,368.9 PPD
 R3F. Time / Frame : 00:00:51 - 8,368.9 PPD
 All  Time / Frame : 00:00:50 - 8,536.3 PPD
 Eff. Time / Frame : 00:00:50 - 8,536.3 PPD

[Attached image: temps1.jpg]
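
For anyone curious how those PPD figures fall out of the frame times, it's just the WU credit scaled up to a day. A quick Python sketch (the function name is mine, but the formula is the standard one and reproduces the p10111 rows above exactly):

Code:
# PPD = credit * (seconds per day) / (seconds per frame * frames per WU)
def ppd(credit, tpf_seconds, frames=100):
    return credit * 86400 / (tpf_seconds * frames)

print(round(ppd(494, 47), 1))  # 9081.2 -> the 00:00:47 rows above
print(round(ppd(494, 48), 1))  # 8892.0 -> the Avg. row
print(round(ppd(494, 46), 1))  # 9278.6 -> the Min. row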
 
10xxx is too broad. It encompasses some of the highest and lowest producers. One factor at work in the range: if you're running GPU3 on a non-Fermi card, the OpenMM WUs are going to be a lot slower than the FahCore_11 WUs.
 
Ah, sorry, now that I recall, some of the 10xxx WUs do run fast... I was referring to 10110 and 10112 mainly; I should have just typed them out. I use the regular GPU2 client with FahCore_11. I get 8,200 PPD on those with my 285s, and they also run hot.

All the other GPU WUs give me ~9,500 PPD and run cooler... I just wanted to see if that was normal for those two WUs.
 
I would concur that p10111 is at the lower end of the production scale, but it's not the worst. Haven't done a p10112.
 
ChasR, I have a GTS 250 1 GB, no OC, running the GPU3 client, and I found a log for a 10111 and an 11179. Now of course the 250 is slower than a 285 (isn't a 250 a glorified 9800?), but my question is this: on the 10111, is it not still using a core 11, according to the log? I know it's running GPU3 because of the line "Gpu type=2 species=30". I see core 11 and core 15 in the GPU folder.
Just a question, I'm not good at such things lol :shrug:

The first is an 11179 (core 15), followed by a 10111 (core 11):

Code:
[00:49:15] + Attempting to get work packet
[00:49:15] Passkey found
[00:49:15] Gpu type=2 species=30.
[00:49:15] - Connecting to assignment server
[00:49:16] - Successful: assigned to (171.67.108.31).
[00:49:16] + News From Folding@Home: Welcome to Folding@Home
[00:49:16] Loaded queue successfully.
[00:49:16] Gpu type=2 species=30.
[00:49:17] + Closed connections
[00:49:17] 
[00:49:17] + Processing work unit
[00:49:17] Core required: FahCore_15.exe
[00:49:17] Core found.
[00:49:17] Working on queue slot 02 [November 5 00:49:17 UTC]
[00:49:17] + Working ...
[00:49:17] 
[00:49:17] *------------------------------*
[00:49:17] Folding@Home GPU Core -- Beta
[00:49:17] Version 2.09 (Thu May 20 11:51:02 PDT 2010)
[00:49:17] 
[00:49:17] Compiler  : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86 
[00:49:17] Build host: amoeba
[00:49:17] Board Type: Nvidia
[00:49:17] Core      : 
[00:49:17] Preparing to commence simulation
[00:49:17] - Looking at optimizations...
[00:49:17] DeleteFrameFiles: successfully deleted file=work/wudata_02.ckp
[00:49:17] - Created dyn
[00:49:17] - Files status OK
[00:49:17] sizeof(CORE_PACKET_HDR) = 512 file=<>
[00:49:17] - Expanded 44563 -> 170279 (decompressed 382.1 percent)
[00:49:17] Called DecompressByteArray: compressed_data_size=44563 data_size=170279, decompressed_data_size=170279 diff=0
[00:49:17] - Digital signature verified
[00:49:17] 
[00:49:17] Project: 11179 (Run 13, Clone 78, Gen 1)
[00:49:17] 
[00:49:17] Assembly optimizations on if available.
[00:49:17] Entering M.D.
[00:49:24] Tpr hash work/wudata_02.tpr:  1820383362 2940293628 419336615 1318853229 951300772
[00:49:24] Working on ALZHEIMER'S DISEASE AMYLOID
[00:49:24] Client config found, loading data.
[00:49:24] Starting GUI Server
[00:53:11] Completed 1%
[00:56:47] Completed 2%
[01:00:23] Completed 3%
[01:04:00] Completed 4%
[01:07:36] Completed 5%
[01:11:13] Completed 6%
[01:14:49] Completed 7%
[01:18:25] Completed 8%
[01:22:02] Completed 9%
[01:25:38] Completed 10%
[01:29:14] Completed 11%
[01:32:50] Completed 12%
[01:36:27] Completed 13%
[01:40:03] Completed 14%
[01:43:40] Completed 15%
[01:47:16] Completed 16%
[01:50:52] Completed 17%
[01:54:29] Completed 18%
[01:58:05] Completed 19%
[02:01:42] Completed 20%
[02:05:18] Completed 21%
[02:08:54] Completed 22%
[02:12:30] Completed 23%
[02:16:07] Completed 24%
[02:19:43] Completed 25%
[02:23:19] Completed 26%
[02:26:56] Completed 27%
[02:30:32] Completed 28%
[02:34:09] Completed 29%
[02:37:45] Completed 30%
[02:41:21] Completed 31%
[02:44:58] Completed 32%
[02:48:34] Completed 33%
[02:52:10] Completed 34%
[02:55:47] Completed 35%
[02:59:23] Completed 36%
[03:02:59] Completed 37%
[03:06:36] Completed 38%
[03:10:12] Completed 39%
[03:13:48] Completed 40%
[03:17:24] Completed 41%
[03:21:01] Completed 42%
[03:24:37] Completed 43%
[03:28:14] Completed 44%
[03:31:50] Completed 45%
[03:35:26] Completed 46%
[03:39:03] Completed 47%
[03:42:39] Completed 48%
[03:46:16] Completed 49%
[03:49:52] Completed 50%
[03:53:28] Completed 51%
[03:57:05] Completed 52%
[04:00:41] Completed 53%
[04:04:17] Completed 54%
[04:07:54] Completed 55%
[04:11:30] Completed 56%
[04:15:06] Completed 57%
[04:18:43] Completed 58%
[04:22:19] Completed 59%
[04:25:55] Completed 60%
[04:29:32] Completed 61%
[04:33:08] Completed 62%
[04:36:45] Completed 63%
[04:40:21] Completed 64%
[04:43:57] Completed 65%
[04:47:33] Completed 66%
[04:51:10] Completed 67%
[04:54:46] Completed 68%
[04:58:23] Completed 69%
[05:01:59] Completed 70%
[05:05:35] Completed 71%
[05:09:11] Completed 72%
[05:12:48] Completed 73%
[05:16:24] Completed 74%
[05:20:00] Completed 75%
[05:23:37] Completed 76%
[05:27:13] Completed 77%
[05:30:50] Completed 78%
[05:34:26] Completed 79%
[05:38:02] Completed 80%
[05:41:38] Completed 81%
[05:45:15] Completed 82%
[05:48:51] Completed 83%
[05:52:27] Completed 84%
[05:56:04] Completed 85%
[05:59:40] Completed 86%
[06:02:32] + Working...
[06:03:16] Completed 87%
[06:06:53] Completed 88%
[06:10:29] Completed 89%
[06:14:06] Completed 90%
[06:17:42] Completed 91%
[06:21:18] Completed 92%
[06:24:55] Completed 93%
[06:28:31] Completed 94%
[06:32:07] Completed 95%
[06:35:44] Completed 96%
[06:39:20] Completed 97%
[06:42:57] Completed 98%
[06:46:33] Completed 99%
[06:50:09] Completed 100%
[06:50:10] Finished fah_main
[06:50:10] 
[06:50:10] Successful run
[06:50:10] DynamicWrapper: Finished Work Unit: sleep=10000
[06:50:19] Reserved 2452108 bytes for xtc file; Cosm status=0
[06:50:19] Allocated 2452108 bytes for xtc file
[06:50:19] - Reading up to 2452108 from "work/wudata_02.xtc": Read 2452108
[06:50:19] Read 2452108 bytes from xtc file; available packet space=783978356
[06:50:19] xtc file hash check passed.
[06:50:19] Reserved 76080 76080 783978356 bytes for arc file=<work/wudata_02.trr> Cosm status=0
[06:50:19] Allocated 76080 bytes for arc file
[06:50:19] - Reading up to 76080 from "work/wudata_02.trr": Read 76080
[06:50:19] Read 76080 bytes from arc file; available packet space=783902276
[06:50:19] trr file hash check passed.
[06:50:19] Allocated 544 bytes for edr file
[06:50:19] Read bedfile
[06:50:19] edr file hash check passed.
[06:50:19] Allocated 120607 bytes for logfile
[06:50:19] Read logfile
[06:50:19] GuardedRun: success in DynamicWrapper
[06:50:19] GuardedRun: done
[06:50:19] Run: GuardedRun completed.
[06:50:23] + Opened results file
[06:50:23] - Writing 2649851 bytes of core data to disk...
[06:50:24] Done: 2649339 -> 2494061 (compressed to 94.1 percent)
[06:50:24]   ... Done.
[06:50:25] DeleteFrameFiles: successfully deleted file=work/wudata_02.ckp
[06:50:27] Shutting down core 
[06:50:27] 
[06:50:27] Folding@home Core Shutdown: FINISHED_UNIT
[06:50:30] CoreStatus = 64 (100)
[06:50:30] Sending work to server
[06:50:30] Project: 11179 (Run 13, Clone 78, Gen 1)
[06:50:30] - Read packet limit of 540015616... Set to 524286976.
[06:50:30] + Attempting to send results [November 5 06:50:30 UTC]
[06:50:30] Gpu type=2 species=30.
[06:50:49] + Results successfully sent
[06:50:49] Thank you for your contribution to Folding@Home.
[06:50:49] + Number of Units Completed: 580
[06:50:53] - Preparing to get new work unit...
[06:50:53] Cleaning up work directory
[06:50:53] + Attempting to get work packet
[06:50:53] Passkey found
[06:50:53] Gpu type=2 species=30.
[06:50:53] - Connecting to assignment server
[06:50:54] - Successful: assigned to (171.64.65.71).
[06:50:54] + News From Folding@Home: Welcome to Folding@Home
[06:50:54] Loaded queue successfully.
[06:50:54] Gpu type=2 species=30.
[06:50:56] + Closed connections
[06:50:56] 
[06:50:56] + Processing work unit
[06:50:56] Core required: FahCore_11.exe
[06:50:56] Core found.
[06:50:56] Working on queue slot 03 [November 5 06:50:56 UTC]
[06:50:56] + Working ...
[06:50:56] 
[06:50:56] *------------------------------*
[06:50:56] Folding@Home GPU Core
[06:50:56] Version 1.31 (Tue Sep 15 10:57:42 PDT 2009)
[06:50:56] 
[06:50:56] Compiler  : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86 
[06:50:56] Build host: amoeba
[06:50:56] Board Type: Nvidia
[06:50:56] Core      : 
[06:50:56] Preparing to commence simulation
[06:50:56] - Looking at optimizations...
[06:50:56] DeleteFrameFiles: successfully deleted file=work/wudata_03.ckp
[06:50:56] - Created dyn
[06:50:56] - Files status OK
[06:50:56] - Expanded 81865 -> 421543 (decompressed 514.9 percent)
[06:50:56] Called DecompressByteArray: compressed_data_size=81865 data_size=421543, decompressed_data_size=421543 diff=0
[06:50:56] - Digital signature verified
[06:50:56] 
[06:50:56] Project: 10111 (Run 375, Clone 0, Gen 89)
[06:50:56] 
[06:50:56] Assembly optimizations on if available.
[06:50:56] Entering M.D.
[06:51:02] Tpr hash work/wudata_03.tpr:  2503309611 620578478 1696165423 3006627093 2146588002
[06:51:02] 
[06:51:02] Calling fah_main args: 14 usage=100
[06:51:02] 
[06:51:03] Working on 1174 p10111_ubiquitin_300K
[06:51:06] Client config found, loading data.
[06:51:06] Starting GUI Server
[06:52:27] Completed 1%
[06:53:49] Completed 2%
[06:55:10] Completed 3%
[06:56:32] Completed 4%
[06:57:54] Completed 5%
[06:59:15] Completed 6%
[07:00:37] Completed 7%
[07:01:58] Completed 8%
[07:03:20] Completed 9%
[07:04:41] Completed 10%
[07:06:03] Completed 11%
[07:07:25] Completed 12%
[07:08:46] Completed 13%
[07:10:08] Completed 14%
[07:11:29] Completed 15%
[07:12:51] Completed 16%
[07:14:13] Completed 17%
[07:15:34] Completed 18%
[07:16:56] Completed 19%
[07:18:17] Completed 20%
[07:19:39] Completed 21%
[07:21:00] Completed 22%
[07:22:22] Completed 23%
[07:23:44] Completed 24%
[07:25:05] Completed 25%
[07:26:39] Completed 26%
[07:28:01] Completed 27%
[07:29:22] Completed 28%
[07:30:44] Completed 29%
[07:32:05] Completed 30%
[07:33:27] Completed 31%
[07:34:49] Completed 32%
[07:36:10] Completed 33%
[07:37:32] Completed 34%
[07:38:53] Completed 35%
[07:40:15] Completed 36%
[07:41:36] Completed 37%
[07:42:58] Completed 38%
[07:44:20] Completed 39%
[07:45:41] Completed 40%
[07:47:03] Completed 41%
[07:48:24] Completed 42%
[07:49:46] Completed 43%
[07:51:07] Completed 44%
[07:52:29] Completed 45%
[07:53:51] Completed 46%
[07:55:12] Completed 47%
[07:56:34] Completed 48%
[07:57:55] Completed 49%
[07:59:17] Completed 50%
[08:00:38] Completed 51%
[08:02:00] Completed 52%
[08:03:22] Completed 53%
[08:04:43] Completed 54%
[08:06:05] Completed 55%
[08:07:26] Completed 56%
[08:08:48] Completed 57%
[08:10:10] Completed 58%
[08:11:31] Completed 59%
[08:12:53] Completed 60%
[08:14:14] Completed 61%
[08:15:36] Completed 62%
[08:16:58] Completed 63%
[08:18:19] Completed 64%
[08:19:41] Completed 65%
[08:21:02] Completed 66%
[08:22:24] Completed 67%
[08:23:46] Completed 68%
[08:25:07] Completed 69%
[08:26:29] Completed 70%
[08:27:51] Completed 71%
[08:29:12] Completed 72%
[08:30:34] Completed 73%
[08:31:56] Completed 74%
[08:33:17] Completed 75%
[08:34:39] Completed 76%
[08:36:00] Completed 77%
[08:37:22] Completed 78%
[08:38:43] Completed 79%
[08:40:05] Completed 80%
[08:41:27] Completed 81%
[08:42:48] Completed 82%
[08:44:10] Completed 83%
[08:45:31] Completed 84%
[08:46:53] Completed 85%
[08:48:15] Completed 86%
[08:49:36] Completed 87%
[08:50:58] Completed 88%
[08:52:19] Completed 89%
[08:53:41] Completed 90%
[08:55:02] Completed 91%
[08:56:24] Completed 92%
[08:57:46] Completed 93%
[08:59:07] Completed 94%
[09:00:29] Completed 95%
[09:01:50] Completed 96%
[09:03:12] Completed 97%
[09:04:34] Completed 98%
[09:05:55] Completed 99%
[09:07:17] Completed 100%
[09:07:17] Successful run
[09:07:17] DynamicWrapper: Finished Work Unit: sleep=10000
[09:07:27] Reserved 94820 bytes for xtc file; Cosm status=0
[09:07:27] Allocated 94820 bytes for xtc file
[09:07:27] - Reading up to 94820 from "work/wudata_03.xtc": Read 94820
[09:07:27] Read 94820 bytes from xtc file; available packet space=786335644
[09:07:27] xtc file hash check passed.
[09:07:27] Reserved 28296 28296 786335644 bytes for arc file=<work/wudata_03.trr> Cosm status=0
[09:07:27] Allocated 28296 bytes for arc file
[09:07:27] - Reading up to 28296 from "work/wudata_03.trr": Read 28296
[09:07:27] Read 28296 bytes from arc file; available packet space=786307348
[09:07:27] trr file hash check passed.
[09:07:27] Allocated 560 bytes for edr file
[09:07:27] Read bedfile
[09:07:27] edr file hash check passed.
[09:07:27] Allocated 10780 bytes for logfile
[09:07:27] Read logfile
[09:07:27] GuardedRun: success in DynamicWrapper
[09:07:27] GuardedRun: done
[09:07:27] Run: GuardedRun completed.
[09:07:31] + Opened results file
[09:07:31] - Writing 134968 bytes of core data to disk...
[09:07:31] Done: 134456 -> 128550 (compressed to 95.6 percent)
[09:07:31]   ... Done.
[09:07:31] DeleteFrameFiles: successfully deleted file=work/wudata_03.ckp
[09:07:31] Shutting down core 
[09:07:31] 
[09:07:31] Folding@home Core Shutdown: FINISHED_UNIT
[09:07:34] CoreStatus = 64 (100)
[09:07:34] Sending work to server
[09:07:34] Project: 10111 (Run 375, Clone 0, Gen 89)
[09:07:34] - Read packet limit of 540015616... Set to 524286976.


[09:07:34] + Attempting to send results [November 5 09:07:34 UTC]
[09:07:34] Gpu type=2 species=30.
[09:07:36] + Results successfully sent
[09:07:36] Thank you for your contribution to Folding@Home.
[09:07:36] + Number of Units Completed: 581

[09:07:40] - Preparing to get new work unit...
[09:07:40] Cleaning up work directory
[09:07:40] + Attempting to get work packet
[09:07:40] Passkey found
[09:07:40] Gpu type=2 species=30.
[09:07:40] - Connecting to assignment server
[09:07:40] - Successful: assigned to (171.64.65.71).
[09:07:40] + News From Folding@Home: Welcome to Folding@Home
[09:07:41] Loaded queue successfully.
[09:07:41] Gpu type=2 species=30.
[09:07:42] + Closed connections
 
p11179 is an OpenMM WU and p10111 is not.

Production on an 8800GTS (G92) (the only rig I have that has run both) is almost identical on the two WUs, and both produce about 1,200 PPD (18%) less than other FahCore_11 WUs.
 
Here's an odd question... With multi-CPU boards out now (Xeon, i7-type SR-2 mobos), you can have one instance of F@H folding across a stupid crazy number of threads, 16 or 32 or whatever, but for multiple GPUs in some of our rigs you have to run a separate instance of F@H per GPU. The way I see it, stacking one GPU on another would cut TPF by 50% and make installs easier, so the same amount of work would get done.

I guess that's a question more for Pande Group...
 
I doubt there will ever be a folding core that works on two GPUs simultaneously.
Communication between GPUs is only possible via the motherboard chipset, and that isn't as fast as the on-board GPU memory.
It could be done, but the performance gain would be maybe 30%... so it's better to leave one instance of the folding core per GPU.
It can be done for CPUs since the i7 got QPI, which is fast enough for inter-core communication and other stuff.
(Correct me if I'm wrong on this post.)
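
To put rough numbers on why QPI makes this workable for CPUs while PCIe doesn't for GPUs, here are the published spec figures side by side (approximate; the VRAM line is the GTX 280-class spec, not anything measured in this thread):

Code:
# Rough bandwidth of each link involved (GB/s), from published specs
links = {
    "PCIe 2.0 x16 (GPU <-> GPU via chipset)":  8.0,   # per direction
    "QPI @ 6.4 GT/s (CPU <-> CPU)":           12.8,   # per direction
    "GTX 280-class VRAM (core <-> memory)":  141.7,   # total
}
for name, gbs in links.items():
    print(f"{name}: {gbs} GB/s")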
 
I'm pretty darn sure multi-core, single-PCB GPUs that don't rely on SLI or CrossFireX can't be far away, and someone will surely adapt OpenMP or something similar to take advantage of the parallelism. The problem will come in putting enough memory on the cards to make parallelism practical.
 
From what I can find, GDDR4 at 1.6 GHz is 3.2 Gbit/s. One PCIe 2.0 x16 slot can handle 8 GB/s (64 Gbit/s) per direction. So there is plenty of room for linking multiple GPUs to each other with common mainstream tech and no need for any SLI/CrossFire bridge.

EDIT: Ugh, I always miss some details. The 3.2 Gbit/s is per pin; I'm seeing 12.8 GB/s per module listed on the wiki. So yeah, I guess PCIe is too slow, but then again the app may not need the full speed of the VRAM between cores.
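
The per-pin vs. per-module mixup is an easy one to make; the conversion is just pin rate times bus width (a 32-bit module width is assumed here, which is what yields the wiki's 12.8 GB/s):

Code:
per_pin_gbit = 3.2   # GDDR4 @ 1.6 GHz, double data rate, as quoted above
module_width = 32    # bits per module (assumed typical width)
print(per_pin_gbit * module_width / 8)  # 12.8 GB/s -- the wiki figure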
 
Communication between cores < communication inside a core.

Single-PCB GPUs might be OK for this kind of solution, but I don't think the gain will be anywhere near that of two clients running on separate GPUs.

Two cards in two different PCIe slots is even worse; consider that bandwidth between the core and its memory is about 140 GB/s... and although folding doesn't require that much, it would still be a performance hit. For example, when games run out of memory on the GPU, the game doesn't crash anymore; instead it starts using system RAM, and FPS goes down, down, down...

GPUs are already massively multithreaded cores; spreading one WU across multiple GPUs just isn't as efficient, sorry.
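
Putting the two figures from this post side by side shows the size of the gap a split-WU core would have to live with (just the ratio of the numbers quoted above):

Code:
vram_gbs = 140.0  # core <-> on-card memory, as quoted above
pcie_gbs = 8.0    # PCIe 2.0 x16, one direction
print(vram_gbs / pcie_gbs)  # 17.5x -- inter-GPU traffic over the bus is
                            # that much slower than local VRAM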
 
I have been getting some interesting 10k-series units.
The 10930 loads my GPU at 99% as usual, but my temps are 20°C cooler than normal, and my PPD is almost double when processing this unit. Does anyone know why? I've been getting them for over a week now...
 
Here's my favorite WU in the range, p10505. Notice the 9600 GSO with shaders @ 1826 pwns my GTX 260 with shaders @ 1556:
Code:
Project ID: 10505
 Core: GROGPU2
 Credit: 451
 Frames: 100


 Name: ChasR GTX260
 Path: E:\FAH GPU\GPU1
 Number of Frames Observed: 300

 Min. Time / Frame : 00:00:04 - 97,416 PPD
 Avg. Time / Frame : 00:00:55 - 7,085 PPD


 Name: JRB 8800GT
 Path: \\Abco-tysamail\JRB GPU\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:00:47 - 8,291 PPD
 Avg. Time / Frame : 00:00:49 - 7,952 PPD


 Name: MARIE B 9600GSO
 Path: \\Abco-cahill\FAH\GPU\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:00:48 - 8,118 PPD
 Avg. Time / Frame : 00:00:50 - 7,793 PPD
 Cur. Time / Frame : 00:00:49 - 7,952 PPD
 R3F. Time / Frame : 00:00:49 - 7,952 PPD
 All  Time / Frame : 00:00:49 - 7,952 PPD
 Eff. Time / Frame : 00:00:50 - 7,793 PPD

I think you should ignore the 0:04/frame minimum on the GTX260. I have no idea how it got there, but it would be nice if it really were a regular occurrence. :D
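
The credit math backs that up: at p10505's 451 points, a real 4-second frame would imply an absurd rate (same standard PPD formula as earlier, sketched in Python):

Code:
def ppd(credit, tpf_seconds, frames=100):
    return credit * 86400 / (tpf_seconds * frames)

print(ppd(451, 4))   # 97416.0 -- the suspicious Min. row
print(ppd(451, 55))  # 7084.8  -- the believable Avg. row (~7,085)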
 
Here's my favorite WU in the range, p10505. Notice the 9600 GSO with shaders @ 1826 pwns my GTX 260 with shaders @ 1556:

I haven't gotten one on either of my 9600 GSOs yet, but p10505 is apparently one of those WUs where shader clock beats shader count.
Code:
 Name: Dan GTX250  shaders @ 1620
 Path: \\DANCOMPUTER\Folding GPU\
 Number of Frames Observed: 198

 Min. Time / Frame : 00:00:56 - 6,958 PPD
 Avg. Time / Frame : 00:00:57 - 6,836 PPD


 Name: Outback GTX260-2  shaders @ 1404
 Path: K:\Folding GPU0\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:00:55 - 7,085 PPD
 Avg. Time / Frame : 00:01:00 - 6,494 PPD
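
A rough sanity check on the clock-over-count idea, using just the two Avg. rows above (only two data points, so take it loosely):

Code:
print(round(6836 / 6494, 2))  # 1.05 -- PPD ratio, 250 @ 1620 vs 260 @ 1404
print(round(1620 / 1404, 2))  # 1.15 -- shader clock ratio; the lower-
                              # shader-count card still wins, consistent
                              # with p10505 tracking clock, not shader count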
 