
Core Performance Guide

walaka7 said:
I did all but three frames of that one on the Dothan and it was a dedicated run, no playing around with goodies. I wanted convergence to have little effect on the overall picture as the frame times are considerably shorter. And I concur on the bandwidth limitations, as it appears to be the issue. I'm not sure how much more testing I'm gonna do on the Dothan and QMDs, as you can't just get them on their own; they have to be piped or, in my case, already there with the proc swap. RAM timing was not the tightest, 2-3-3-6, but the testing was done on a 533 version of the Dothan and memory was at 200 MHz.


Oh and yes SSE2 was enabled :)

I really meant steps, not frames. QMDs all have 100 frames, but the number of total steps and the steps per frame vary. The time per step is consistent after convergence (actually it's pretty consistent pre-convergence too, just about 25% faster than post-convergence steps). All QMDs start at 2000 steps, but the total number is calculated prior to convergence. Some wind up with close to 2000 steps, most wind up with about 2070 steps, but some wind up with nearly 2200 steps. On a 2070 step WU, 10 frames will have 207 steps, meaning 7 of those frames have 21 steps and 3 will have 20 steps. The 20 step frames will take 5% less time to complete than the 21 step frames. A 2200 step WU would converge in frame 9 and take about 10% longer than a 2000 step WU; PPD would be about 10% less. That's why I asked how many steps, to see if you had an average, short or long QMD.
I wondered if SSE2 would be enabled, since the client misidentifies the Dothan as a P3, but you have answered my question.
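To put that arithmetic in one place, here's a rough sketch in Python (how the client actually distributes the leftover steps across frames is my reading of it, not something documented):

Code:
# Quick sanity check of the step/frame math above (illustrative only).
total_steps = 2070
frames = 100
base = total_steps // frames                        # 20 steps in a "short" frame
per_10_frames = round(total_steps / frames * 10)    # 207 steps in any 10-frame block
long_frames = per_10_frames - base * 10             # 7 frames of 21 steps per block
short_frames = 10 - long_frames                     # 3 frames of 20 steps per block
print(per_10_frames, long_frames, short_frames)     # 207 7 3

# Relative run time / PPD vs a 2000-step WU, assuming constant time per step:
print(2200 / 2000 - 1)                              # 0.10 -> ~10% longer, so ~10% fewer PPD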

Here's a sample from the log.

Code:
A 2068 step QMD
[03:18:23] - Number of total steps will change until convergence
[03:19:28] Completed 0 out of 2000 steps  (0)
[03:35:13] Completed 21 out of 2021 steps  (1)
[03:35:13] Writing local files
[03:50:16] Completed 41 out of 2041 steps  (2)
[03:50:16] Writing local files
[04:06:12] Completed 62 out of 2062 steps  (3)
[04:06:12] Writing local files
[04:10:48] Timered checkpoint triggered.
[04:10:52] WF converged, jumping to MD
[04:10:52] Verifying checksum
[04:10:52] Finished
[04:11:53] Completed 68 out of 2068 steps  (3)
[04:26:49] Completed 83 out of 2068 steps  (4)
[04:26:49] Writing local files 
[04:47:46] Completed 104 out of 2068 steps  (5)
[04:47:46] Writing local files 
[05:08:42] Completed 125 out of 2068 steps  (6)
[05:08:42] Writing local files 
[05:28:39] Completed 145 out of 2068 steps  (7)

A 2192 step QMD
[16:58:53] - Number of total steps will change until convergence
[16:59:40] Completed 0 out of 2000 steps  (0)
[17:10:19] Completed 21 out of 2021 steps  (1)
[17:10:19] Writing local files
[17:20:32] Completed 41 out of 2041 steps  (2)
[17:20:32] Writing local files
[17:31:10] Completed 62 out of 2062 steps  (3)
[17:31:10] Writing local files
[17:42:25] Completed 84 out of 2084 steps  (4)
[17:42:25] Writing local files
[17:53:33] Completed 106 out of 2106 steps  (5)
[17:53:33] Writing local files
[17:59:53] - Autosending finished units...
[17:59:53] Trying to send all finished work units
[17:59:53] + No unsent completed units remaining.
[17:59:53] - Autosend completed
[18:04:50] Completed 128 out of 2128 steps  (6)
[18:04:50] Writing local files
[18:16:33] Completed 151 out of 2151 steps  (7)
[18:16:33] Writing local files
[18:28:15] Completed 174 out of 2174 steps  (8)
[18:28:15] Writing local files
[18:37:21] Timered checkpoint triggered.
[18:37:24] WF converged, jumping to MD
[18:37:24] Verifying checksum
[18:37:24] Finished
[18:38:06] Completed 192 out of 2192 steps  (8)
[18:42:08] Completed 198 out of 2192 steps  (9)
[18:42:08] Writing local files 
[18:56:57] Completed 220 out of 2192 steps  (10)
[18:56:57] Writing local files 
[19:11:45] Completed 242 out of 2192 steps  (11)
[19:11:45] Writing local files 
[19:26:33] Completed 264 out of 2192 steps  (12)
[19:26:33] Writing local files
 
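If you want to pull time-per-step out of a log like this, something along these lines works (a rough Python sketch; the pattern just matches the lines above, and midnight rollover and the pre/post-convergence jump aren't handled specially):

Code:
# Rough sketch: seconds per step from "Completed X out of Y steps" lines in an FAHlog.
import re
from datetime import datetime

pat = re.compile(r"\[(\d{2}:\d{2}:\d{2})\] Completed (\d+) out of (\d+) steps")

def step_times(path):
    rows = []
    with open(path) as f:
        for line in f:
            m = pat.search(line)
            if m:
                t = datetime.strptime(m.group(1), "%H:%M:%S")
                rows.append((t, int(m.group(2))))
    # Time between successive progress lines, divided by steps completed in between
    for (t0, s0), (t1, s1) in zip(rows, rows[1:]):
        if s1 > s0:
            print(f"steps {s0:4d} -> {s1:4d}: {(t1 - t0).seconds / (s1 - s0):.1f} s/step")

step_times("FAHlog.txt")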
Oh yeah, that too, LOL. Sorry, I misunderstood. It was a 2068 step WU. To be honest I don't know if I'm gonna do any more QMDs, as I no longer have a celly to get them. Although I'm thinking of getting another cheapo Intel rig and letting it fold whilst I ponder another Dothan, in which case I will have more QMDs to play with.
 
Back to stock settings... Athlon 64 3200+, 1GB PC3200... etc...

i have a couple more log files for ya
 

Attachments

  • FAHlog-Prev.txt
    190.5 KB · Views: 41
  • FAHlog.txt
    63.6 KB · Views: 56
my wife's rig...

Socket A
Sempron 2400+ OC'd to 1.807GHz

768MB of PC2700 DDR

PS... my A64 is a Socket 754...
 

Attachments

  • FAHlog.txt
    39.7 KB · Views: 62
  • FAHlog-Prev.txt
    93 KB · Views: 55
Are you still looking for logs/data? ... I have 5 P4s folding, and after reading nikhsub1's and ChasR's recommendations/comments I have shut down the BP running on 2 of them, leaving just the QMDs ... on my main rig I see it speed up immediately ... on the other I'm letting it run a bit to get averages (it's my slowest P4). My main rig's logs will be volatile data since I use it during the day for work stuff etc ...
But 2 of my rigs only fold, so they should have consistent data.
And some day I am going to figure out the mysteries of memory timings, since I suspect my OCZ EB 3500, 3700 and EL 4000 can probably produce more bandwidth.

Since bandwidth seems so important ... is my latest rig, a 2.4C at 270 FSB 5:4, getting less throughput than running at a lower clock with mem 1:1? It is actually running my fastest memory, OCZ EL4000 rev 2, since I built it last ... but I have played with it the least.
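For what it's worth, the divider math itself is easy to sanity-check (a minimal Python sketch; the 240 FSB 1:1 line is just an illustrative number, not your rig's actual limit):

Code:
# Effective memory clock for a given FSB and CPU:MEM divider (illustrative numbers only).
def mem_clock(fsb_mhz, cpu_ratio, mem_ratio):
    return fsb_mhz * mem_ratio / cpu_ratio

print(mem_clock(270, 5, 4))   # 216 MHz memory at 270 FSB with the 5:4 divider
print(mem_clock(240, 1, 1))   # 240 MHz memory at a hypothetical 240 FSB run 1:1
# So a lower-FSB 1:1 run can still deliver more raw memory bandwidth,
# which is what seems to matter most for QMDs per the discussion above.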
 
Yeah, I'm still taking in data for this. RL has been a bit of a PITA, but I'm still working on it when I can. After next week things will settle down a bit until November, so that leaves some weeks to make things happen :D

OK, some Newcastle data thanks to Who.
 
some more logs for ya!

This is from my work lappy.

1.6 GHz Centrino
256MB RAM
WinXP Pro

enjoy!
 

Attachments

  • FAHlog-Prev.txt
    49.7 KB · Views: 47
  • FAHlog.txt
    48.7 KB · Views: 58
I have been shuffling components around the barns to accommodate new folding animals ... have been learning a bit more about tuning for folding too ... once things stabilize I will contribute (sometime next week I hope).
 
Hmmm, there's gotta be a lot of units around worth less than 50 PPD/GHz on AMDs, because looking at my records, during a month where I had slightly over 7 GHz running, I only got an average of 40 PPD/GHz, and my highest-producing week recently with one rig averaged 48 PPD/GHz. Whenever I see actual frame times given on specific units, my rigs seem to match them, but whenever anyone starts quoting PPD they always seem way off. I remember it taking 3 AMD rigs running to stay solidly dark blue (200 PPD) on EOC stats, and 4 (7 GHz in total) didn't get me green (400 PPD) all the time; it just flickered to green for a couple of updates every few days.

The project 244 running at the moment I figure is only giving 45 PPD/GHz.
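Just to restate that month-average arithmetic in one place (a trivial Python sketch using the figures quoted above):

Code:
# The averages quoted above, restated (EOC colour thresholds as given).
total_ghz = 7.0                       # aggregate clock folding during the month
ppd_per_ghz = 40                      # observed monthly average
total_ppd = total_ghz * ppd_per_ghz   # ~280 PPD
print(total_ppd)
print(total_ppd >= 200)               # True  -> solidly dark blue on EOC
print(total_ppd >= 400)               # False -> not reliably green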
 
How do I track a WU and how long it takes exactly...

I just formatted and am downloading 2 QMDs and 2 regulars (instances 1 and 3 are big packets; 2 and 4 are regular).

Dual Xeon 2.2 + HT with 1GB (4 x 256MB) PC800 Rambus

FAH.JPG
 
my logs if they help

Dual Xeon 2.2s + HT / 1GB (4 x 256MB) PC800 Rambus / 2 x 18GB 15k RPM Ultra160 SCSI in RAID 0
 

Attachments

  • FAHlogs.zip
    14.7 KB · Views: 37
Mr Guv ... the only tool I have found for tracking WUs is EM III ... once a WU has been running a while (e.g. QMDs after convergence), it will give you elapsed time, time to finish, and time per frame.

I would like to find something that tracks history by folding instance, since it would help in optimizing the machine tuning for folding.

I have thought about doing some programming to do this ... maybe over Xmas.

There is another tool called FahMon, but it doesn't project ETAs for QMDs, which is about all I am folding ATM. It is lighter weight and has less function than EM III ... I use it as well, since it can be set up not to poll all the machines every 2.5 or 5 seconds like EM III does.
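If you do get to that programming project, something like this could be a starting point (a minimal Python sketch; the instance folder names and the FAHlog.txt location are assumptions about a typical multi-instance console install, not a finished tool):

Code:
# Minimal sketch: append per-instance frame progress to a CSV for history.
import csv
import re
from pathlib import Path

pat = re.compile(r"\[(\d{2}:\d{2}:\d{2})\] Completed (\d+) out of (\d+) steps\s+\((\d+)\)")

def record(instance_dir, out_csv="frame_history.csv"):
    log = Path(instance_dir) / "FAHlog.txt"
    with open(out_csv, "a", newline="") as out:
        writer = csv.writer(out)
        for line in log.read_text().splitlines():
            m = pat.search(line)
            if m:
                # instance, timestamp, steps done, total steps, frame number
                writer.writerow([instance_dir, m.group(1), m.group(2), m.group(3), m.group(4)])

for inst in ("FAH1", "FAH2"):   # hypothetical instance folders
    record(inst)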
 
Mr.Guvernment said:
I just formatted and am downloading 2 QMDs and 2 regulars (instances 1 and 3 are big packets; 2 and 4 are regular).

Dual Xeon 2.2 + HT with 1GB (4 x 256MB) PC800 Rambus

You'll get far more PPD by folding one QMD and one non-QMD big WU than you will folding four instances with two QMDs. When big WUs are in short supply and you get a small Gromac, crank up a third instance as a timeless WU. No need to undo the quad install; in fact it gives you the ultimate flexibility to manage WUs. Finish the WUs on two of the instances with the -oneunit flag set in the registry, then disable those services or set them to manual after the WUs complete.
 