- Joined: Jan 27, 2011
- Location: Beautiful Sunny Winfield
I'm new to FAH and have gotten the SMP CPU client and the Windows GPU client running on my Linux box. Now I'm wondering about some of the options that the FAH config asks about and if they will affect throughput.
There is a choice for memory:
Code:
Acceptable size of work assignment and work result packets (bigger units
may have large memory demands) -- 'small' is <5MB, 'normal' is <10MB, and
'big' is >10MB (small/normal/big) [big]?
Is bigger better? I've gone with big, and the memory footprint seems to be pretty small (smaller than X and the Chromium browser).
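For reference, the interactive config just writes this choice into the client's config file. A sketch of what I believe the relevant entry looks like in the v6 client.cfg (key name from memory, so verify against the file your own client generates):

```ini
[settings]
; small / normal / big -- corresponds to the packet-size prompt above
bigpackets=big
```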
Then there are the advanced options:
Code:
Change advanced options (yes/no) [no]? yes
Core Priority (idle/low) [idle]?
CPU usage requested (5-100) [100]?
Disable highly optimized assembly code (no/yes) [no]?
Pause if battery power is being used (useful for laptops) (no/yes) [no]?
Interval, in minutes, between checkpoints (3-30) [15]?
Memory, in MB, to indicate (3960 available) [3960]?
Set -advmethods flag always, requesting new advanced
scientific cores and/or work units if available (no/yes) [no]?
Ignore any deadline information (mainly useful if
system clock frequently has errors) (no/yes) [no]?
Machine ID (1-16) [2]?
Launch automatically, install as a service in this directory (yes/no) [no]?
The following options require you to restart the client before they take effect
Disable CPU affinity lock (no/yes) [no]?
Additional client parameters []?
IP address to bind core to (for viewer) []?
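For what it's worth, a few of those prompts can also be supplied as command-line switches on the v6 console client instead of answering the questionnaire every time. A hedged example (flag names as I understand them from my own install; confirm against your client's built-in help output before relying on them):

```shell
# Run the SMP client, always request advanced work units
# (same as answering "yes" to the -advmethods prompt above),
# and raise logging verbosity for troubleshooting.
./fah6 -smp -advmethods -verbosity 9
```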
Which, if any, of those options will affect the points I earn, and how should they be set? If the answers to the above questions differ between Windows, Linux, and Linux as a VM guest on Windows, please specify.
As for Linux, I'm running Ubuntu 10.04 64-bit desktop. While working with Rosetta/CPU and SETI/GPU, I found the best balance of performance came from running a real-time (RT) kernel. I also tried the CK kernel, but without further tuning it didn't seem to produce as well for those tasks as the RT kernel. Running two instances of SETI also proved beneficial. The RT kernel kept the GPU well fed without my having to sacrifice a CPU core to do so; the price was roughly a 15% reduction in CPU throughput from the extra overhead of the RT kernel. I plan to test different kernels with FAH to see if anything beats the RT kernel. Still, since my GPU (stock-clocked GTX 460) crunches about 4-5 times as fast as my CPU (overclocked Phenom II X4 820), a small reduction in CPU performance to keep the GPU at 95% utilization seems reasonable.
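To sanity-check that tradeoff, here's a back-of-the-envelope comparison of the two ways to keep the GPU fed, using the numbers above (GPU roughly 4.5x the CPU's output, RT kernel costing about 15% of CPU throughput). The "dedicate one core to feed the GPU" alternative and its utilization figures are my own assumptions for illustration, not measurements:

```python
# Rough relative-throughput comparison, in units of "one whole CPU's output".
GPU_RATIO = 4.5      # assumption: GPU produces ~4.5x what the full CPU does
CORES = 4            # Phenom II X4: four cores
RT_OVERHEAD = 0.15   # ~15% CPU throughput lost to RT-kernel overhead

# Option A: RT kernel -- all four cores fold, each paying the overhead.
rt_total = GPU_RATIO + CORES * (1 - RT_OVERHEAD) / CORES

# Option B: stock kernel -- one core reserved just to feed the GPU.
dedicated_total = GPU_RATIO + (CORES - 1) / CORES

print(f"RT kernel:      {rt_total:.2f}x")        # 5.35x
print(f"Dedicated core: {dedicated_total:.2f}x")  # 5.25x
```

Under these assumptions the RT kernel comes out slightly ahead, which matches what I saw in practice.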
Feel free to share any knowledge you have of legitimate ways to maximize points (or point me to the appropriate resource).
thanks,
hank