
A guide to folding under Linux

Okay, I broke it again. I did a much-needed full re-install with Karmic Ubuntu Server. I then followed the instructions on the HFM site, and got the

Code:
Encoding name 'Windows-1252' not supported

error again. A search on this error revealed that we need to install the libmono-i18n2.0-cil package as well.
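For reference, installing that package on Ubuntu is a one-liner (package name as it existed around Karmic; later releases may rename or split it):

```shell
# pull in Mono's i18n assemblies, which provide the Windows-1252 encoding
sudo apt-get update
sudo apt-get install libmono-i18n2.0-cil
```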



Do not use 9.10; GPU performance is awful while the CPU client is running, and I am currently trying to figure out why. Use 9.04 for maximum performance.

I'm not sure this is much of a problem any more. I'll report more info once I have run through one or two SMP WUs. After the re-install I accidentally started a uniprocessor WU and figured I would just finish it (in the name of SCIENCE!!). Then, when I started an SMP WU, it gave me an A1 core instead of the A3s I had been getting, so I want to see a better mix first.

Possible components that are helping out
-Updated kernel
-Server scheduler
-Nvidia driver 195.30
-Setting GPU priority higher than SMP priority
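The last item in the list above can be sketched with renice. A throwaway sleep process stands in for the SMP client here; on a real box you would target the actual client PIDs (e.g. via pgrep), and the client process names vary by setup:

```shell
# demote a stand-in "SMP client" so the GPU client's feeder thread
# always wins the CPU when they compete
sleep 60 &                  # stand-in for the SMP client process
SMP_PID=$!
renice -n 19 -p "$SMP_PID"  # 19 = lowest priority on Linux
ps -o ni= -p "$SMP_PID"     # shows the new nice level
```

Raising a process's niceness needs no root; lowering it (giving the GPU client a negative nice value) would require sudo.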

And I noticed a correction that needs to be made; I've seen many people in the folding forums making this mistake.

In case you were wondering, the reason that I'm using the 7.10 toolkit is because the GPU2 wrapper was written for that version of the toolkit, newer toolkits do not work.

The GPU2 client and the wrapper are coded for compatibility with CUDA 2.0, but toolkit 2.3 (current) works fine. I tried the beta 3.0 toolkit but had linking issues due to file-name changes: libcudart.so.2 is now libcudart.so.3. Someone mentioned just making a link named libcudart.so.2 pointing to libcudart.so.3, but I had no luck with that and downgraded to 2.3. If Nvidia didn't break any backward compatibility, the wrapper should work fine with CUDA 3.0. The main issue is when Stanford changes which toolkit FAH is built against; that is what can break the wrapper, not so much the toolkit itself.

In the end, as long as you are using a toolkit that has the correct features (2.0+), the wrapper will work.
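The symlink workaround mentioned above (which did not help this poster, so treat it as a long shot) looks like this, demonstrated here in a scratch directory. On a real install the runtime lives under the toolkit's lib directory (commonly /usr/local/cuda/lib) and the link would need sudo plus an ldconfig run:

```shell
# alias the CUDA 3.0 runtime under the 2.x soname the wrapper links against
LIBDIR=$(mktemp -d)
touch "$LIBDIR/libcudart.so.3"                 # stand-in for the real runtime
ln -s libcudart.so.3 "$LIBDIR/libcudart.so.2"  # old name -> new file
ls -l "$LIBDIR/libcudart.so.2"
```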

The GPU3 client with OpenCL could cause many problems for us, though, if it is not immediately ported to Linux.
 
HFM.net on a console-only server?

hi,
I'm guessing that using Mono means X is running and you would view HFM.net in that? Any chance that HFM could work without the GUI, just generating HTML pages for viewing?

My situation is that I'm using a Debian box remotely, and only have console access. Any hints on how to do this would be helpful. (I could do the HFM config on a box with GUI, make sure it works and then move the files over)
thanks!

PS - your guide looks great
 

You could set up HFM on your box with a GUI, then point to the f@h directory on the remote box.
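One hedged way to make the remote folding directory visible to the GUI box is an sshfs mount (the sshfs package must be installed; the host name and paths below are made up, so substitute your own):

```shell
# mount the remote f@h directory locally over SSH, then point
# HFM's client path at the mount point
mkdir -p ~/remote-fah
sshfs user@debianbox:/opt/foldingathome ~/remote-fah
```

NFS or Samba would work just as well; HFM only needs to read the log files.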
 
Are these instructions still completely valid? Anyone with high Linux-ninja skills (I'm looking at you, Shelnutt or Zerix) want to take a crack at updating it?

...or should I just deprecate these instructions and point people to the GPU2-in-Wine wiki? Didn't it move? What's the new address?
 
www.linuxfah.info. My goal is to make this a general folding-in-Linux wiki, not just the GPU client, though only the GPU info is up so far. I'm probably not the best person to write a guide for Ubuntu/Debian specifically, but I can probably get most of it (I don't run it, so I can't confirm or get screenshots of specifics). Who is running Ubuntu 9.10 or 10.04?

Short term goal:
Let's update the wiki to have separate pages for all the different clients.

Long term goal:
Have generic pages for each client and then distro specific pages with screenshots.

What are your thoughts, h?
 
Not sure what my thoughts are yet... who owns linuxfah.info? Do you own it, and are you in full control?

My goal is to clean up our guides here on OCF, so dealing with this external information, even though it's maintained by a Team 32 member (I'm assuming), isn't the focus of what I'm trying to do.

Certainly if you/others want to build up-to-date, detailed guides and place them on linuxfah.info... I'll be more than happy to publish links to those guides from our stickies.

Again, my goal is "out with the old, in with the new"... and this Linux guide has served us well in the last six+ months. But since Sydney isn't around to update it anymore I was hoping to get a confirmation on whether this content is still valid. If it's not, it's out.
 
Yeah I own the domain (got a year lease for $1.06 after tax :D), and IMOG is hosting it.

Here is what I'm thinking: I'll update my wiki over the next few days and start getting everything organized and updated. Then I can just copy/paste the text over here and keep this thread in sync as the wiki is updated. What do you think about that?

The info here is outdated. About 50% of it is still right on, but most of the download links are wrong, and the commands need to be updated to reflect the new downloads/software names. Also, the GPU2 client is being phased out (as you know) and we have a new wrapper for the GPU3 client (it's backwards compatible but requires CUDA >= 3.0).
 
OK... that's what I needed to know. I figured the commands to get things done might still be somewhat correct, but of course the software versions would be out-of-date by now, which makes perfect sense.

I don't really think it's totally necessary to duplicate the information over here at OCF. I'm fine with just maintaining links to the guides posted on your site. Less work for you and me. :)
 
Is the information here and at http://linuxfah.info/index.php?title=Main_Page#Linking_The_Toolkit still current for running a GPU task on Linux?

I'm currently using the 260.19.29 drivers and have the 64 bit CUDA 3.2 toolkit installed. I'm using this to crunch SETI on my GPU and I'm reluctant to mess with that to get FAH running on my GPU (GTX 460) instead.

I wonder if it would be possible to run FAH on a VM guest running the correct version of the S/W. Anyone know anything about that?

thanks,
hank
 
Is the information here and at http://linuxfah.info/index.php?title=Main_Page#Linking_The_Toolkit still current for running a GPU task on Linux? No. And some of it is wrong. A better page seems to be http://www.gpu3.hostei.com/index.php/Main_Page. At least the links to the wrapper work.


At the moment it is running on my box. I have the 64-bit v3.2 toolkit installed in the default location and the 32-bit v3.0 toolkit installed in /usr/local/cuda32-30/cuda. I've created an alias that alters my PATH to point to the 32-bit toolkit's bin directory and sets an LD_LIBRARY_PATH variable pointing to its libraries. It seems to work well with the 260 drivers.
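A sketch of the environment described above; the /usr/local/cuda32-30/cuda prefix matches the post, so adjust it for your own layout. These lines can be wrapped in a shell alias or sourced before launching the GPU client:

```shell
# point PATH and the dynamic linker at the 32-bit CUDA 3.0 toolkit
CUDA32=/usr/local/cuda32-30/cuda
export PATH="$CUDA32/bin:$PATH"       # pick up the 32-bit binaries first
export LD_LIBRARY_PATH="$CUDA32/lib"  # resolve libcudart against the v3.0 libs
echo "$LD_LIBRARY_PATH"
```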

Right now I'm about 5 minutes into my first GPU work unit and at 4%. :D GPU utilization is running about 95% and the CPU program is running close to 345%.

I'm cautiously optimistic I have this working.
 
Check to make sure it's worth it. I've found with some setups you net 0% gain from the GPU client.
 
Do you mean that the CPU cycles sacrificed to keep the GPU well fed just offset the gains from the GPU? That was something I wrestled with when running SETI on the GPU and Rosetta on the CPU. With the stock kernel, the only way to keep the GPU busy was to restrict Rosetta to using only 3 of 4 cores. I switched to a real-time kernel and upped to two instances of SETI on the GPU. That kept the GPU relatively well fed and I could run Rosetta on all 4 CPU cores, albeit at a 15% drop in CPU throughput due to the extra overhead of the RT kernel.

My initial testing with FAH on both GPU and CPU is also with the RT kernel. I'm pleasantly surprised to find that a single instance on the GPU keeps it at about 95% utilization (GTX 460) while the CPU task runs at 340-380% CPU utilization. The GPU app uses 6-8% CPU and X uses about 10%.

The GPU is processing about 4-5 times as fast as the CPU on % completed. (I don't know if that translates into 4-5 times as many points.)

If you mean something else by zero gain, let me know!

thanks,
hank
 
It doesn't make any difference how many % cpu or gpu you're using. How many ppd does it make on the smp client running alone compared to running smp + gpu? That's all that matters. Optimize ppd, not % gpu and cpu. On your AMD x4, I imagine you will make more ppd running both since the 460 is going to make a lot more ppd than the x4.
You need a monitoring program (HFM) or a ppd calculator.
 

I just reference % utilization to indicate that both CPU and GPU are at high utilization. I've been comparing the time-to-progress figures from the log file to get some idea of throughput, though I don't know if 1% on a CPU WU produces the same points as 1% on a GPU WU. At this point all I know is that the GPU crunches WUs a lot faster:

GPU: 10% progress in 21:21
CPU: 10% progress in 1:06:50

I tried HFM and it crashes spectacularly. I wrote some .NET code and found it hard to get to run on hosts other than the one on which I wrote it (all running the same version of Ubuntu) so I am not surprised. I'm happy to tail the log files and make sure the system is producing results (and successfully uploading them.)

I'd be interested in a tool that can track by passkey and is easily accessible on Linux.

thanks,
hank
 
Time per percent (Time Per Frame, TPF) means nothing unless you are comparing the TPF on the same WU or that WU's close relatives. They are all different. One SMP WU that takes 21:00 per frame produces 63,000 ppd, another SMP WU with a 5:00 TPF produces 9500 ppd, while a GPU WU that takes 40 seconds produces 9000 ppd. You need a monitoring program.
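For a back-of-envelope conversion from TPF to PPD, assuming a 100-frame WU and no bonus scheme (real FAH points schemes can differ): ppd = credit * 86400 / (tpf_seconds * 100). The 417-point / 40-second figures below are illustrative, chosen to land near the ~9000 ppd GPU example above:

```shell
# ppd CREDIT TPF_SECONDS -> estimated points per day
ppd() {
  credit=$1; tpf_s=$2
  # 86400 seconds per day, 100 frames per WU assumed
  awk -v c="$credit" -v t="$tpf_s" 'BEGIN { printf "%.0f\n", c * 86400 / (t * 100) }'
}
ppd 417 40
```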

HFM is the best out there. It has literally thousands of users and an excellent support base. It runs fine on every flavor of Windows I've tried it on. It is more difficult to get it running on Mono on Linux, but it can be done, and several teammates have it running that way. If you need help with it, there are plenty of us around who are willing and ready to help.

AFAIK, there is no tool that tracks by passkey. They all track by user name. You can look at the Stanford stats and search by passkey and see the points and different usernames associated with the passkey.
 
The links work for linuxfah.info. I'm not sure why you couldn't download the wrappers. Also, if you followed the "Wrapper Save Location" section, it should create symbolic links for the cudart.dll files (e.g. cudart32_30_14.dll).

Another page that someone added recently is http://linuxfah.info/index.php?title=Folding@Home_GPU3_on_Fedora13_x86-64 . I don't know if that one is any better.

If you don't mind, though, can you tell me what is wrong with the first page? I'd like to update it to make sure it works. I haven't seen anyone say it doesn't work, so I haven't changed anything. But if it doesn't work, then I need to update it.


As for HFM: just make sure you have Mono 2.6 or later and it should work like a charm. I've had it running since the start :).
 