
The landscape sure has changed


Roisen
Member · Joined Feb 13, 2007 · Location: Folding in Ames, IA

I've been away for a while. When I left some guy over on the lunatics forum was hacking together the first GPU exe, and the V8 exe had just been released.

Since then it looks like we've gotten an official gpu client which most of my questions are about.

  • Are there special GPU WUs or do they run the regular CPU WUs?
  • Does each GPU require a dedicated CPU core to feed it?
  • I don't seem to be getting any GPU work (assuming they have special WUs). Is this common?
  • Do I want to use the killVLAR optimized GPU exe?
  • What's a VLAR?
 
Wow, you have been gone a while. I'll try to answer these to the best of my knowledge.

Are there special GPU WUs or do they run the regular CPU WUs?
Sort of. They're basically the same work, but the client assigns each unit to either the CPU app or the GPU app (I think). There is a rescheduler program you can use to move units back and forth, for example if you run out of GPU work.
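If you're curious what that program is actually shuffling, here's a minimal read-only sketch in Python that counts how many queued tasks are marked for the CPU app versus the CUDA app. The client_state.xml path and the <plan_class> tag are assumptions about your BOINC install (the real rescheduler edits this file, so always stop BOINC and back it up before touching it):

Code:
import xml.etree.ElementTree as ET

# Assumption: default Windows data directory -- adjust for your machine.
STATE_FILE = r"C:\ProgramData\BOINC\client_state.xml"

tree = ET.parse(STATE_FILE)
cpu, gpu = 0, 0
for result in tree.getroot().iter("result"):
    # Assumption: GPU tasks carry a plan_class containing "cuda".
    plan = result.findtext("plan_class", default="")
    if "cuda" in plan.lower():
        gpu += 1
    else:
        cpu += 1

print("queued for CPU:", cpu, "| queued for GPU:", gpu)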

Does each GPU require a dedicated CPU core to feed it?
It depends. I use one core to feed my 9800 GX2, which is technically two cards. A lot of people just leave one core free to feed however many cards they have. As far as I can tell, this seems to be the best way.
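If you don't want to fiddle with it per card, the simplest approach is just telling BOINC to use one core fewer than you have via the "use at most X% of the processors" computing preference (I'm quoting that wording from memory, so check your BOINC Manager). Quick sketch of the percentage math:

Code:
import os

# Reserve one core to feed the GPU(s); hand the rest to the CPU app.
total_cores = os.cpu_count() or 4    # fall back to 4 if detection fails
reserved = 1

pct = (total_cores - reserved) * 100.0 / total_cores
print("Set CPU usage to about %d%% to leave %d core free" % (pct, reserved))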

I don't seem to be getting any GPU work (assuming they have special WUs). Is this common?
SETI only does GPU work on CUDA-enabled cards (newer Nvidia cards only). Do you have a CUDA card? If so, when you first open BOINC Manager and go to the Messages tab, does it say something like:
NVIDIA GPU 0: GeForce 9800 GX2 (driver version 19745, CUDA version 3000, compute capability 1.1, 512MB, 448 GFLOPS peak)
NVIDIA GPU 1: GeForce 9800 GX2 (driver version 19745, CUDA version 3000, compute capability 1.1, 512MB, 448 GFLOPS peak)
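If you'd rather check from a script than eyeball the Messages tab, those detection lines are easy to scrape out of BOINC's event log. A quick sketch, assuming the log file is stdoutdae.txt in the BOINC data directory (both the name and location are from memory) and the line format matches the examples above:

Code:
import re

LOG_FILE = r"C:\ProgramData\BOINC\stdoutdae.txt"   # adjust for your install
pattern = re.compile(r"NVIDIA GPU (\d+): (.+?) \(.*compute capability ([\d.]+)")

found = False
with open(LOG_FILE, errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            found = True
            index, name, cc = match.groups()
            print("GPU %s: %s, compute capability %s" % (index, name, cc))

if not found:
    print("No CUDA cards reported -- check the Nvidia driver and BOINC version.")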


Do I want to use the killVLAR optimized GPU exe?
Yes. VLARs can take forever on a GPU but run at normal speed on a CPU.

What's a VLAR?
It's Very Low Angle Range work units. There is a thread explaining how they slow down your GPU here.
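For a rough sense of the numbers: the cutoff people use for VLAR is an angle range somewhere around 0.12, but that figure is from memory, so treat it as approximate. A tiny sketch, assuming you can already read a unit's angle range out of its header:

Code:
# Approximate cutoff remembered from the lunatics threads -- not gospel.
VLAR_LIMIT = 0.12

def is_vlar(angle_range):
    """True if the unit is Very Low Angle Range and will crawl on CUDA."""
    return angle_range < VLAR_LIMIT

print(is_vlar(0.05))   # True  -> leave it on the CPU
print(is_vlar(0.44))   # False -> fine for the GPU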
 
Thanks for getting me back up to speed.

After running the rescheduler I was finally able to get my GPUs working on something. It looks like the two of them together use 12% of my CPU in total, so I'm going to go ahead and leave a core free for them.

So if VHARs and VLARs are terrible for GPUs, does that mean the middle angle ranges are the best? Is there some way I can tweak the rescheduler to give these middle units priority on my GPUs? For example, given a selection of units, it seems best to give my GPUs the units that fall in the middle angle ranges while my CPU works on the extreme angle ranges, with the GPUs working from the middle out and the CPU from the outside in.
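Something like this is what I'm picturing. Just a sketch with made-up unit names and approximate VLAR/VHAR cutoffs, assuming the angle range can be read from each unit's header:

Code:
# Made-up units as (name, angle range); cutoffs are approximate.
VLAR_LIMIT, VHAR_LIMIT = 0.12, 1.127
MIDPOINT = (VLAR_LIMIT + VHAR_LIMIT) / 2.0

units = [("wu_a", 0.03), ("wu_b", 0.44), ("wu_c", 0.71), ("wu_d", 1.9), ("wu_e", 0.10)]

# Split: mid-range angle ranges go to the GPUs, extremes go to the CPU.
mid = [u for u in units if VLAR_LIMIT <= u[1] <= VHAR_LIMIT]
extreme = [u for u in units if u not in mid]

# GPUs work from the middle of the band outward...
gpu_queue = sorted(mid, key=lambda u: abs(u[1] - MIDPOINT))
# ...while the CPU works from the outside in.
cpu_queue = sorted(extreme, key=lambda u: abs(u[1] - MIDPOINT), reverse=True)

print("GPU order:", [name for name, ar in gpu_queue])
print("CPU order:", [name for name, ar in cpu_queue])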

Eh, maybe I'm asking for too much.

Ah, I see that there is a config.xml that comes with the rescheduler, and it looks like it has options for what I described above. But I've gotten a good supply of units now and it looks like I'll be able to keep my CPU fed on VHARs and VLARs exclusively anyway, so the point is moot.
 
I run the vlarkill and the rescheduler. Sometimes I forget to run the rescheduler, so the vlarkill takes care of anything I miss. The VHARs and VLARs are usually enough to keep my one CPU core fed. As far as optimal ranges go, I haven't heard of anyone getting huge gains from narrowing down the range for the GPU app. I'm sure if it mattered much, someone would have posted about it on the SETI boards by now.
 
Should I overclock my GPU's shader or core clock? Will overclocking my GPU's RAM help at all? What about my system RAM?
 
From what I've seen, shader clocks have the biggest impact. AFAIK, when GPU crunching first came out, core and RAM clocks both had small impacts compared to the shader clock. This is probably still the case, but I haven't tested it in a while. If you're crunching on GPUs, I doubt system RAM speed will have any effect. It should help a little for CPU crunching, but not as much as processor overclocks. Don't sacrifice CPU speed for RAM speed.
 