Windows or Linux?
04-23-11, 11:10 AM
Has anyone noticed a difference with Rosetta and either OS?
I know F@H runs better on Linux and was wondering whether the same holds for Rosetta.
Does one fold faster than the other, or is the difference negligible?
I want to squeeze as much RAC as possible out of these systems and was wondering whether it would be worth the trouble to convert some of them to Linux... :shrug:
I'm actually testing that theory now. It will take about another two weeks to see where it ends up.
Using a 1090T at 3.6 GHz with 8 GB of memory, I was at about 2700 RAC running Slackware for a month straight. That just didn't seem right to me.
I agree. I was getting over 2200 on an 820 at 3.5.
I don't know about the difference between RAC for Linux vs. Windows. I tried measuring that for SETI at one point, but I had trouble getting work units and bagged that testing.
I think the folks using F@H have some standard benchmark WUs that they use to test throughput. I've not looked for that for Rosetta.
At present I'm trying different Linux kernels (different task schedulers) to find one with the best balance between CPU and GPU tasks. The standard kernel that ships with Ubuntu gives the best Rosetta throughput but keeps GPU utilization at about 15% (on GPUGRID). If I restrict Rosetta to three cores, I get 90%+ GPU utilization, but I'd predict a 25% drop in Rosetta throughput. I ran the RT (real-time) kernel for a couple of weeks; it keeps the GPU busier but costs about a 15% drop in Rosetta RAC. I just got the preempt-bfs kernel working and I'm going to run it for a while to see how it impacts RAC for both SETI/GPU and Rosetta.
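For anyone wanting to try the "restrict Rosetta to three cores" experiment: one way to do it in BOINC is a cc_config.xml in the BOINC data directory. This is just a sketch; <ncpus> overrides the number of CPUs BOINC thinks it has, so on a six-core 1090T setting it to 3 leaves three cores free for the GPU feeder and everything else. (You can get a similar effect from the "use at most X% of processors" computing preference.)

```xml
<!-- cc_config.xml in the BOINC data directory -->
<cc_config>
  <options>
    <!-- Tell BOINC to use only 3 CPUs, leaving the rest for GPU tasks, X, etc. -->
    <ncpus>3</ncpus>
  </options>
</cc_config>
```

Restart the BOINC client (or use "Read config file" in the manager) for the change to take effect.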
I've just switched from GPUGRID to a SETI GPU cruncher. GPUGRID tended to hold a steady GPU utilization for an entire WU, so it was easy to evaluate throughput. The SETI GPU tasks are all over the place: at the moment utilization is 14%, but I've seen it jump to 80%.
I would expect a difference between Windows and Linux based on OS overhead (i.e. how many CPU cycles are left over after the OS does its work) and how well the scheduler does at getting those available cycles to the Rosetta tasks. Things that run in the background, such as antivirus, would also impact throughput. On my Linux box, a busy application on the desktop can cause X to use significant cycles, and likewise a web page with Flash can waste CPU. At the moment Firefox is using 25% of a core and Flash an additional 4%.
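If you want to see where those background cycles are going on your own box (the Firefox/Flash numbers above are the kind of thing this shows), a quick one-liner using the standard ps from procps works:

```shell
# List the top CPU-consuming processes: percent of one core, then command name.
# --sort=-pcpu is GNU procps syntax (standard on Linux distros).
ps -eo pcpu,comm --sort=-pcpu | head -n 10
```

Anything hovering near the top of that list besides your BOINC science apps is stealing cycles from your RAC.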
vBulletin® v3.8.7, Copyright ©2000-2013, vBulletin Solutions, Inc.