
Graphics Cards and RAC


Sir-Epix - Member - Joined Jul 4, 2001 - Lansing, MI
I was just wondering what the best bang for the buck is in graphics cards. I just put in an 8600GTS with 32 stream processing cores in my X2 and that yielded an additional 1,000-1,200 RAC. I was just wondering what the approximate RAC is for a GTX 260 and a GTX 285.

Would 2 260 > 1 285? I know they are close in price, just wanted to know what would yield the best results. Let me know what you think, guys!
 
I'd go with a pair of 260s with 216 shaders instead of a single 285 if you can swing it. The GTX 2xx cards give the best bang for the buck, but they cost the most as well. You can still get decent RAC out of the 9800GTX-based cards for decent money.
 
Definitely go for the dual 216's if those are your only options. Another good choice is some used 9800GX2's that pop up in the classifieds.
 
Well, after looking through those rough numbers you posted and :bang head: for an hour, I can't get any kind of linear relationship out of them. I tried shaders, core speeds, both, adding in RAM, throwing in a fudge factor for RAM - it's just not linear. :(

Obviously there's more at work there and I would assume it's the core, with later cores being intrinsically more efficient than earlier ones. Makes it hard to predict the value of new cards, though. :-/


If I've got it right the RAC for the cards listed falls roughly like this:
8800GS ~1500 (~$55)
8800GT ~2900 (~$90)
GTX285 ~7000 :eek:

(Different post - same thread)
As a very rough calculation for the 8xxx (and probably 9xxx as well), pixel shader count * core clock / 4, ±10%, gets you in the ballpark for RAC. That's the best I could do with a formula and it's really not very good. The GTX cards end up with a 3 divisor instead of 4 - again, a very, very rough calculation ...
http://www.ocforums.com/showthread.php?t=602066
That's some info from the "Bang for buck CUDA" thread, which I finally found. No hard data in there about the cards here though.


The RAC calculation is probably near the same as the other GTX cards:
pixel shader count * core clock / 3 (±10%)

Can anyone confirm/deny for these newer cards??? Always looking for info ...! ;)
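If anyone wants to play with it, here's a quick Python sketch of that rule of thumb. The function name is just made up for illustration, and the card numbers plugged in are the ones that come up later in this thread:

Code:
# Rough RAC ballpark from the rule of thumb above:
#   pixel shader count * core clock / divisor, +/-10%
# (divisor of 4 for the 8xxx/9xxx cards, 3 for the GTX 2xx cards)
def estimate_rac(pixel_shaders, core_clock_mhz, divisor=4):
    estimate = pixel_shaders * core_clock_mhz / divisor
    # Return the +/-10% fudge factor as a (low, high) range
    return estimate * 0.9, estimate * 1.1

# 8600 GTS @ 708 MHz, counted as 8 shaders -> roughly (1274, 1558)
print(estimate_rac(8, 708, divisor=4))

# GTX 285 @ 648 MHz, counted as 32 shaders -> roughly (6221, 7603)
print(estimate_rac(32, 648, divisor=3))

Treat the output as a ballpark only - as noted above, the cards don't scale 1:1 with shader count.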
 
If you are considering the GTX 260's, might I recommend the GTX 260 OC2. For just a few bucks more each, they produced about 5,500 RAC each on a Pentium D I had @ 3.6GHz. Mind you, all I did was CUDA on this box.

I am going to pull that card out of that old box and add it to my i7, which has no CUDA device, as soon as I take the time to do it. Seems the OC2 version of that card was about 30 bucks more than the standard GTX 260.

http://www.bfgtech.com/bfgrgtx260896oc2e.aspx
 
Would 2 260 > 1 285? I know they are close in price, just wanted to know what would yield the best results. Let me know what you think, guys!

2 - 260 216's = 432 SP's
1 - 285 = 240 SP's

The only way the single 285 could match the 260's output is by overclocking it roughly 80% over stock speeds (432 / 240 = 1.8), and that's not likely.
 
2 - 260 216's = 432 SP's
1 - 285 = 240 SP's

The only way the single 285 could match the 260's output is by overclocking it roughly 80% over stock speeds. And that's not likely.

That's why the 9800GX2 is such a good Bang for the Buck. It has 256 SP's, 128 per core. Overclocks quite well too, if you can take the heat.

QuietIce, SP's are stream processors on the video card.
 
That's why the 9800GX2 is such a good Bang for the Buck. It has 256 SP's, 128 per core. Overclocks quite well too, if you can take the heat.

QuietIce, SP's are stream processors on the video card.


The heat output from the 9800GX2 is what kills it for me. If it's in a single-card system it can be manageable, but throw it in with another card or two and there is no way to keep it cool. The way it vents its heat is just too awkward to deal with. I definitely wouldn't consider the GTX 285 the best bang-for-the-buck CUDA card for SETI crunching. In my book, the 260 216 is running away with it, and if the 275's come down in price a little more they will become the perfect SETI card... for a while anyways :) The 275's utilize 240 SP's (Stream Processors ;)) as well.
 
!*lightbulb*!

Is that the same as the pixel/shader count ...?

Yes, pixel/shader count is the same thing as stream processors.

FYI...I don't think your equation for RAC output is correct, because if I do the math mine would look like this:

32 * 708 / 3 = 7552 RAC, and I am currently getting about 1,200 RAC.
 
Ah! Then they're not the same thing. According to the nVidia Wikipedia entry the 8600 GTS has only 8 pixel shaders. http://en.wikipedia.org/wiki/Comparison_of_NVIDIA_Graphics_Processing_Units

The equation is slightly different for the 8xxx. The 3 divisor is for the newer GTX cards but the older cards need a 4 divisor (as quoted in post #4). So, your 8600 GTS @ 708 MHz would be: 8 * 708 / 4 = 1416 ±10% = 1274-1558 RAC.

The low end of that is a little over what you're getting but it assumes a dedicated rig running 24/7 with no other programs taking up video time. The 10% variance was added not to indicate a range for a specific card but as a fudge factor for the series because the cards don't scale 1:1 with a change in pixel shaders.

But like I said, it is a very rough calculation. At the time I did that, it was better than nothing at all since we had almost no real-world data for a guide. Looking at your numbers it seems to hold up, more or less, but I am curious about it being even that far off. Is yours a dedicated rig or is it your daily driver? It wouldn't take much usage per day to put it inside that range, albeit on the low side.
If you are considering the GTX 260's, might I recommend the GTX 260 OC2. For just a few bucks more each, they produced about 5,500 RAC each on a Pentium D I had @ 3.6GHz. Mind you, all I did was CUDA on this box.
[snip] [/snip]
http://www.bfgtech.com/bfgrgtx260896oc2e.aspx
BFG GTX 260OC2
28 * 630 / 3 = 5880 ±10% = 5292-6468 RAC for a dedicated rig - that checks.

GTX 285
32 * 648 / 3 = 6912 ±10% = 6221-7603 RAC (and from the looks of things, probably toward the low side of that range).



One thing I've noted in all these RAC discussions (CUDA or not) is that no one seems to take into account the usage factor. If you're using your rig to play games 8 hours a week, that's about 5% of the time and, depending on the game and BOINC settings, may drop your expected RAC by the same amount. Even light browsing will take up some time, though very little compared to a high-end game.

I ran into that issue one weekend while I was doing a lot of video converting from *.avi to *.mpg. I ran, I don't know, 16-20 hours of conversions that weekend (an old Opty rig) and the RAC for that machine dropped over 10% for the week! :eek: Took a while for me to figure it out, but that was the only change in usage I had ...
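To put a rough number on that usage factor, here's a small sketch (the function name is made up, and the assumption that lost crunching time scales about 1:1 with the hours the GPU is busy doing something else is mine):

Code:
HOURS_PER_WEEK = 24 * 7  # 168

def adjusted_rac(dedicated_rac, busy_hours_per_week):
    # Assume RAC drops roughly in proportion to the time the GPU
    # spends on games, video conversion, browsing, etc.
    uptime_fraction = 1 - busy_hours_per_week / HOURS_PER_WEEK
    return dedicated_rac * uptime_fraction

# 8 hours of gaming a week is about 5% of the week,
# so a ~7000 RAC card drops to roughly 6670.
print(adjusted_rac(7000, 8))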
 
It was dedicated, but lately I have been a Hulu addict... and I know my RAC took a hit from that... those numbers are accurate, then! For the Gauntlet, though, I will stop my Hulu addiction!

 
[snip] [/snip]


Unfortunately you're mixing up the meaning of Stream Processor again. More info is available in the video card forum, but here's a brief breakdown.

Video cards used to have pixel shaders (PS), texture mapping units (TMUs), and render output units (ROPs). Different games benefited from differing levels of each. Basically the more the better, but the correct balance was important.
Eventually, they added in vertex shaders (VS) to offload some of the calculations, and again some games benefited.

In the wiki link, they list VS:PS:TMU:ROP.
However, in the new-generation video cards, they did away with individual VS's and PS's. Instead, they made many generic "programmable shaders" that can do the vertex, pixel, and geometry calculations. The wiki link calls these unified shaders. But a unified shader = programmable shader = stream processor.

In the wiki link you provided, it states that the 8600GTS has 32:16:8.
That means it has 32 unified shaders (VS:GS:PS), 16 TMU's, 8 ROP's.

The unified shader is what's used to do the calculations in SETI, so you need to rework your formula to account for them and not the ROP's that you're currently using.

For example, the 9600GSO has 96:48:16, the 9800GT has 112:56:16, and the 9800GTX has 128:64:16. They all have the same ROP's (16), but the performance goes up with the number of unified shaders.
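If it helps keep the wiki's notation straight, here's a little lookup with the counts quoted above (purely illustrative; the card names and numbers are just the ones listed in this post):

Code:
# The wiki's "unified shaders : TMUs : ROPs" core config for the cards
# mentioned above. The first number is the stream processor count -
# that's what the CUDA app actually runs on.
cards = {
    "8600 GTS": (32, 16, 8),
    "9600 GSO": (96, 48, 16),
    "9800 GT":  (112, 56, 16),
    "9800 GTX": (128, 64, 16),
}

for name, (shaders, tmus, rops) in cards.items():
    print(f"{name}: {shaders} stream processors, {tmus} TMUs, {rops} ROPs")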
 
I'd love to narrow it down more. Unfortunately, I still have the same limitation as the first thread started about this - a lack of empirical data to use as a baseline. Until there's more information available to compare, it's useless to even talk about changing the formula.

You listed three cards there and stated the performance goes up with the number of unified shaders. What are the average RAC's for those cards in a dedicated rig ...???
 
You listed three cards there and stated the performance goes up with the number of unified shaders. What are the average RAC's for those cards in a dedicated rig ...???

No need to reinvent the wheel here. The GPU Folding guys have databases for their Points per day.

9600GSO = ~3000
9800GT = ~4700
9800GTX = ~5600

Of course the clock speeds and core architecture play a role too.

If we accept that the 9800GT stock clocked is ~3000 RAC, then we can list the other cards above or below that, relatively speaking.

Reference. http://sites.google.com/site/techtdsn/folding-home/f-h-gpu-ppd-database
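Just to show what that relative scaling looks like in practice - this is a sketch only, with my own assumption baked in that RAC moves roughly in proportion to folding PPD, using the ~3000 RAC stock 9800GT quoted above as the baseline:

Code:
# Scale the folding PPD numbers to RAC, using the 9800GT (~3000 RAC
# at stock clocks) as the baseline.
folding_ppd = {"9600GSO": 3000, "9800GT": 4700, "9800GTX": 5600}
baseline_card, baseline_rac = "9800GT", 3000

for card, ppd in folding_ppd.items():
    est = baseline_rac * ppd / folding_ppd[baseline_card]
    print(f"{card}: ~{est:.0f} RAC (rough guess)")

As noted above, clock speeds and core architecture play a role too, so take these as rough relative placements rather than predictions.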
 