
Nvidia logic-Processor Cores


Kohta

Member
Joined
Jan 24, 2011
Location
Zebulon, North Carolina
I broke down how ATi/AMD uses its cores versus Nvidia a little while back, but here is what's killing my brain.

I recently got a ZOTAC GTX 560 Ti overclocked to 950/1100/1900. Running The Witcher 2 with everything maxed out at 1920x1080 without Uber Sampling, I'm getting around 45-55 fps. From what I've read, people with GTX 580s and 6970s are getting nearly the same frame rates, so I set out to figure out why, excluding poor optimization for the moment. The 560 Ti I have has a serious OC going for it with 384 cores, but the 580 has 512 cores. Where I'm lost is why my frame rates are only a few fps behind that card given such a large price margin.

How about another game, Final Fantasy XIV? The 580 struggles with this game just as much as the 560 Ti. About the only leap it has is that you can push the AA up one more notch for the same frame rates, at the risk of more dips, and beyond 4x AA at 1920x1080 there isn't much difference.

Let's look at this on paper for a moment:

GPU                      GeForce GTX 580 (Fermi)    GeForce GTX 560 Ti (Fermi, OC)
Core Clock               772MHz                     950MHz
Shader Clock             1544MHz                    1900MHz
Effective Memory Clock   4008MHz                    4400MHz
Stream Processors        512 Processor Cores        384 Processor Cores

Breaking this down: the core clock is faster, meaning more textures and pixels rendered per second than the 580; the memory clock is faster, meaning more bandwidth between the GPU, the other chipsets, and the RAM; and the shader clock is higher, meaning each of those individual cores is running faster than on the 580. In a demanding game that is largely GPU dependent, such as The Witcher 2, the card with speed should come out on top since the scene is rendered in real time: faster = better.

Let's break it down to what might make sense if that were how it really works: 1900MHz x 384 cores = 729,600 "total processing power", while the 580 would be 1544MHz x 512 cores = 790,528. The first flaw is that under no circumstance can you simply total up all the available cores on a CPU or a GPU to get its total speed. The second is that, if this were the case, the 580 would be a complete rip-off for a $340 difference, since it would only be a ~8% increase in performance. Sadly, that's actually about the increase seen in the game, and the game pushes GPU usage to 99%, leaving me to believe there is some truth to it.
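Put as a quick Python sketch (this is just the naive shader-clock-times-cores metric from above, using the clocks in the spec list, not a real measure of GPU throughput):

```python
# Naive "total processing power" = shader clock (MHz) x core count.
cards = {
    "GTX 560 Ti (OC)": {"shader_clock_mhz": 1900, "cores": 384},
    "GTX 580":         {"shader_clock_mhz": 1544, "cores": 512},
}

totals = {name: c["shader_clock_mhz"] * c["cores"] for name, c in cards.items()}
for name, total in totals.items():
    print(f"{name}: {total:,}")                      # 729,600 vs 790,528

advantage = totals["GTX 580"] / totals["GTX 560 Ti (OC)"] - 1
print(f"GTX 580 advantage: {advantage * 100:.1f}%")  # roughly 8%
```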

The cores do make up the total power, and it also has a lot to do with the amount of total bandwidth available, since more cores means more processing in and out. But about the only thing you are going to get out of more cores is more AA and AF, which are the only things largely dependent on memory bandwidth.

The ultimate question is: why spend $300 more on a video card with slower clocks when the extra cores hardly make up the difference?

I want to step aside to the ATi/AMD side for a moment. The ATi/AMD cards use a completely different instruction system: they use fewer cores, but each core is broken down into floating-point units, and each of those units has a certain job, though the work still gets done relative to the clock. When you break it down this far, it doesn't matter whether it's Nvidia or ATi; 010101 is 010101 no matter what brand you process it on.

There are 5 floating-point units for every core. Take the 6870, a $200 card with 1120 stream processor units, and strip away the floating-point grouping to see how many cores we have: with 5 instruction slots per core, that's 1120 / 5 = 224 cores. According to this theory the 6870 should be slower than the 560 Ti; in fact, the 6970 should be slower than the 560 Ti too, since its total power falls around 4% "slower" using this method. Since that's not accurate, it makes it difficult to compare ATi and Nvidia... or does it?
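As a sketch, that divide-by-five conversion looks like this (the 1120 count is the 6870's from above; treating every slot as equally usable is the big assumption):

```python
# Collapse AMD's VLIW5 stream processors into Nvidia-style "cores"
# by grouping them 5 to a core.
def effective_cores(stream_processors, slots_per_core=5):
    return stream_processors // slots_per_core

print(effective_cores(1120))        # Radeon HD 6870: 1120 / 5 = 224
print(effective_cores(1120) < 384)  # fewer "cores" than the 560 Ti -> True
```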

This is where it gets interesting. The memory clock on the ATi cards is a lot higher, giving them leeway for more bandwidth where they don't have as many cores. Which means, since The Witcher forces the same AA on all cards, the 580 should pull ahead, but only by a slight margin, a difference of about 5%~8%. But let's back up to that price point: the GTX 580 costs a whole $200+ more.
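A rough peak-bandwidth check on that point (the 4008MHz and 4400MHz effective clocks are from the spec list above; the 384-bit/256-bit bus widths and the 6970's 5500MHz effective clock are my assumptions, so treat the numbers as ballpark):

```python
# Peak memory bandwidth = effective transfer rate x bytes moved per transfer.
def bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(f"GTX 580:         {bandwidth_gbs(4008, 384):.1f} GB/s")  # ~192.4
print(f"GTX 560 Ti (OC): {bandwidth_gbs(4400, 256):.1f} GB/s")  # ~140.8
print(f"Radeon HD 6970:  {bandwidth_gbs(5500, 256):.1f} GB/s")  # ~176.0
```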

I'm not writing this to knock one brand or another, but I finally have a game that is GPU intensive, and we're in an era where every card being shipped is brought to its knees by this game, so I wanted to take the opportunity to understand what exactly makes one better than the other.

The Nvidia cores are powerful; each one can provide a lot of resources quickly. The ATi/AMD series cards rely on floating-point units instead, and that is both good and bad: there will be a lot of untapped potential in those cards, since Nvidia has its fingers in most game developers' pies.

With that raw information about the game provided, I have come to the mind-boggling question: why spend $200-$300 more for a gain you will never see? I'm looking for more clarity. I'm trying my best to justify why people recommend the GTX 580 so strongly when the alternatives are a pair of 6870s for a $50 difference, a pair of overclocked 560 Tis for the same price but more power, or, on the extreme end, a 6970 with a tiny bit less performance for a $250 price gap. The reason I ask is that several people have told me to send the 560 Ti back and get a GTX 580 instead of putting two 560 Tis in SLI.

1st - Nvidia GTX 580 - $500 Avg: 58 fps
2nd - Radeon HD 6970 - $350 Avg: 55 fps
3rd - Nvidia GTX 560 Ti (OC) - $240 Avg: 47 fps
4th - Radeon HD 6870 (OC) - $200 Avg: 45 fps

Thoughts -
Nvidia GTX 560 Ti (OC) SLi - $500 Avg: 70 fps (or better)
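Running the dollars-per-frame on the list above (the SLI line is my own estimate, so take it loosely):

```python
# Price divided by average fps for each card listed above.
results = {
    "GTX 580":             (500, 58),
    "Radeon HD 6970":      (350, 55),
    "GTX 560 Ti (OC)":     (240, 47),
    "Radeon HD 6870 (OC)": (200, 45),
    "GTX 560 Ti (OC) SLI": (500, 70),   # estimated average
}

for name, (price, fps) in results.items():
    print(f"{name:21s} ${price / fps:.2f} per fps")
# The 6870 and the single 560 Ti lead on value; the 580 costs the most per frame.
```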
 

It was a good read, but you aren't taking into account things like running into a CPU bottleneck, which could be a reason why a GTX 580 gets the same numbers as a 560 Ti. Secondly, about your SLI estimate above: some games don't scale with more than one GPU, so SLI might be a waste. Just some constructive criticism.
 

Those are actually good points; I didn't consider them. I was assuming 100% GPU scaling and 100% GPU-dependent gaming, which is unlikely. It doesn't apply to the real world as much, but theoretically that's about where it lands.
 

The Heaven benchmark is pretty much 100% GPU; CPU speed nets very little gain, pretty much 0%, so you could redo any tests with it and that would help rule out any possible CPU bottlenecking. As for the multi-GPU scaling issue, I'm not sure how Heaven handles it. Hope this helps some more.
 