
CPUs stand still - GPUs move forward


Kohta
Member, joined Jan 24, 2011, Zebulon, North Carolina
Let's take a look at the last 7 or 8 years. Back in 2004-2005 we started seeing some of the first factory 3.0-3.2GHz CPUs; they were extremely expensive, but they were available to the public. Now what about GPUs? The Nvidia GeForce FX 5800 released in 2003 and came with a 500MHz stock core clock.

CPUs today are still barely coming in over 3.0GHz, with 3.3-3.4GHz finally arriving, but think about it: we're still getting stock-clocked CPUs barely above the clocks of a PlayStation 3 or Xbox 360. Sure, we have 4/8/12 cores, but ultimately the CPU will bottleneck with more than one GPU, and in some cases with one GPU. Clocks are getting to the point where even the average or casual user needs to overclock their machine to keep from bottlenecking during game sessions. A server CPU should tell you everything when it comes to cores versus clocks: you can have 100 cores running at 2.8GHz and it will not be as good as a quad core running at 4.5GHz.

GPUs have steadily risen from 500MHz to 1000MHz and higher. Between 1999 and 2002, CPU clocks went up 400MHz while GPU clocks went up 100MHz; between 2003 and 2012, CPU clocks have gained 100-200MHz while GPUs have gained another 600MHz. We're already having to overclock to get the most out of high-end GPUs, so what do you think the future holds? Do you think there is a CPU around the corner that will start at 5.0GHz to offset the balance?

2004 high-end rig with balance:
AMD Athlon 64 X2 @ 3.1GHz (stock)
Nvidia GeForce FX 5800 Ultra @ 500MHz (stock)
or
Radeon 9800 XT Pro (stock)

2012 high-end rig with balance:
Intel Core i7 2600K @ 4.5GHz (overclocked)
AMD Radeon HD 7970 (stock)
or
Nvidia GeForce GTX 580 (stock)

In most cases I found diminishing returns after overclocking past 4.5GHz in actual game benchmarks with one card. With 2x GTX 580s I kept gaining FPS and higher scores up to 5.1GHz, where I could not go any further without crashing.
 
The issue with your logic is that you're not taking into consideration architectural changes in the chips.

A 2GHz Sandy Bridge dual-core chip will still probably run rings around an older LGA 775 dual core clocked at ~3GHz.

The same principle applies to GPUs. GTX 560 Tis and HD 6870s come with higher base core clocks than their higher-end counterparts (HD 6970/GTX 580).

But they have fewer cores, and perhaps a narrower memory bus (lower bit width), etc.

CPUs aren't standing still. Back in the day the CPU mattered just as much as the GPU when trying to play a game. Nowadays, any decent modern CPU will drive any GPU fine without having to run at more than about 50% (for gaming).

In the end, numbers don't really mean a lot.

You can't compare a Pentium 4 at 3GHz with HT off to a Sandy Bridge at 3GHz (single-threaded). It's just not the same architecture. Die shrinks, efficiency changes, better logic processing, different on-chip controllers... it's just entirely different.
 
Don't forget the software either. Back in the P4 days, most programs were single-threaded and could not take advantage of multi-core CPUs very well, if at all (some games required that you set the affinity to a single core for best performance, or just to be able to play at all). Nowadays, most programs can take advantage of multiple cores, though still not that efficiently. Even so, by making use of multiple cores, a program gets more done than it ever could on a single-core CPU, even if each of those cores runs at a much slower clock than the old single core did. Core speed really doesn't mean what it did back in the P4 days with current-generation technology and programming.
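As an aside, here's a minimal sketch of the kind of core pinning mentioned above, assuming a Linux box and Python's os.sched_setaffinity (the core numbers are just for illustration; on Windows people did the same thing through Task Manager's "Set Affinity"):

import os

# Pin the current process to core 0 only, the way some old
# single-threaded games had to be forced onto one core.
pid = 0  # 0 means "the calling process"

print("Before:", os.sched_getaffinity(pid))   # e.g. {0, 1, 2, 3}
os.sched_setaffinity(pid, {0})                # restrict to core 0
print("After:", os.sched_getaffinity(pid))    # {0}

# Hand all the cores back afterwards.
os.sched_setaffinity(pid, range(os.cpu_count()))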
 
A server CPU should tell you everything when it comes to cores versus clocks: you can have 100 cores running at 2.8GHz and it will not be as good as a quad core running at 4.5GHz.

100 cores * 2.8 GHz clock speed = 280,000,000,000 = 280 billion cycles a second

4 cores * 4.5 GHz clock speed = 18,000,000,000 = 18 billion cycles a second

I'm sorry, but your logic just doesn't hold up.
 
100 cores * 2.8 GHz clock speed = 280,000,000,000 = 280 billion cycles a second

4 cores * 4.5 GHz clock speed = 18,000,000,000 = 18 billion cycles a second

I'm sorry, but your logic just doesn't hold up.

Because the majority of games will only use 4-12 cycles (read: cores x clock). Like it was mentioned, software changed, but considering everything in a nutshell: if you're saying GPUs aren't pulling ahead of mainstream consumer CPUs, then you are delusional.

What I said is that it doesn't matter whether you had a 100-core CPU or a quad core; when it comes to real-time rendering in software, the faster clock will win every time, because keeping up depends on that faster clock.

So please, tell me a server CPU is much faster than a Sandy Bridge when it comes to gaming with a GPU. I dare you.
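For what it's worth, the "games only scale to a few threads" point is basically Amdahl's law. A rough back-of-the-envelope sketch in Python; the clocks match the numbers above, but the parallel fraction is invented purely for illustration, not a benchmark:

# Amdahl-style estimate: if only part of the work parallelizes,
# piles of slow cores can lose to a few fast ones.
def effective_speed(cores, clock_ghz, parallel_fraction):
    serial = 1.0 - parallel_fraction
    # Time per unit of work, relative to a 1 GHz single core.
    time = (serial + parallel_fraction / cores) / clock_ghz
    return 1.0 / time  # higher is better

# Assume a game where only ~50% of the frame work parallelizes.
print(effective_speed(cores=100, clock_ghz=2.8, parallel_fraction=0.5))  # ~5.5
print(effective_speed(cores=4,   clock_ghz=4.5, parallel_fraction=0.5))  # ~7.2

Push parallel_fraction up toward 0.95 and the 100-core box wins easily, which is why the server-CPU comparison only tells you anything about workloads that don't scale.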
 
What is this 'cycle' thing you are talking about?

Me? I'm talking about physical cores in the same way he described it, simplified: 4-12 cores, times whatever clock speed you have. It's risky to be either too specific or too vague; it all depends on who you're talking to, and in most cases I assume people can read between the lines. I admittedly stand corrected that software has a lot to do with optimization; that's something I didn't consider in the OP. But even with that said, because we don't have faster clocks, and most software is limited to 6 or fewer threads, we have to overclock in most cases to gain more FPS or higher scores in that software.

Ask yourself: do you gain more FPS in games when you overclock, yes or no?
 
Never heard cores called cycles... that is why I asked. ;)

Not trying to be standoffish, but so what? As was mentioned before, it's obvious clock speeds matter little with the advances in clock-for-clock instructions and the addition of more cores. You can see in scaling reviews that Sandy Bridge performs better than Nehalem, which performs better than Penryn at the same clocks. That IS advancement. Sorry it's not clock speed like many want it to be.
 
Not trying to be standoffish, but so what? As was mentioned before, it's obvious clock speeds matter little with the advances in clock-for-clock instructions. You can see in scaling reviews that Sandy Bridge performs better than Nehalem, which performs better than Penryn at the same clocks. That IS advancement. Sorry it's not clock speed like many want it to be.

I'm not seeing you as standoffish; this is all part of the learning experience, if you don't mind elaborating more. When you say clock-for-clock instructions, wouldn't that mean more clock speed = higher performance? It takes an OC nowadays to catch up to a GPU, and there is a certain point where you hit diminishing returns, which is where it's safe to say you're at or close to 1:1 CPU:GPU; past that, in most any application you won't gain anything from more clock speed.
 
I'm on mobile so it's a pain in the arse to write a lot :(...

Basically what I and others above said: the CPU is doing more each cycle (Hz). If you put the CPUs I mentioned head to head, more gets done with the newer architecture. So more cores and more IPC (instructions per clock) negate the need for more clocks.

Another reason you don't see higher clocks is power consumption and heat output. What normal consumer wants a massive heatsink and something louder than a stock fan for... what? Intel isn't making chips for enthusiasts like us. We are the tip of the iceberg.

And last: GPU clocks have barely gone up. Why are those in MHz and not GHz? Under your thinking you should be more pissed that GPUs aren't banging on 2GHz. ;)

Look back further to Pentium days and 90MHz.
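A really rough way to put numbers on the IPC point (the IPC figures below are invented for illustration, not measured):

# Throughput is roughly IPC * clock * cores, so a newer chip can win
# at the same or even a lower clock. IPC values here are made up.
def rough_throughput(ipc, clock_ghz, cores):
    return ipc * clock_ghz * cores  # "billions of instructions per second"

old_chip = rough_throughput(ipc=0.8, clock_ghz=3.0, cores=1)  # ~2.4
new_chip = rough_throughput(ipc=2.0, clock_ghz=3.0, cores=1)  # ~6.0
print(old_chip, new_chip)  # same 3 GHz, very different real-world speed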
 
Operating systems also play a large role in utilizing all of the cores of a CPU. The operating system is responsible for load balancing. There are two approaches to load balancing, push migration and pull migration:

In push migration the OS checks the load on each core and evenly distributes the processes across all the available cores so the wait queue has the lowest possible wait time.

In pull migration the OS is notified when a core is idle, and the next item in the wait queue is sent to that core.

So the development of higher core-count processors and smart coding from the operating system guys really helps computers run faster even though the clock speeds are not getting much faster.
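A toy sketch of the two approaches in Python, just to make the difference concrete (the per-core run queues here are simplified to the point of caricature; real schedulers juggle priorities, caches, and much more):

from collections import deque

# Toy model: each core has its own run queue of task IDs.
cores = {0: deque([1, 2, 3, 4]), 1: deque([5]), 2: deque(), 3: deque()}

def push_migration(cores):
    # The OS periodically rebalances: move tasks off the busiest
    # core onto the least-loaded one until queues are roughly even.
    while True:
        busiest = max(cores, key=lambda c: len(cores[c]))
        idlest = min(cores, key=lambda c: len(cores[c]))
        if len(cores[busiest]) - len(cores[idlest]) <= 1:
            break
        cores[idlest].append(cores[busiest].pop())

def pull_migration(cores, idle_core):
    # An idle core notifies the OS and "pulls" the next waiting
    # task from the busiest queue.
    busiest = max(cores, key=lambda c: len(cores[c]))
    if cores[busiest]:
        cores[idle_core].append(cores[busiest].popleft())

print("before:    ", cores)
pull_migration(cores, idle_core=2)   # core 2 was idle, steals a task
print("after pull:", cores)
push_migration(cores)                # or rebalance everything at once
print("after push:", cores)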
 
Operating systems also play a large role in utilizing all of the cores of a CPU. The operating system is responsible for load balancing. There are two approaches to load balancing, push migration and pull migration:

In push migration the OS checks the load on each core and evenly distributes the processes across all the available cores so the wait queue has the lowest possible wait time.

In pull migration the OS is notified when a core is idle, and the next item in the wait queue is sent to that core.

So the development of higher core-count processors and smart coding from the operating system guys really helps computers run faster even though the clock speeds are not getting much faster.

Thanks for that wicked explanation :thup: If Microsoft has so much money at their disposal, why don't they have a better OS for that type of thing? Or at least figure out how to make it as overhead-light as Linux. I mean, it's obviously possible, because I find Linux to be a fantastic OS, and they would probably reach a vast number of people by selling a Linux-type box alongside an eye-candy box. I understand Linux is free, but there have been times where I wish things were just a tad more user-friendly, like running a .exe instead of having to emulate.
 
I'm not sure it's really down to the OS; it's more of a logic thing for the cores. There are different ways to schedule tasks for the CPU, and we just haven't figured out any better ways to schedule them. I believe the logic is pretty efficient, especially the round-robin scheduling approach. I linked to some slides I found that explain the different types of scheduling approaches if you want to check them out:

http://www.google.com/url?sa=t&rct=...3pXKBQ&usg=AFQjCNGoy8DYkISFBqvJq6wXDTNO0FC24Q
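For anyone who doesn't want to click through, a bare-bones round-robin sketch in Python (the task list and time quantum are made up; a real scheduler also deals with priorities, I/O waits, and so on):

from collections import deque

def round_robin(tasks, quantum):
    # tasks: list of (name, remaining_time). Each task gets at most
    # `quantum` time units, then goes to the back of the queue.
    queue = deque(tasks)
    order = []
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        order.append((name, used))
        if remaining - used > 0:
            queue.append((name, remaining - used))
    return order

print(round_robin([("game", 5), ("browser", 2), ("av_scan", 4)], quantum=2))
# [('game', 2), ('browser', 2), ('av_scan', 2), ('game', 2), ('av_scan', 2), ('game', 1)]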
 
While clock speeds have not increased as dramatically as they used to, CPUs are able to complete more work due to better design. Instead of comparing CPU speed in MHz, go look at stuff like MFLOPS and you will see how far an i5 has come compared to a Pentium 4, even if the P4 runs at 4GHz and the i5 is only 3GHz.

And more cores might mean more raw processing power, but there needs to be work for them to do. If the code is not written to take advantage of more cores, you are not going to see a benefit. Some tasks also can't take advantage of more cores at all. If you ask "why don't they program for more cores?", well, it takes more work to get the code to do this, and more work is more money, and that's not something everyone has.

Microsoft probably has no interest in creating a more efficient OS. They seem to try to put more and more features they think people want into their software.
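To make the "code has to be written for it" point concrete, a minimal sketch using Python's multiprocessing module (the work function is a dummy stand-in, and the speedup you actually see depends on the task and how many cores you have):

import multiprocessing as mp
import time

def busy_work(n):
    # Dummy CPU-bound task standing in for real work.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 8

    start = time.perf_counter()
    serial = [busy_work(c) for c in chunks]      # runs on one core
    print("serial:  ", time.perf_counter() - start)

    start = time.perf_counter()
    with mp.Pool() as pool:                      # only faster because the code
        parallel = pool.map(busy_work, chunks)   # was explicitly written this way
    print("parallel:", time.perf_counter() - start)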
 
I think it was nailed pretty early on that while CPU clock rates haven't really gone up, IPC and architectural advances have negated the need for extreme clocks, particularly when you're looking at gaming requirements.

Without even considering OS optimizations and multi-core processors, major advancements have occurred at a per-core level. My old Pentium D 940 @ 3.9GHz managed to run a 1M SuperPi test in (IIRC) 35-40 seconds, my Q6600 @ 3.5GHz did it in 15, my 920 @ 4GHz did it in 10, and my 2600K @ 4.5GHz runs it in 8 seconds. Despite relatively minimal changes in clock speeds, and ignoring multiple cores, single-threaded performance has gone up pretty significantly over the years. Anything from the last couple of generations on the Intel side is more than capable for gaming needs for the foreseeable future, and current SB chips (especially when HT is factored in) make a significant difference for video/audio encoding times.

GPUs have certainly moved forward, but I don't think it's accurate to say that CPUs are standing still, unless you're still in the NetBurst-era "more GHz = better" mindset, which was actually farcical even at the time.
 