
CPUs soon a legacy? If so, how come?


Ivy
Member, joined Jun 7, 2011
USER B:
I mean, a CPU could do more than that, but Nvidia deliberately disables it. They're telling us that PhysX-quality physics is simply impossible on a CPU, and even if we had a million-core supercomputer they would still say the same. People should stay tuned for future CPUs, like Ivy Bridge with its new transistors; CPUs aren't sleeping. It's true that in a few games PhysX still shines, but it really is just a handful, and for most people it's barely worth it, especially if they have no interest in those games (I'm an RPG gamer, so I have little interest in shooters). I'm also not sure how efficiently the CPU path of PhysX is actually implemented, but one thing I'm sure of: CPUs can do more than they currently do, which is why the CPU has almost no impact on games anymore. Only the clock speed really matters; the architecture is almost wasted, meaning most of the CPU sits unused. I'm sure Intel will build some true monster CPUs while AMD builds monster Radeons; that's how it looks. Strong CPUs are currently underused in most games, overkill to even own. Almost any other program gains more from them than a game does.

I'm not really a fan of either side, but in the current situation I'd rather support AMD's view, because CPU and GPU have always been a team, and Nvidia is slowly trying to break up a highly efficient and powerful team in order to make their GPUs look superior. Whatever happens, they have to work together on a shared standard that developers can implement easily and that uses the CPU more effectively; it is not a useless part.

Performance-wise, in scientific terms, a current Sandy Bridge flagship can handle about 120 GFLOPS in double precision, while a 7970 manages around 950 GFLOPS in double precision (the strongest current single GPU). However, the GPUs used for PhysX usually aren't high-end, and if we run the physics on a single GPU, that GPU loses roughly 15-30% of its rendering performance**, because that is the load the CPU could have taken off it (**much more than 15% when the GPU is weaker than the current Radeon flagship). I expect CPU throughput to keep rising in the near future, because it's Intel. As long as the physics fits into roughly 100 GFLOPS it could run without the GPU, leaving the GPU with more headroom for rendering. Even better would be to share the physics so the CPU is fully utilized and the GPU takes the rest; there could simply be a slider to choose how much of the physics load is handed to the CPU, 1 to 100% (if the CPU is overloaded it will just wreck performance, the same as an overloaded GPU). Of course the CPU still has the advantage of being the "jack of all trades", while a GPU is always hard to adapt to anything else, so the main challenge is getting the two GPU camps (Radeon/GeForce) onto a shared standard for this. That kind of master software doesn't exist on this planet yet; it's somewhere on an unknown one.
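For what it's worth, here is the rough back-of-envelope arithmetic behind those two figures (my own estimate, assuming a quad-core Sandy Bridge around 3.5 GHz doing 8 double-precision FLOPs per cycle per core with AVX, and a reference-clocked HD 7970 with 1/4-rate double precision):

SB quad, AVX: 4 cores x 3.5 GHz x 8 DP FLOP/cycle ≈ 112 GFLOPS
HD 7970: 2048 shaders x 0.925 GHz x 2 FLOP/cycle x 1/4 DP rate ≈ 947 GFLOPS

So the ~120 and ~950 GFLOPS numbers are in the right ballpark as theoretical peaks; real physics code gets a good deal less out of both.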

Why isn't a CPU stronger than that? A huge number of its transistors go to cache (billions!). But we will soon reach a point where we don't need quite so much cache and can spend those transistors on raw computing performance instead. Still, an 8-core Ivy Bridge with 3D transistors: I wonder what its compute performance will be. Perhaps 150-200 GFLOPS. Some people might get a dual-CPU board, so 300-400 GFLOPS? Especially for them, an engine that can hand work over to the CPU is critical. Who knows, but the CPU is not weak, and it is far more adaptable to other tasks. Intel will take their time and climb the ladder slowly, though. Why? Because they can; there's no competition.


USER B:
I still wonder why CPUs nowadays no longer push games in any serious way, and what the real issue is behind that. I'm still not sure, but CPUs certainly aren't fully utilized, otherwise I would see a different kind of load. They affected games much more strongly up to the Core 2 Duo era, but since Nehalem and Sandy Bridge came out, that's over. So do Nehalem and SB really only win by adding new instruction sets, while otherwise being barely more powerful than their predecessors? Good question. I'm just about done buying powerful CPUs, because they simply don't benefit gamers anymore. As for the GPU, that depends entirely on detail settings, so there is no general answer. At 720p with lowest settings almost any GPU hits like a truck (that's why consoles can run games at all; their GPUs are a joke).

But the biggest joke, and I've seen it in some games, is that neither the CPU nor the GPU is fully utilized, maybe 40% load on both, and the FPS is still bad. So what can you say? Where is the issue?


USER A:
The CPU is utilized more at lower resolutions, whereas the GPU is stressed more at higher resolutions.

USER B:
Actually, even if the CPU is utilized more at lower resolutions, testers nowadays have a very hard time pushing the CPU to its limits at all. And in many cases it makes almost no difference, as you can see here: http://www.hardwaresecrets.com/artic...Review/1429/16
Ah, here it is: 1080p, but with EVERY detail disabled, no AA, no AF. Even the FX-8150, which is considered "bad", almost keeps pace with the $1,000 SB-E. That's at zero detail, so if we turn the settings up, the CPU impact drops from close to zero to true zero. You would have to use SD resolution, which is completely unrealistic; nobody uses it, it makes no sense. So what is Intel doing, and what are the devs doing, where is the issue?!

The only reason for me to get a powerful CPU is to lower the heat output. Say an SB-E runs a game at 30% load while a weaker CPU needs 60% load: even if the weaker CPU has the same or a lower TDP, it doesn't necessarily produce less heat. In many cases the powerful CPU under very light load runs cooler than a weaker one under heavy load. Heat depends heavily on load, no exception. A big CPU also has more die area to pass heat to the heatsink; my own results bore this out, and other testers with the same system had temps much worse than mine. The difference between full idle and full load can be up to 40°C (on fairly weak coolers).



USER A:
At 1080p a good video card will, under most circumstances, be the primary driver of the graphics. If you drop down to 1280x1024 or even smaller, far more of the task is handed off to the CPU, making it a much more CPU-dependent resolution than a higher one (1080p or larger). This is evident in how CPU-dependent older benchmarks like 3DMark01 and Aquamark are (although 3DMark03 is actually pretty GPU-bound, while 05 goes back to being heavily CPU-bound).


USER B:
720p without details? At that point I might as well go play on a console; it even has more detail than "lowest". And yes, I can play any classic game at 100 FPS (Crysis is not a classic yet) without the CPU being overworked. I usually cap at 60 FPS because anything above makes no sense; what those tests do is run games at an unrealistic FPS, and that is why the CPU utilization looks so high.

Note: this was partly carried over from other threads, because it was drifting too far off-topic for them.
 
For reference, I am UserA on this and you are UserB, not sure why you didn't use the names in your quotes.

I'm not really sure what you are looking for. The game/etc is only going to use as much CPU as it needs to for whatever task it is doing. If you have a dedicated sound card, a dedicated NIC, a dedicated video card, etc then the CPU isn't left with a ton of things to do (other than running the processes, background processes, and Physics/AI/other calculations that aren't currently being done by the GPU, which is dependent on the game/engine in question).
 
1. Because I was too lazy to copy the names, though I am quick at writing.
2. I hadn't asked you beforehand, so I didn't really want to use your name (when I quote others I usually ask, even within the same forum).

Ah yes? So that means we (the high-end users) have to suffer for all the non-dedicated systems out there? It's the same as losing efficiency: dedicated systems aren't fully utilized anymore. Well, I suppose I should take it easy, because my CPU is running colder than ever before. ;)
 
Ivy - Sorry, but I can't follow. Your English is confusing.

Janus - If you understand, can you restate?
 
If I understand correctly, Ivy is having low FPS issues yet only seeing his CPU utilization sit around 30-40%, so he feels there is "wasted potential": he can't reach higher FPS even though parts of the CPU go unused.
 
If I understand correctly, Ivy is having low FPS issues yet only seeing his CPU utilization sit around 30-40%, so he feels there is "wasted potential": he can't reach higher FPS even though parts of the CPU go unused.
I see. Low FPS makes me think that either the GPU doesn't have the bandwidth it needs (PCIe x4 vs x8 vs 16x) or the GPU is just not powerful enough.

I do agree that with today's powerful multi-core CPUs a lot more could be done to get them to share the work. That comes down to the game code, which sadly is often written for an older CPU and may not take advantage of multiple cores.

BF3, for example, uses all four cores, but I think only up to about 70%. I need to verify that.
 
I understand myself fine, and so do the people I know; I dunno... that's where I can't follow. ;)
The only real shortcoming is that ultimately I can't reach a final conclusion on this matter myself. But in the long run it will also hurt the people with non-dedicated hardware, because more and more of the load gets handed over to the GPU, and their GPUs are always weak.

I do not have bad FPS; I want to understand why CPUs are no longer properly utilized, which means wasted efficiency when there is unused CPU power sitting there. Did you even read all my stuff? XD :D

I had x16 lanes, and they are still not limiting at all. Besides, the card hardly needs the lanes when the CPU isn't being used; the GPU works internally in those situations.
 
I understand myself fine, and so do the people I know; I dunno... that's where I can't follow. ;)
The only real shortcoming is that ultimately I can't reach a final conclusion on this matter myself. But in the long run it will also hurt the people with non-dedicated hardware, because more and more of the load gets handed over to the GPU, and their GPUs are always weak.

I do not have bad FPS; I want to understand why CPUs are no longer properly utilized, which means wasted efficiency when there is unused CPU power sitting there. Did you even read all my stuff? XD :D

I had x16 lanes, and they are still not limiting at all. Besides, the card hardly needs the lanes when the CPU isn't being used; the GPU works internally in those situations.
I did read your post, but no offence, I had a hard time following you. In English, verb tense and conjunctions give sentences completely different meanings. Some of what you wrote makes no sense in English, so I had to guess at what you meant to say versus what you wrote. This is common when converting from a Latin/Romance-based language to English. We say things like "red car" where other languages say "car red", as an example. I've seen the same in Asian phrase structure. That's why your friends understand you: they most likely have experience with your way of writing and/or speak another language that structures phrases the way I mentioned. I think you would be best served by keeping your points simpler and asking questions as directly as possible. I don't mean this as an insult; I am just trying to help you understand why I was confused by what you wrote.

Back on topic. When I play BF3, my "first" core is about 80% used with the other three at about 50%. That leaves a lot of computing power unused, probably for a couple of reasons. One is the coding of the game. I am not sure how much can be rewritten for games to use all the available CPU cores, because I am not sure how much of the work in game code can be done in parallel. I think that is the crux of your point. Fully parallelized code runs on separate cores of the CPU. Older games only used one core, but they are getting better.
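A rough way to put a number on that (my sketch of the standard formula, not anything measured from BF3) is Amdahl's law: if only a fraction p of each frame's work can be spread across cores, the best possible speedup on n cores is

Speedup(n) = 1 / ((1 - p) + p / n)

So if, say, 70% of the work parallelizes (p = 0.7), four cores give at most about 1 / (0.3 + 0.7/4) ≈ 2.1x, and even infinitely many cores top out at 1 / 0.3 ≈ 3.3x. That would line up with seeing one core near 80% and the rest around 50%.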

I think part of the answer is also in why CPUs have multiple cores in the first place. Years ago Intel had single-core CPUs that were getting faster and faster, but they eventually hit a limit on clock speed. When that happened, they decided to add a second core to share the workload. In this way a dual-core CPU could be "faster" without actually having a higher clock speed. That multicore idea has evolved into the CPUs you see today.

Bottom line, I think the reason the CPU isn't fully utilized is really that hardware has evolved in a different direction than software. I think game software (code) will get there eventually, as fewer and fewer single- or even dual-core CPUs are around.

As far as the bus width for PCIe goes, that is a limitation on communication between the CPU and GPU. CPU and GPU internal bandwidth is much higher than can be transferred over any existing bus. Hardware manufacturers know this, so they tend not to rely on the connection and instead keep speeds up internally, because that's all they can do. In a perfect world you would have the CPU and GPU on the same die so they could talk directly; if that could be done, you could actually use slower versions of both and get the same speed. It's the same idea as bus speed being a limit in general. If as much effort had been spent on I/O bus speed as on CPU and GPU speed, we would have much faster computers, because access to all resources would be faster. No one brags about how fast their bus is, but they do about their CPU/GPU. There are just a lot of bottlenecks right now. This is why UNIX and other enterprise-level servers are still used: they have specially designed buses and peripherals that keep I/O, memory, and disk access very fast. You need that in enterprise-level applications and databases. Of course the software they run is also optimized far more closely to the hardware than games are.

I guess there is no simple answer to your question, but I do think it is an important one, because it shows the areas where improvements need to be made. The more we insist on a faster overall PC, and not just a 7 GHz CPU/GPU, the better all types of computer applications will run. Games first on the list, of course! :cool:
 
1. Because I was too lazy to copy the names, though I am quick at writing.
2. I hadn't asked you beforehand, so I didn't really want to use your name (when I quote others I usually ask, even within the same forum).

Ah yes? So that means we (the high-end users) have to suffer for all the non-dedicated systems out there? It's the same as losing efficiency: dedicated systems aren't fully utilized anymore. Well, I suppose I should take it easy, because my CPU is running colder than ever before. ;)
Suffer? Why do you believe something has to run at 100% to be efficient? (If that's the argument, I don't understand it either...)

@ Owenator - +1
 
I did read your post, but no offence, I had a hard time following you. In English, verb tense and conjunctions give sentences completely different meanings. Some of what you wrote makes no sense in English, so I had to guess at what you meant to say versus what you wrote. This is common when converting from a Latin/Romance-based language to English. We say things like "red car" where other languages say "car red", as an example. I've seen the same in Asian phrase structure. That's why your friends understand you: they most likely have experience with your way of writing and/or speak another language that structures phrases the way I mentioned. I think you would be best served by keeping your points simpler and asking questions as directly as possible. I don't mean this as an insult; I am just trying to help you understand why I was confused by what you wrote.

Indeed. Not very concise in explaining.
 
I did read your post, but no offence, I had a hard time following you. In English, verb tense and conjunctions give sentences completely different meanings. Some of what you wrote makes no sense in English.

+1 :thup: Beautifully worded. And I agree 100%.

Also, Ivy, you just aren't understanding the way architectures work these days. The CPU is still a critical component of the system. No game will hit 100% on your CPU all the time, nor would you want it to. Some games are more CPU- or GPU-intensive; FSX from 2006 still eats more CPU than any other game I own, BF3 included. Still, nothing actually hits 100% load; it simply isn't possible unless you're constantly stuffing the CPU pipeline with something designed to hold 100% load, like Prime95 blend or a burn test.

When you do hit 100% of something in real-world use, which again, as explained above, is fairly close to impossible, that thing is a bottleneck anyway, so why would you want it?

On a game console you can get closer to 100% usage of everything with an intensive game, because you have a single specific hardware target. PCs aren't like that. Your game must run on dozens of different platforms, CPUs, and GPUs, with different memory controllers where the whole system talks to its components differently. On any given system, program X may only get 80% out of the CPU, and 80% of a Core 2 Duo is less than 80% of a 2600K, and so forth. That's how it works. The architecture of, say, P35 versus X79 is complete apples to oranges, but a program must be written to run on either; in one case you're getting to and from the RAM seven or eight times faster, with that many times more bandwidth. There are infinite variables, no exact hardware target, and nothing near 100%. What things like Prime95 do is fill the CPU pipeline with constant back-to-back instructions to keep it at 100%, in a way that is unrealistic if not impossible for real-world comparison.
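To make that concrete, here is a trivial sketch (mine, and obviously nothing like what Prime95 actually computes) of the sort of dependency-free arithmetic loop a synthetic stress tool spins on every hardware thread to hold them all at 100%, something no real game workload looks like:

// cpu_burn.cpp - hold every hardware thread at ~100% with back-to-back arithmetic.
// Build: g++ -O2 -std=c++17 cpu_burn.cpp -pthread
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<bool> stop{false};

void burn() {
    volatile double x = 1.0;                   // volatile so the loop isn't optimized away
    while (!stop.load(std::memory_order_relaxed))
        x = x * 1.0000001 + 0.0000001;         // no memory stalls, no waiting: the pipeline stays full
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back(burn);
    std::printf("burning %u threads for 10 seconds...\n", n);
    std::this_thread::sleep_for(std::chrono::seconds(10));
    stop = true;
    for (auto& t : workers) t.join();
}

A real game, by contrast, spends much of each frame waiting on the GPU, on memory, on disk and on vsync, which is exactly why you never see that kind of load from it.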


Besides, the card hardly needs the lanes when the CPU isn't being used; the GPU works internally in those situations.

Bad English or not, that is simply not true. The GPU receives all its data via its PCIe link to the CPU (Intel) or the northbridge (AMD, older Intel). This isn't me being rude; it's just me telling you that you don't understand what's going on and are making false arguments. :) ...With really confusing English that I don't think anybody here is able to understand.

The only real shortcoming is that ultimately I can't reach a final conclusion on this matter myself. But in the long run it will also hurt the people with non-dedicated hardware, because more and more of the load gets handed over to the GPU, and their GPUs are always weak.

Both of those sentences are very hard to follow. From the first one I gather that you're trying to say you are unable to reach a conclusion on this matter. The second one I can't really parse at all.
 
It just means that you should use the resources you have. If it's too hard to understand, I'll get a translator; they will put it in the right format so you people understand, because not everyone is able to adapt, but that is usually the weakness of a computer program, not of a human. I can't produce the correct format, but I can translate yours any time, because I know what it means. The highlighted sentence is actually so easy to understand that the confusion itself confuses me; if anyone does understand it, I wish they would write a proper US-English translation so I can see its weak spot. I can't make it any simpler than that, it's hilarious. I would need a teacher who can show me the problem. Ah yes, "car red" is not how the grammar I know works, but I can understand it, no problem. But to be honest, with some people I could communicate better using only my hands than with words here. Communication depends heavily on the will to understand, and on heart; those who do not want to understand never will, and traditionally some people have little will to adapt to others because they are the envy of the world. Anyway, this is a tech forum, and I'm running too far off-topic with this.


Both of those sentences are very hard to follow. From the first one I gather that you're trying to say you are unable to reach a conclusion on this matter. The second one I can't really parse at all.


The GPU has its own frame buffer (holding everything up to advanced rendering data such as textures) for heavy draw loads. Main RAM has comparatively little bandwidth and has almost no influence on frame rates or on the engine itself (it can be used to cut load times, but in the end the HDD is still the workhorse there). By far the most important thing is a strong processor; that has always been the case and will stay that way. PCIe x16 is NOT limiting in something like 99% of all games; I stand by that view, thanks. I did not ask about lanes or RAM and all that; I asked about the CPU, which is what this whole topic is about.

I am NOT saying that better I/O can't speed things up. The issue is that the combined (integrated) solutions aren't powerful, while the dedicated solutions lack strong I/O and barely use what they have (so faster I/O makes close to no difference), plus a CPU that sits somewhat separate and without load. It doesn't have the same need for I/O as a heavily used server CPU.

Yes, I/O could speed things up, but there is no focus on providing a fast one, and who is responsible? Intel (they provide the chipsets and almost every technology in today's home PCs; their CPUs/chipsets are used almost everywhere), so apparently they don't want us to have fast dedicated solutions. In theory, frequency-based motherboards are already outdated; we need boards that communicate optically, at light speed. Electrical signalling is already old tech; we see the change happening in Internet transport, but the boards are still old school, very inefficient, not high-tech at all.

Also, when someone has endless RAM it makes no sense to keep hammering the drive, just as it makes no sense to stomp on the GPU while the CPU sits nearly idle. A server uses RAM in HUGE amounts, which is common sense because RAM is much faster than a drive. Only the initial data needed to start the server actually lives on the HDD; the rest is effectively a RAM drive!

We can see how it works on servers, so yes, home PCs are in fact stone age, but we still support them with heart and soul and feel supreme when we get a new "whoa, what a powerful... board". :D

But the biggest sinners are the devs, for not using the RAM and not using the CPU to their real potential and efficiency, so that even people on an iPhone can have a good time. Although soon an iPhone will be more high-tech than a PC; face the hard truth.
 
After reading all this, I still don't see the problem, unless you want to see full CPU utilization for some odd reason.

If you want your CPU to gobble up every cycle, try running Prime95 or Folding@home.
The whole reason for these massive GPUs is to offload a lot of the calculations from the CPU, so the CPU can do other things while you are gaming, like downloading the latest updates from Microsoft or running a spreadsheet.
All I can say is, if you don't like your system not using all those "extras", you can always donate your rig to me, and I will gladly give you my PIII, which I guarantee will choke and gag on the latest games with the CPU at 100% load almost all the time.

I think your biggest problem is that you want to overclock like crazy but don't see any benefit from the extra speed.
Older systems DID benefit from a faster bus, back in the days of the PIII and before, because raising the bus speed was the only way to increase the clock speed of the CPU, which increased the performance of the whole rig. But most of today's rigs are "bus locked", meaning the RAM and PCIe run at their defaults while the CPU bus is the only thing being overclocked. That, I believe, is where the difference lies.

Hope this makes more sense than the OP :screwy:
 
It just means that you should use the resources you have. If it's too hard to understand, I'll get a translator; they will put it in the right format so you people understand, because not everyone is able to adapt, but that is usually the weakness of a computer program, not of a human.

You may want a translator, honestly. We're not getting what you're saying.

I just don't follow what that means.

But to be honest, with some people I could communicate better using only my hands than with words here.
In English that almost has sexual connotations. How would you communicate a CPU/GPU bottleneck issue with your hands? This is why we are so completely confused by your wording. Please get a translator if this is important to you. I am not trying to be rude. Nobody here is understanding your English, and it's not our fault. I feel like you're trying to blame us somehow for our inability to understand you. We would like to understand, but it just isn't coming across clearly. It's not that we don't want to have this conversation with you; we just can't communicate clearly with each other. I know you can understand me, but I cannot understand you.

:)
 
CPU utilization is not easy to reason about.
Some operations can even execute in parallel on the same core, for example, if they don't share the same circuitry inside the core.

Parallel programming is a pain in the ...
The techniques used to implement it introduce overhead into the code, so scaling will never be a clean multiple of the number of threads used.

Since "good" parallel programming is difficult to achieve, most game companies write their engines with simple parallel code, which generally means: run the update on one core, sound on another, and, e.g., networking on another (I'm keeping it simple here; in reality it isn't exactly like that).
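As a very rough illustration of that layout (my own toy sketch, with sleeps standing in for real work, and simplified in exactly the way I said real engines aren't):

// naive_engine.cpp - the "one subsystem per thread" layout described above.
// The slowest subsystem sets the frame time while the other threads sit idle,
// which is one reason total CPU utilization stays well below 100%.
#include <chrono>
#include <cstdio>
#include <thread>

void update_world() { std::this_thread::sleep_for(std::chrono::milliseconds(9)); } // gameplay / physics / AI
void mix_audio()    { std::this_thread::sleep_for(std::chrono::milliseconds(2)); } // sound
void pump_network() { std::this_thread::sleep_for(std::chrono::milliseconds(1)); } // networking

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        auto t0 = std::chrono::steady_clock::now();
        std::thread game(update_world), audio(mix_audio), net(pump_network);
        game.join(); audio.join(); net.join();       // the frame ends when the slowest thread ends
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - t0).count();
        std::printf("frame %d: ~%lld ms, bound by the slowest subsystem\n", frame, (long long)ms);
    }
}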

Trying to optimize code to run in parallel costs too much in debugging time; whatever works single-threaded on one core will most probably run on any other.

Even so, right now we are stuck with a deadlock in our engine's multi-threaded updater that only occurs on a Core 2 Duo.

In such cases it is really tough to find the bug.

That said, you will never see 100% CPU utilization; generally one thread ends up bottlenecking the others.
I could keep explaining for hours; the question is not easy to answer, and neither is programming. CPU utilization near 100% can be achieved with a multithreaded program where each thread is independent of the others and there is no bottleneck of any sort.

Oh, and for THEOCNOOB: you should see an Italian gesturing a CPU bottleneck :p
It's not something you do actively, but more passively while you speak; it just... happens.
 
Thanks for the explanation of game-code parallelization! That makes sense.

Sounds like, from a gaming point of view, we don't really need multiple cores so much as higher single-thread performance for faster games, right?

That makes me wonder whether, if we had faster monolithic single-core CPUs with more built-in instructions (CISC is the term, I believe), we would have better game performance, which is just the opposite of the direction CPUs are going. GPUs still present themselves as a single specialized processor, so maybe the OP is onto something: if the GPU can do all the "work" of gaming versus the CPU, I wonder if you could actually do that and code games to run without the CPU?
 
100% CPU utilization would be bad, as anything going on in the background would cause stutter or lag in games.

That said, seeing both low CPU and GPU percentages in games just means you do not need to buy new hardware, I guess. :) Might as well spend more on software. :)
 
Increased efficiency and improved instruction sets are used to reduce CPU load, not add to it. Just because you're not seeing 100% CPU utilization does not mean that something is poorly coded or inefficient. It simply means that the CPU is handling the data it needs to, efficiently.
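One concrete (and deliberately simplified, purely illustrative) example of what an improved instruction set buys: with AVX, a Sandy Bridge core can add eight floats in a single instruction instead of looping eight times, so the same work takes fewer cycles, which is lower load, not higher:

// avx_add.cpp - the same element-wise add, scalar vs. AVX (compile with -mavx).
#include <immintrin.h>
#include <cstddef>

void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];                         // one addition per iteration
}

void add_avx(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {                      // eight additions per iteration
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i)                                // scalar tail for the leftovers
        out[i] = a[i] + b[i];
}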
 
Increased efficiency and improved instruction sets are used to reduce CPU load, not add to it. Just because you're not seeing 100% CPU utilization does not mean that something is poorly coded or inefficient. It simply means that the CPU is handling the data it needs to, efficiently.

This.

Microsoft has been working hard with DirectX to take the CPU out of the GPU's path. The CPU used to be the bottleneck in the DX9 era, partly because the CPU handled everything on the computer, including tasks meant solely for the GPU. That was huge overhead in DX9, and by DX11 we finally see some progress in that department.

It's a good thing.

CPUs are here to stay; there is no way around having a main brain in the computer. If anything, GPUs are the ones in danger. Not really yet, but at some point CPUs will once again regain GPU capabilities; at least that's what PC news sites have been saying for the last five years or so.
 