USER B:
I mean, a CPU could do more than that, but Nvidia essentially disables it on purpose. They tell us that PhysX of comparable quality is impossible on a CPU, and even if we had a one-million-core supercomputer they would still say the same. People should stay tuned for future CPUs, like Ivy Bridge with its new transistors and whatever else comes; CPUs aren't standing still. It's true that in a few games PhysX still shines, but it's really just a handful, and for most people it's barely worth it, especially if they have no interest in those games (I'm an RPG gamer, so I have little interest in shooters). I'm also not sure how efficiently PhysX is actually implemented, but one thing I'm sure of: CPUs can do more than they currently do, which is why the CPU has almost no impact on games anymore. Only clock speed really matters; the architecture is almost wasted, meaning most parts of the CPU go unused. I'm sure Intel will build some true monster CPUs while AMD builds a monster Radeon, or so it looks. Strong CPUs are currently underused in most games; owning one is overkill. Almost any other program gains more from them than a game does.
I'm not really a fan of either side, but in the current situation I'd rather support AMD's view, because CPU and GPU have always been a team, and Nvidia is slowly trying to break up a highly efficient and powerful team just to make its GPUs look superior. Whatever happens, they need to work toward a shared standard that developers can implement easily and that uses the CPU more effectively; it's not a useless part.
Performance-wise, in scientific terms, a current Sandy Bridge flagship can handle about 120 GFLOPS in double precision, while a 7970 manages around 950 GFLOPS in double precision (the strongest current single GPU). However, the GPUs used for PhysX usually aren't high end, and if we run physics on a single GPU, that GPU loses roughly 15-30% of its rendering performance, because that is the load the CPU could have taken off it (much more than 15% when the GPU is weaker than the current Radeon flagship). I get the feeling CPU throughput will keep rising in the near future, because it's Intel. As long as the physics fits into roughly 100 GFLOPS it can run without the GPU, leaving the GPU with more power for rendering. Even more powerful would be sharing the physics so the CPU is fully utilized and the GPU takes over the rest: imagine a slider that lets us choose how much of the physics load to hand to the CPU, from 1 to 100% (if the CPU is overloaded it will simply destroy performance, much like an overloaded GPU). Of course the CPU still has the advantage of being a jack of all trades, while a GPU is very hard to adapt to anything else, so the main focus has to be on getting the two GPU camps (Radeon/GeForce) onto a shared standard for this. Software like that doesn't exist on this planet yet; it's somewhere on an unknown one.
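To illustrate what I mean with the slider, here is a minimal sketch (Python; gpu_step is a hypothetical stand-in for whatever an engine's real GPU dispatch would be, not an actual API):

```python
# Sketch of the "slider" idea: split the physics bodies between a CPU path
# and a GPU path by a user-set fraction. gpu_step is a placeholder; here it
# just reuses the CPU code so the example stays self-contained.

def cpu_step(bodies, dt):
    # naive Euler integration for the CPU's share of the work
    return [(x + vx * dt, y + vy * dt, vx, vy) for (x, y, vx, vy) in bodies]

def gpu_step(bodies, dt):
    # placeholder: a real engine would upload this batch to the GPU
    return cpu_step(bodies, dt)

def step(bodies, dt, cpu_share):
    # cpu_share is the slider: 0.0 = all GPU, 1.0 = all CPU
    split = int(len(bodies) * cpu_share)
    return cpu_step(bodies[:split], dt) + gpu_step(bodies[split:], dt)

bodies = [(0.0, 0.0, 1.0, 2.0)] * 10_000
bodies = step(bodies, dt=1 / 60, cpu_share=0.4)  # hand 40% to the CPU
```

In a real engine the split would probably be load-balanced automatically instead of fixed, but the principle is the same: the slider just decides how many bodies each processor integrates per frame.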
Why isn't a CPU stronger than that? A huge number of transistors go into cache (billions!). But we will soon reach a point where we don't need so much of it anymore and can spend those transistors on raw compute instead. An 8-core Ivy Bridge with 3D transistors... I wonder about its compute performance. Perhaps 150-200 GFLOPS. Some people might get a dual-CPU board: 300-400 GFLOPS? For people like that, an engine that can hand work over to the CPU is critical. Who knows, but the CPU is not weak, and it's far more adaptable to other tasks. Intel will take its time and climb the ladder slowly, though. Why? Because they can; there is no competition.
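For reference, the usual back-of-envelope formula for peak throughput is cores × clock × FLOPs per cycle. The clock speeds below are my assumptions, not confirmed specs:

```python
# Peak double-precision throughput: cores * clock * FLOPs-per-cycle.
# Sandy/Ivy Bridge with AVX can retire one 4-wide DP add and one 4-wide DP
# mul per cycle per core, i.e. 8 DP FLOPs/cycle. Clocks are guesses.

def peak_gflops(cores, clock_ghz, flops_per_cycle=8):
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(4, 3.8))  # ~122 GFLOPS: close to the ~120 quoted for SB
print(peak_gflops(8, 3.0))  # ~192 GFLOPS: inside the 150-200 range guessed above
```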
USER B:
I still wonder why CPUs nowadays simply don't push games (in any serious way) anymore, and what the real issue behind that is. I'm still not sure, but CPUs clearly aren't fully utilized, otherwise I would see a different kind of load. They affected games much more strongly up through the Core 2 Duo era, but since Nehalem and Sandy Bridge came out, it's over. So do Nehalem and SB really only win by introducing new instruction sets, while otherwise being barely more powerful than their predecessors? Good question. I'm just about done buying powerful CPUs, because they simply don't benefit gamers anymore. As for the GPU, that usually depends entirely on detail settings, so there is no general answer. At 720p with the lowest settings almost any GPU hits like a truck (which is why consoles can run games at all; their GPUs are a joke).
But the biggest joke, and I've seen this in some games, is that neither the CPU nor the GPU is fully utilized, something like 40% load on both of them, and the FPS is still bad. So what can you say? Where is the bottleneck?
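My guess at the answer (an assumption on my part, nothing official): the game's main logic runs on a single thread, so one saturated core reads as low "total" CPU load:

```python
# One thread pegging a core at 100% shows up as low aggregate CPU load on a
# multi-core chip, so a completely CPU-bound game can look half idle.
# Core counts are just example values.

for cores in (2, 4, 8):
    reported = 1 / cores * 100
    print(f"{cores} cores, 1 saturated thread -> {reported:.1f}% total load")
```

On a quad core, 40% total load is only about 1.6 busy cores, which fits a game with one heavy main thread plus a couple of light helper threads.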
USER A:
The CPU is utilized more at lower resolutions, whereas the GPU is stressed more at higher resolutions.
USER B:
Actually, even though the CPU is utilized more at lower resolutions, testers nowadays have a very hard time pushing the CPU to its limits at all. In many cases it makes close to no difference, as you can see here: http://www.hardwaresecrets.com/artic...Review/1429/16
There it is: 1080p, but with every detail setting disabled, no AA, no AF. Even the FX-8150, which is considered "bad", almost keeps pace with the $1,000 SB-E. And that's at zero detail; turn the settings up and the CPU's impact drops from close to zero to true zero. You would have to test at SD resolution to separate them, which is completely unrealistic; nobody uses it, it just makes no sense. So what is Intel doing, what are the developers doing, where is the issue?!
The only reason for me to get a powerful CPU is to lower the heat output. Say an SB-E runs a game at 30% load while a weaker CPU needs 60% load: even if the weaker CPU has the same or a lower TDP, that doesn't necessarily mean less heat. In many cases the powerful CPU under light load will run cooler than a weaker one under heavy load. Heat depends heavily on load, without exception. A big CPU also has more die area to dissipate heat into the heatsink; I had my results, and other testers with the same system saw much worse temperatures than mine. The difference between full idle and full load can be up to 40°C (on rather weak coolers).
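As a rough illustration (the wattages below are invented for the example, not from any datasheet), package power scales more or less linearly with load between idle and maximum:

```python
# Crude linear model: package power = idle + (max - idle) * load.
# All wattages here are made-up illustration values, not datasheet numbers.

def watts(idle_w, max_w, load):
    return idle_w + (max_w - idle_w) * load

big_cpu   = watts(idle_w=15, max_w=130, load=0.3)  # fast chip, light load
small_cpu = watts(idle_w=15, max_w=95,  load=0.6)  # slower chip, heavy load
print(big_cpu, small_cpu)  # 49.5 W vs 63.0 W: the stronger CPU ends up cooler
```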
USER A:
At 1080p a good video card will, under most circumstances, be the primary driver of performance. If you drop down to 1280x1024 or even lower, far more of the work falls on the CPU, making that a much more CPU-dependent resolution than a higher one (1080p or larger). You can see this in how CPU-dependent older benchmarks like 3DMark01 and Aquamark are (although 3DMark03 is actually fairly GPU-bound, while 05 swings back to heavy CPU use).
USER B:
720p without details? At that point I might as well play on a console, which actually has more detail than "lowest". And sure, I can play any classic game at 100 FPS (Crysis doesn't count as a classic yet) without overloading the CPU. I usually cap at 60 FPS because anything above makes no sense; what those tests do is run games at unrealistic frame rates, which is exactly why the CPU load comes out so high.
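For anyone wondering, a frame cap is nothing more than sleeping off the unused part of each frame's time budget. A minimal sketch (sleep-based, so the timing is approximate, not production-accurate):

```python
import time

TARGET = 1 / 60  # 60 FPS = a ~16.7 ms budget per frame

def frame():
    pass  # stand-in for the game's update + render work

for _ in range(600):  # ~10 seconds of capped frames
    start = time.perf_counter()
    frame()
    # sleep off whatever is left of the frame budget; the CPU idles
    # instead of churning out frames above the cap
    leftover = TARGET - (time.perf_counter() - start)
    if leftover > 0:
        time.sleep(leftover)
```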
Note: This was partially carried over from other threads because the discussion there was drifting too far off-topic.