9590 w/ 1070

A good way of seeing the difference is running 3DMark Vantage. In that benchmark, you can have the CPU run the physics or the GPU run the physics. The difference in physics scores between the two is huge (and then so are the total scores).

But loch said it very well about the difference.. +1 there. :)
 
More or less... but using a GPU for it is much faster. PhysX in games is different than the physics test in 3DMark; I just mentioned the 3DMark comparison so you can see the huge difference between CPU and GPU physics.
 
I guess I don't understand then why anyone would author a game that depends heavily on the CPU when a good GPU will handle the job much more easily. If physics is not the issue, then what is it that uses the CPU in games that cannot be done well by the GPU?
 
I have an 8350 @ 4.64GHz and SLI 1070's, and everything runs great. I came here from another forum where nobody ran AMD and everyone pushed i5s and i7s on people, and if you said what I just did, everyone would obsess over bottlenecks. I get no stutter and no lag, so when people kept going on about how much better an i3 was than an 8350, I'd had enough of their crap and left their site for good, deleted my account, and I no longer recommend their site to people.

A 1070 is a great card for that CPU. You might have some bottlenecking vs a 6700K, considering AMD is on DDR3 and new Intel is on DDR4, but honestly you won't notice. Nobody has noticed bottlenecking since single-core CPUs, when it actually meant something, and anybody claiming such is not someone you want recommending anything to you. The only real instance of bottlenecking is running a game that requires more cores than your CPU has.
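For what it's worth, the mental model behind the term is simple: the CPU and GPU work on frames in a pipeline, so whichever side takes longer per frame sets your frame rate. A toy sketch of that idea (all numbers made up for illustration):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame costs in milliseconds (made-up numbers).
    double cpu_ms = 12.0; // game logic, draw-call submission, CPU physics
    double gpu_ms = 8.0;  // time the GPU needs to render the frame

    // The stages overlap, so the slower one sets the frame time.
    double frame_ms = std::max(cpu_ms, gpu_ms);
    std::printf("~%.0f FPS, limited by the %s\n",
                1000.0 / frame_ms, cpu_ms > gpu_ms ? "CPU" : "GPU");
    // A faster GPU changes nothing here until cpu_ms drops below
    // gpu_ms; that is all "bottleneck" has ever meant.
}
```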

Also, with a PCI-slot blower fan blowing directly onto my NB (it fits snug between my CPU heatsink and my GPU backplate), I got 3015MHz HyperTransport stable by increasing CPU-NB voltage +0.15V. Just something to shoot for if you're overclocking.

Let me guess..... Tom's Hardware? A dual-core i3 is better than an 8-core processor.... I literally hate the word bottleneck. It is thrown around like the word "download" was not long ago: it gets used to explain a lack of performance when someone doesn't specifically understand why. I am running two RX 480's with an FX 9590, and even when I hit high CPU usage in games like Dying Light (over 90%), both GPUs are being used at 100%. People forget that there is power management built into today's GPUs, so usage can fluctuate and have nothing to do with CPU performance. I know The Division was giving me fits in CF and sometimes the FPS would lock to 17-20, but recently the developers patched the game, cut down on the CPU usage, added DX12, and fixed a few things. I am getting 60FPS @ 3440x1440 with none of the issues I experienced before.
 
Nvidia purchased the company that was creating chips to handle the physics effects in games. That capability is now a proprietary thing added to their GPU's. Chips that handle codecs or specific instructions at the hardware level are so much faster than, say, a CPU doing it through software. There is no way to use PhysX in a game without an Nvidia GPU and not take a performance hit. In fact, in KF2 you can't even see some of the effects, since they are not available to AMD GPU's.
 
I bench Intel, AMD and Nvidia, and I can tell you right up front the FX CPU will consistently score a lot lower in the 3D portion of benchmarks with a high-end GPU. You can call it whatever you like, but it's still reducing the GPU throughput.
 
5 FPS is not exactly what I would call "a lot lower". I look at what I get, compare it to benchmarks with Intel, and it is pretty close to the same. In my experience, I have learned that I should just rely on my own experience building PCs.
 
My gamer/HTPC is an FX-9370 and a GTX 980. I'm not an Intel fanboy, just a realist. Even with DX12 it still hasn't closed the gap.
I never said you couldn't game with it; I do all the time. I'm just saying an 8-core FX up against an i7 (4c/8t) will not get the same output.
 
DX12 was added to The Division, and that change plus the lower CPU usage has put me at 60FPS average at 3440x1440. That is a lot closer to the benchmarks I have seen with an Intel CPU (within 5 FPS, to be exact).
 
Many people don't realize that an overclocked high-end 8-core AMD FX CPU doesn't even perform as well as a stock Haswell or Skylake i5 K-series when it comes to game performance. For some time now, the per-core performance of the AMD FX line has been seriously behind Intel's.
 
Just to add some information to this....

Nvidia purchased the company Ageia, the makers of PhysX. Ageia was founded in 2002; Nvidia made the purchase in '08.

Nvidia does not use Ageia "chips" on the GPU. They purchased Ageia for the software more so than the hardware. Nvidia's CUDA cores handle the work that Ageia's PPU (Physics Processing Unit) could do, so there was no need to add a PPU to the PCB of the GPU.

Actually there are several older games that you can still use Ageia PPUs with; among the most recent would be Ghost Recon Advanced Warfighter (2006) or the super-hit title Unreal Tournament 3 (2007) with the PhysX map pack mod.

The performance of the Ageia PPU was about the same as what a 9800 GT could do. By today's standards, that's weak. It was actually weak when released; the PPU was just barely big enough. Ran hot, too.

The 8-series cards, i.e. the 8800 GTX/Ultra, were the first cards to support Ageia/Nvidia PhysX. Since we were just getting into quad-core processing, you'd almost need the Ultra to enjoy PhysX with some decent detail in the game.

The PPU or CUDA cores take a heavy load off the CPU. Because the physics runs on its own thread, and because CUDA cores are more efficient than a general processor at this type of work, it helps add particle count and randomness. A particle is now free to fall where it wants to instead of to a set destination.
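To make that concrete, here's a rough sketch of the kind of per-particle update being described (names and numbers are made up). The point is that every particle's step is independent of the others, which is exactly the shape of work that spreads across thousands of CUDA cores but bogs down a handful of CPU cores:

```cpp
#include <cstdlib>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// One simulation step. Each particle updates independently of all the
// others, so the loop body can run on thousands of GPU cores at once.
void step(std::vector<Particle>& particles, float dt) {
    for (Particle& p : particles) {
        p.vy -= 9.81f * dt;                                   // gravity
        p.vx *= 0.999f; p.vy *= 0.999f; p.vz *= 0.999f;       // drag
        // a dash of randomness so debris doesn't land at a set spot
        p.vx += 0.01f * (std::rand() / (float)RAND_MAX - 0.5f);
        p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt; // integrate
    }
}

int main() {
    std::vector<Particle> debris(5000, Particle{0.f, 10.f, 0.f, 0.f, 0.f, 0.f});
    for (int frame = 0; frame < 60; ++frame)
        step(debris, 1.0f / 60.0f);  // one second of simulation at 60 FPS
}
```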

The Ageia PPU was a RISC processor.

I had 3 of them (multiple times over) through the years. Burned up a couple of them, too.

Asus PPU at 533MHz
DELL PPU at 500MHz
BFG PPU at 500MHz

Anyhoo. About the AMD cards and no NV PhysX.

There was/is a way to run PhysX with an AMD GPU. I did it once or twice, but with much older hardware than most of us use today. In a nutshell, you install the Nvidia card drivers including PhysX. You must copy and save the PhysX files, including any in Win32. Then you uninstall the Nvidia drivers and put the PhysX files back where they belong, with the exception of one certain file that needed to be placed elsewhere (I do not remember exactly....). I believe the AMD dedicated graphics card drivers are installed after this procedure. But like I said, it's been a long time, quite some years now.

But for the most part, what I noticed through the years is that people are not concerned about particle count. It's about that fine, fine picture. Can we game at 1440 yet? How clean can we get it all to look before we decide, you know what??.... wouldn't it be neat if there were 5,000 particles flying through the air? Total environmental, realistic physics where size, weight, momentum, gravity, resistance and more actually make the game "feel" more realistic.

If Nvidia was smart, they could design a PPU for AMD users.......
 
I was not implying that there is a separate physical chip in the GPU, but Nvidia didn't buy a hardware company just for their software..... There is a distinct difference between hardware and software PhysX. There are no AMD GPU's available that can run PhysX, because it is an instruction set that is built into the GPU. You did not run PhysX on a non-Nvidia GPU.

You only have two options with an AMD card when you want to use older PhysX. If you enable PhysX without an Nvidia GPU, the CPU will do all of the processing. That is called "software" PhysX, because you do not have the hardware instructions in a GPU to run it natively, so there has to be some emulation. That creates a lot of overhead, so there will be a definite performance hit. The other method is to use two types of GPU's in your system. You can have an AMD GPU as your primary card, but then have an Nvidia card (usually old or cheap) as the second. You would then have to install modified drivers, but you could get it to work. The AMD GPU would handle all of the graphics, but the Nvidia card would handle only PhysX.

Of course the PPU's were RISC chips lol. They were only made to run PhysX. That is what a RISC processor is: a chip that runs a reduced, simple set of instructions, very different from our CPUs. If you know anything about Nvidia, they will not design anything for AMD. It is their competition, and they take every edge they can get over it. Now that the new APIs (DX12, Vulkan) can use multiple GPUs natively, we may be able to use an Nvidia card just for PhysX again.
 
The driver itself is software. You cannot run any hardware without software. PhysX has environmental elements, for example: how would a particle know what to do if there wasn't some form of coding?

Instruction sets run software..... the SDK and the PhysX engine.

A RISC processor is not much different than a GPU. An AMD card could very well run PhysX, but AMD does not have the rights to the PhysX engine and therefore cannot implement it.

Modern x86 processors are probably strong enough today to run a PhysX engine, but again, Nvidia simply owns the rights to this... unless AMD created their own physics engine.

The SDK, by the way, runs on AMD. I think we are at version 3.0, a newer version than the original SDK that Ageia ran on a RISC processor for PhysX.
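For anyone curious what "the SDK runs on anything" looks like in practice, here is a minimal sketch of bringing up a CPU-only scene with the PhysX 3.x SDK, from memory, so treat the details as approximate. Nothing here touches CUDA, which is why it behaves the same with an AMD card in the box:

```cpp
#include <PxPhysicsAPI.h>  // PhysX 3.x SDK headers
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    // CPU dispatcher with 2 worker threads; no GPU involved at all.
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    PxScene* scene = physics->createScene(sceneDesc);

    scene->simulate(1.0f / 60.0f);  // step one 60 Hz frame on the CPU
    scene->fetchResults(true);

    scene->release();
    physics->release();
    foundation->release();
}
```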

There's no reason to run multiple cards for PhysX when Nvidia's newer, larger GPUs can handle the task alone.

Want PhysX? Gotta buy Nvidia, because that's who owns the rights to the drivers, the PhysX engine and so on.

You cannot run modern PhysX on the older Ageia cards because driver support was dropped after Nvidia's purchase.

And people that buy AMD GPU's seemingly have little to no interest in PhysX, or they would have purchased Nvidia.... I'm a-guessing?!
 
Hmm, drivers are very, very different than an actual game engine. Microprocessors have instructions they can run at a hardware level that allow for fast execution of specific code. If that support isn't there, then you have to fall back to software. That is why, with an AMD GPU, you either can't run stuff like PhysX FleX at all, or where it is allowed, you run it via the CPU, which is the software level.

If you check the specs on any processor, there will be a specific section listing its hardware instructions. For example, to cut costs the Raspberry Pi ships without the ability to run MPEG-2 decoding at the hardware level. You can still play MPEG-2 files, but you take a performance hit because the decoding cannot use the hardware instructions. You can purchase a license, though, and you will receive a key which unlocks decoding at the hardware level, greatly improving performance.

Yes, you need the software for PhysX, but if your hardware cannot natively run those instruction sets, you are going to have to do it via software, meaning there is some sort of emulation happening and creating overhead. I do have an interest in having access to PhysX FleX on Killing Floor 2, for example, because realistic giblets flying all over. Currently, though, there is no option if you have an AMD GPU. You're welcome, btw, for me making sure there is competition for Nvidia.
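For anyone who wants to try that Raspberry Pi example, the process as I remember it: read the board's serial number, buy the MPEG-2 license keyed to it, and drop the key into /boot/config.txt. Roughly like this (the 0x value below is a made-up placeholder, not a real key):

```
# Find the serial number the license gets keyed to:
#   grep Serial /proc/cpuinfo
# Then add the purchased key to /boot/config.txt and reboot:
decode_MPG2=0x12345678
```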

I think you are mostly right. One thing Nvidia could do is sell PhysX licenses so that other GPU makers can include it.
 
Somehow I have a feeling PhysX itself is quite dead on the current market. Even though some games are using it, barely anyone cares to add support for it in new games. Where there are physics calculations, in most cases they are done via DirectCompute or something like that, and the CPU performs them (like all the recent 3DMarks use the CPU for their physics tests). There are/were games which use two cores for the base game and two cores for physics only (I think it was an earlier Unreal, but I don't remember now).
Recently most games are designed for consoles, as that is where the highest profit is. Consoles have no PhysX support, so most games designed for Xbox/PS simply don't have PhysX support on PC. There are always exceptions.

Simply put, if modern CPUs can handle physics calculations without a PhysX license from Nvidia, then who cares about PhysX...
 
Go look at some reviews at TechSpot... they test multiple CPUs. In many titles with a single card, the GPU is held back (since you don't like the term bottleneck) by the CPU. Add another GPU and it gets worse; they can't keep up.. glass ceiling. Period.
 