
Is this the wrong CPU?

All great points!

As far as proof, I linked up a review in my post that should satisfy that question. :)

Yes. The FX 4XXX CPUs are a definite bottleneck for any high end GPU. Especially two of them. Just like an Athlon 860K is a bottleneck on a high end card, or an FX6300...
It's well known that to get the max performance out of your high end GPUs you should have an i5 or i7.
 
Not really on single GPUs... http://www.anandtech.com/show/6934/choosing-a-gaming-cpu-single-multigpu-at-1440p/5

Perhaps that is a bit different at 1080p though...?

I've seen much more drastic comparisons where the 4-core FX is down by like 20-30% vs the 4-core i5/i7. I believe the ones I'm thinking of were at 1080p. Regardless, even the graphs you linked show a pretty drastic difference. Not new-purchase-inducingly drastic, mind you, but pretty significant nonetheless.

http://www.ocaholic.ch/modules/smartsection/item.php?page=0&itemid=1117

Here's a comparo that is fairly damning of the 4350 at times, especially at lower resolutions (at 1024p the 4350 is 40% behind the i5 in Battlefield). Not as drastic as I remember seeing, so maybe I was hallucinating. Still, I wouldn't want that FX 4-core, especially for CPU-intensive stuff like MMOs or StarCraft.
 
I see what you are saying in CPU heavy (RPG type) games. But outside of that, they are pretty darn close, so it seems the game really makes the difference there. ;)
 
I think you are correct, E_D, and as well the OP has decided what HE is going to do, which is always a plus since he is the one who has to spend the coin.
RGone...ster.
 
@RGone I don't think I could ever just closet components; they might sit in a box for a while, but if I got the components, I want to use them.

@E_D Such an interesting page there. I guess I should just take one of the cards out and use it as a single-GPU system, and look towards maybe building another system with components that will utilize each other better.

I have decided to keep the clunky 4350 for the interim, being as it's Xmas and coin is stretched already. I'm going to try and OC this CPU to see where I can get with it, for nothing else other than a learning experience; I won't go too crazy though. But this CPU would be better matched with, say, an R270-280 GPU and OC'd a tad than it is with the two 290s. If I were to build a new system centered around the 290s, it would be better to go with the competition for its ability to scale multi-GPU setups better. Is this a correct summary of what I should have learned from all this?
 

I think you summarized your plans well and expressed it correctly. Sounds like a reasonable approach to me. Keep us posted.
 
If I were to build a new system centered around the 290s, it would be better to go with the competition for its ability to scale multi-GPU setups better. Is this a correct summary of what I should have learned from all this?

Keep in mind that you don't need to build a new system per se. All you need is a new CPU/mobo, or just a new CPU. In either case your RAM, GPUs, storage, case, power supply, etc. would be totally reusable.

And yes I would say that's exactly the summary of this thread :).
 
PCI-E 3.0 is a non-factor; the bandwidth isn't usable by anything around today.

The i5 is pointless. An 8-core i7 will certainly outperform an 8-core FX in almost any situation, but an i5 will not. The i5 has superior IPC, but its cost is not borne out in its performance.

I've overclocked the living crap out of my 6-core Zambezi. I use BDC to improve my performance and honestly get phenomenal performance for my investment. An 8-core Vishera would be a modest improvement in performance, but even an i7 3770K would be a notable performance boost.

The "true core" thing everyone spouts about is nonsense. The FPU wasn't an integral part of a CPU for decades.

The truth is that floating point isn't used as often as integer instructions. The weakness in the architecture is that the cores aren't well fed by the scheduler. A 6-core FX can do 6 integer instructions, or 3 integer and 3 FP instructions, or a few variations thereof, at one time. The IPC is what lets it down: Intel can do more instructions per clock cycle - this helps offset the frequency difference between Intel and AMD in general - and the FX cores (especially Zambezi) are not efficiently kept busy, further impacting real-world IPC.

The older 4-core (w/ Hyper-Threading) i7 series was only slightly ahead in IPC versus the newer Vishera-core AMD chips, but the scheduler keeps their real and virtual cores fed better. The only time AMD really looks good is in applications that keep 8 cores quite busy with mostly integer instructions (this hinders the Intel virtual cores, because they aren't physical cores at all, and if you fill a core there is no idle time to work on a second thread).

The BDC app really helps the older Zambezi-core AMD FX chips come alive, BTW. I was told it also helped with scheduling on Vishera-core chips too.
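The "6 integer, or 3 integer + 3 FP" arithmetic above falls out of the module layout: each Bulldozer module pairs two integer cores with one shared FPU. Here's a toy sketch of that constraint (an illustration only, not a real simulator; `fx_issue_per_cycle` is a made-up name, and real issue behaviour is far more complicated):

```python
# Toy model: each FX module has 2 integer cores but only 1 shared FPU,
# so FP-heavy threads contend for the module's floating-point unit.

def fx_issue_per_cycle(modules: int, fp_threads: int) -> tuple[int, int]:
    """Return (integer ops, FP ops) this toy FX can issue in one cycle."""
    cores = modules * 2                # 2 integer cores per module
    fp_ops = min(fp_threads, modules)  # at most 1 FP op per module (shared FPU)
    int_ops = cores - fp_ops           # the remaining cores do integer work
    return int_ops, fp_ops

# A 3-module (6-core) FX, matching the numbers in the post:
print(fx_issue_per_cycle(3, 0))   # (6, 0): 6 integer commands
print(fx_issue_per_cycle(3, 3))   # (3, 3): 3 integer + 3 FP commands
```

Under this simplification you can see why mostly-integer workloads are the FX's best case: every extra FP thread past one per module just steals an integer slot.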
 

I bow to your superior knowledge, sir.
 
Yes. The FX 4XXX CPUs are a definite bottleneck for any high end GPU. Especially two of them. Just like an Athlon 860K is a bottleneck on a high end card, or an FX6300...
It's well known that to get the max performance out of your high end GPUs you should have an i5 or i7.

I agree with most of what you're saying, but not all. That graph E_D linked, I assume, was done with all CPUs at stock? I can't speak for a quad FX, but I've run many 8-cores so far, and I just want to reiterate what was already said: if the game is CPU-intensive, the Intel will come out on top. But just as an example, in the Catzilla 1440p benchmark my 9370 with 2x GTX 770 kicked the pants off my 4770K by 20% using the same cooling.
http://hwbot.org/submission/2519724_johan45_catzilla___1440p_2x_geforce_gtx_770_12100_marks
http://hwbot.org/submission/2515292_johan45_catzilla___1440p_2x_geforce_gtx_770_10103_marks
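The "20%" there checks out against the two scores in the hwbot links (12100 marks on the 9370 vs 10103 on the 4770K):

```python
# Sanity check of the quoted "20%" gap from the two linked hwbot scores.
fx_score, intel_score = 12100, 10103
gain = (fx_score - intel_score) / intel_score
print(f"{gain:.1%}")   # 19.8% -- roughly the quoted 20%
```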
 
Ahh, OK. It's funny how different benchmarks look at these CPUs differently; Firestrike doesn't see it as eight cores in the combined test. At 720p they were pretty much identical, but I did find the key to making the AMD perform with the SLI is the HT bus. I was running it at 3600 MHz compared to the 2600 MHz stock speed.
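For what it's worth, a back-of-the-envelope look at why the HT link clock could matter with two cards. This is a sketch assuming the usual 16-bit, double-data-rate HyperTransport link on AM3+ boards; the actual link width is board-dependent, and `ht_bandwidth_gbps` is just an illustrative name:

```python
# Rough HyperTransport bandwidth estimate, per direction.
# Assumption: 16-bit link, double data rate (2 transfers per clock).

def ht_bandwidth_gbps(link_mhz: int, width_bits: int = 16) -> float:
    transfers_per_sec = link_mhz * 1e6 * 2            # DDR signaling
    return transfers_per_sec * width_bits / 8 / 1e9   # bytes/s -> GB/s

stock = ht_bandwidth_gbps(2600)   # ~10.4 GB/s
oc = ht_bandwidth_gbps(3600)      # ~14.4 GB/s
print(stock, oc)                  # ~38% more link headroom at 3600 MHz
```

If the two GPUs' traffic was anywhere near saturating the stock link, that extra ~38% of headroom would plausibly show up in SLI scaling.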
 
Ahh, OK. It's funny how different benchmarks look at these CPUs differently; Firestrike doesn't see it as eight cores in the combined test. At 720p they were pretty much identical, but I did find the key to making the AMD perform with the SLI is the HT bus. I was running it at 3600 MHz compared to the 2600 MHz stock speed.

Can you elaborate? This is new information to me.
 
Yep! Catzilla likes cores is why. 8 "real" cores vs 4 real cores + 4 HT. :)

Catzilla is a badly designed benchmark; that's why it likes weird configs. Really, check it on various setups and you will see that sometimes 4 cores give better scores than 6-8 cores, even though some parts of the benchmark look better. Also, every version of Catzilla acts in a different way. Old versions + GTX 900 cards = bugged scores. The same with some AMD drivers and Catzilla versions. Bugged scores = the last tests come out much higher than they're supposed to be, as there are black screens or delays between tests, during which the benchmark is counting the maximum possible FPS even though you can't see it.
I'm not saying here that AMD is slow, etc., but that Catzilla is a bad example of a benchmark for comparison.

As I said many times, I just can't see AMD beating Intel in anything at a similar price point. The only good option can be the cheapest AMD quads, if you are building a budget gaming PC for games which need more than 2 threads.
Games are still barely using anything above 4 threads, so an i3 or i5 is the best option for most gamers, especially when you stick to 1080p (as 90%+ of gamers do).

Looking at the AMD price, you are not considering the need for a better cooler, a proper motherboard (which costs more than a regular one), and additional things which may cause issues during daily work. AMD is not bad, but you have to know what you are buying. Most users are not aware of that, and later we see many threads in the AMD section about various issues.
 
Looks like it's only Catzilla behaviour. HT is generally not helping with anything (it's also mentioned in about every good FX OC guide).
 
That was a one-off trial; I had a goal to beat my 4770K. Didn't try it with anything else, to be honest. Maybe I'll revisit it this winter when I have the time and see if there really is something to it, or if it's just the benchmark. My gut feeling is there is; how much remains to be seen. I have my HTPC set up for gaming now, and once I have a few more titles downloaded I can do an FPS comparison, I suppose. It has normal cooling, the 770s, and an FX 8320 in it. That chip is fairly new to me, so I don't know its limits yet, but I'm sure I could do something, and games would likely show the most real-world difference.

Looks like it's only Catzilla behaviour. HT is generally not helping with anything (it's also mentioned in about every good FX OC guide).

Yes, but they're not talking about an SLI setup either.
 