So the FX-4100 is a horrible fluke

Did I miss something here? You combined a $500 video card with a $100 processor right?

What did you expect?
Will a Ferrari on Walmart tires set fast lap times?
Will the best speakers you can afford sound great attached to a cheap amplifier from Walmart?

I'm with everybody else on the "this is how MMOs like this perform" thing, but honestly, why would you combine a premium product with a value product and expect something great?
 
I would say the game.

The one person that has done testing in this thread is using a slightly weaker GPU and a MUCH higher resolution monitor.

I have not done any testing but have not seen a game in the last few years that stays pinned at 100% GPU on stock or OCed GPUs or CPUs.

If you are never hitting 100% GPU in your game, I would say that is more indicative that you have a lot more headroom in your in-game detail settings. Maybe turn it up to ultra settings, or increase AA/AF or view distance.

I do not think CPU bottlenecking has been an issue since 1680x1050, and definitely not since HD resolution became mainstream (mainstream meaning $100 monitors).

HOWEVER, there are games that still use CPU physics. Those ARE CPU bottlenecks that may affect GPU performance. But Supreme Commander is one of those FEW games that is known to actually be CPU demanding, isn't it?
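One quick way to spot that kind of CPU-side bottleneck, for what it's worth: watch per-core load rather than the total CPU%. A game pegging one core near 100% while overall usage looks low is CPU-limited even though Task Manager's total says otherwise. A minimal sketch using Python's psutil (my own illustration, not from any review):

Code:
# Print per-core CPU load once a second; one core pinned near 100%
# while the rest idle suggests a single-thread (e.g., physics) bottleneck.
import psutil

while True:
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    flags = ["*" if c > 90 else " " for c in per_core]
    print(" ".join(f"{c:5.1f}{f}" for c, f in zip(per_core, flags)))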
 
The problem here is that games primarily use 1, 2, or 4 threads. When a GPU can only pull 50 fps max in a game, the difference is not going to be very big most of the time, but a lot of the time you see a 10-20% difference in average FPS between AMD and Sandy/Ivy Bridge. If all games were 8-threaded, and efficiently so (80%+ CPU load), then AMD FX would be great for gaming, minus the still-absurd power consumption compared to Intel.

A 4.6 GHz FX-8150 is only going to roughly match a stock 2600K in average frame rate with a decent midrange or higher-end GPU.

Furthermore, when you get into Eyefinity/Surround, the CPU means a lot...like 10 fps even when you average 40-50 fps. :eh?:

While games can be CPU or GPU limited, unfortunately you eventually reach a point where the CPU is not enough no matter what. It was just like when I owned an Athlon 64 X2 5600+: even at 3.4 GHz it fell pretty darn far behind Conroe when paired with anything much stronger than an Nvidia 8800/mid 9000 series or HD 3800 series card...
 
We are focusing on an FX-4100, which variously performs as a dual core or some semblance of a quad core.
I am becoming curious: the FX-4170 is supposed to perform much better, so I wonder what its results would be?

Most benchmarks do not list choke points by game/GPU/CPU combination.

Curious where the CPUs in this price range perform.
 
To sum this platform up in a nutshell (the "ADD version"):

Multi-threaded scaling above 2 cores on the FX-41xx series is reduced dramatically due to the nature of this architecture and the way resources are shared within the CPU. Single-threaded performance is bad too, but there is nothing they can do about that.

For example, Cinebench 11.5:
The FX-8150 scales about 6.6-6.7x multi-threaded over the full 8 cores,
http://www.xtremesystems.org/forums...in-server...&p=5016412&viewfull=1#post5016412
compared to about 5.7x for the Phenom II X6 and 3.8x for the Phenom II X4. I tested this myself; I'm not going to look up those results again.

However, this is the kind of effect the Windows 7 hotfix had, by making sure the scheduler picked cores in separate CUs so that they did not share resources:
The updates did not help single-threaded performance, and once you were running more than 4 single-threaded applications, performance in each application would decrease, since threads then have to share modules anyway.
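Out of curiosity, here is roughly what that module-aware placement looks like if you do it by hand. This is just my own sketch using Python and psutil, not the hotfix itself, and it assumes Windows numbers the FX's logical CPUs so that 0/1, 2/3, 4/5, 6/7 pair up into the four modules (verify that on your own box before trusting it):

Code:
# Sketch: mimic the Win7 scheduler hotfix by hand. Assumes logical
# CPUs (0,1), (2,3), (4,5), (6,7) pair up into the 4 FX modules.
import psutil

def one_core_per_module(threads_needed, cores_per_module=2):
    """Pick core IDs so no two threads share a module until they must."""
    total = psutil.cpu_count(logical=True)
    modules = total // cores_per_module
    picks = []
    for i in range(threads_needed):
        module = i % modules   # spread across modules first...
        slot = i // modules    # ...then start doubling up inside them
        picks.append(module * cores_per_module + slot)
    return picks

# Pin the current process: a 4-thread job gets cores 0, 2, 4, 6 --
# one integer core per module, so nothing contends for the shared FPU.
psutil.Process().cpu_affinity(one_core_per_module(4))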
 
I apologize for this double post, but I feel it is necessary to separate the information here; it will better clarify how the FX platform works.

While I know the discussion here is about the FX-4100, what I say here applies to all AMD FX-4/6/8-series CPUs. However, the core counts I refer to in this post are specifically those of the FX-8000 series chips.

Within FX, you have half of a full quad core and half of a full eight core: a quad core on the side of the CPU that does floating-point calculations, and an eight core on the side that does integer calculations. It is a very interesting design. AMD calls the section which houses one of the "real quad core floating-point units" that execute floating-point calculations a CU. Inside this CU is essentially one full core for floating-point calculations. Unlike previous designs, this FPU core can execute up to 2 threads; AMD calls this CMT. It works kind of like Intel's SMT (Hyper-Threading), but is done with dedicated hardware. The problem here is that this CPU is marketed as an 8 core, not as a 4 core with super-ultra-Hyper-Threading-like stuff. There are 4 CUs in the CPU.

Within the other half of FX is a true 8 core design. Within each "CU" are also two real, full integer cores, which of course do integer calculations. Each executes one thread, and there are 8 of them in total.

The problem here is that many application workloads are not just one or the other.
Let's say you have a single-threaded app that is integer heavy. If what I said about the integer part is true, doesn't that mean it runs very slowly? Not exactly, because the FPU part is roughly twice the size of each of these mini, but full, integer parts.

The same goes for FPU-heavy calculations. Do they run fast? No, they are bottlenecked by the not-as-fast, but highly threaded, integer parts.

Furthermore, you may be wondering why, in pretty much every application, FX loses to or merely equals Phenom II.
AMD had to give up a bit of that beefy Phenom II FPU: they redesigned a new one so that it can execute 8 threads, each more slowly if need be (like Intel's SMT, though Intel goes about it in a completely different way).

So what you end up with is something that is technically brilliant on paper but could not be executed brilliantly in a real product.
However, the idea is priceless. If only AMD had single-thread performance like, say, Intel's Sandy Bridge, they would have an absolute monster that would smash even Intel's 6 core, 12 thread Ivy Bridge-E CPU: anything using up to 4 threads would perform similarly to or better than Intel without Hyper-Threading, and anything using more than 4 threads would, thanks to CMT, scale much better than Intel's Hyper-Threading does.

However, AMD is not even half the company Intel is in terms of the budget, R&D, and brainpower needed to speed this architecture up.

However, the base idea behind the split FPU and integer cores is one they can use for a long time, even if they don't keep the same core structure in upcoming designs.
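A toy model makes it easier to see why the multi-threaded scaling numbers land where they do. The yield figure below is my own guess, not an AMD number: assume a thread that has a module to itself runs at full speed, and a second thread sharing a module only adds a fraction of a core because the front end and FPU are shared:

Code:
# Toy CMT scaling model (illustrative numbers only, not AMD's).
MODULES = 4
SECOND_THREAD_YIELD = 0.7  # guess: sharing a module costs ~30%

def throughput(threads):
    solo = min(threads, MODULES)        # threads with a module to themselves
    shared = max(0, threads - MODULES)  # threads doubling up in a module
    return solo + shared * SECOND_THREAD_YIELD

for n in (1, 2, 4, 8):
    print(f"{n} threads -> {throughput(n):.1f}x a single core")
# 8 threads -> 6.8x here, the same ballpark as the ~6.6x Cinebench
# scaling I quoted earlier; a pure-integer load shares less, so it
# would scale better than that.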



Now, on to how this works and can be seen in the real world:


All results at 6 threads or fewer are marginally slower (though there is a small margin of error), and the 7-8 thread results are faster by a small amount. Mind you, I ran the unpatched tests twice this time (wPrime) and kept the best results.
1-4 threads were consistently slower than with the hotfix. I don't have time to test the other two benchmarks tonight, unfortunately.

I'd also like to note that I got a BSOD earlier at idle with the hotfix installed. I hadn't gotten one without it at these clocks and it's been almost a week.

Myself said:
Tests after the Windows 7 patches, including the thread scheduler update that knows which cores are in which modules, instead of scheduling a 2-thread workload onto 1 module with shared resources.

Test System:
AMD Eight Core FX-8150 @ 4.69Ghz / 2.51Ghz CPU-NB
2x2GB DDR3-2133 CAS 7-10-7-27 160ns 1T
ASUS Crosshair V Formula
2x Western Digital Caviar Black 640GB WD6401AALS in RAID 0
XFX Black Edition 850w (Seasonic 850w M12D) 80 Plus Silver
2x AMD HD5770
Windows 7 Professional 64-bit SP1

Core Parking ON

wPrime 32M v1.55 -


1 Thread: 44.896 sec
2 Thread: 22.586 sec
3 Thread: 15.114 sec
4 Thread: 11.684 sec
5 Thread: 10.017 sec
6 Thread: 8.815 sec
7 Thread: 7.938 sec
8 Thread: 7.673 sec

1 to 4 Thread Ratio: 3.842x
2 to 4 Thread Ratio: 1.933x

1 to 8 Thread Ratio: 5.851x
4 to 8 Thread Ratio: 1.523x




Cinebench R11.5 -

1 Thread: 1.16 pts
2 Thread: 2.30 pts
3 Thread: 3.42 pts
4 Thread: 4.44 pts
5 Thread: 5.31 pts
6 Thread: 6.14 pts
7 Thread: 6.93 pts
8 Thread: 7.68 pts

1 to 4 Thread Ratio: 3.82x
2 to 4 Thread Ratio: 1.93x

1 to 8 Thread Ratio: 6.62x
4 to 8 Thread Ratio: 1.72x




7-Zip 9.20 - AES-256 Encrypted 10 Char. Password - 1003MB 200 File JPEG deflate to .ZIP

1 Thread: 8:14s (494s) 2030 KB/s
2 Thread: 4:12s (252s) 3980 KB/s
3 Thread: 2:46s (166s) 6040 KB/s
4 Thread: 2:15s (135s) 7430 KB/s
5 Thread: 1:52s (112s) 8955 KB/s
6 Thread: 1:38s (98s) 10234 KB/s
7 Thread: 1:29s (89s) 11270 KB/s
8 Thread: 1:21s (81s) 12382 KB/s

1 to 4 Thread Ratio: 3.65x
2 to 4 Thread Ratio: 1.86x

1 to 8 Thread Ratio: 6.09x
4 to 8 Thread Ratio: 1.66x
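For anyone checking my math, the ratios above are just the 1-thread result divided by the N-thread result (for Cinebench, where the score rises instead of the time falling, it's the other way around). A quick sketch of that arithmetic using the wPrime times; this is my own illustration, not benchmark output:

Code:
# Scaling ratio = time at 1 thread / time at N threads (wPrime).
wprime = {1: 44.896, 2: 22.586, 4: 11.684, 8: 7.673}

print("1->4:", round(wprime[1] / wprime[4], 2))  # ~3.84x
print("2->4:", round(wprime[2] / wprime[4], 2))  # ~1.93x
print("1->8:", round(wprime[1] / wprime[8], 2))  # ~5.85x
print("4->8:", round(wprime[4] / wprime[8], 2))  # ~1.52x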

Myself said:
Non-patched results (the Win 7 scheduler not recognizing which shared-resource "cores" sit together in a CU) versus patched wPrime 32M results (the scheduler knows Core X is in CU 1 and Core Y is in CU 2). Allow for a small margin of error (the 1-thread results should be equal for wPrime):
1 Thread: 45.116 sec / 1 Thread: 44.896 sec
2 Thread: 22.869 sec / 2 Thread: 22.586 sec
3 Thread: 15.725 sec / 3 Thread: 15.114 sec
4 Thread: 12.098 sec / 4 Thread: 11.684 sec
5 Thread: 10.640 sec / 5 Thread: 10.017 sec
6 Thread: 8.924 sec / 6 Thread: 8.815 sec
7 Thread: 7.831 sec / 7 Thread: 7.938 sec
8 Thread: 7.410 sec / 8 Thread: 7.673 sec

4 thread Cinebench R11.5 unpatched:
4.30 pts

Patched:
4.44 pts

4 thread Unpatched 7-Zip 9.20:
2:20

Patched:
2:15
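To make the unpatched-vs-patched wPrime comparison above easier to read, here is a quick sketch that turns the two columns into percentage deltas (just my own arithmetic on the numbers already posted):

Code:
# % change from unpatched to patched wPrime (negative = patched faster).
unpatched = [45.116, 22.869, 15.725, 12.098, 10.640, 8.924, 7.831, 7.410]
patched   = [44.896, 22.586, 15.114, 11.684, 10.017, 8.815, 7.938, 7.673]

for n, (u, p) in enumerate(zip(unpatched, patched), start=1):
    print(f"{n} threads: {100 * (p - u) / u:+.1f}%")
# 1-6 threads come out roughly 0.5-6% faster patched; 7-8 threads are
# slightly slower, since every module is full and spreading threads
# across modules no longer buys anything.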
 
The results are interesting, and known: generally an 8120/8150 on a top-tier motherboard, overclocked, with overclocked RAM, does quite well.
The question remains: what should we expect from a modest rig using an FX-4100 playing GW2 with an HD 7950?
 
I would not recommend it. An overclocked Phenom II X4 beats out the AMD FX-6000 series, both stock and overclocked, in almost every gaming scenario lol.
Keep in mind Core 2 Quad and Phenom II X4 were equals most of the time, with a slight edge to Core 2 Quad, too....

Pairing an FX-4100 with an HD 7950 is just like pairing a Core i3-2100 with a 7950. Those kinds of pairings leave me scratching my head.

The FX-4100 at stock is close to a stock A8-3870K. The 3870K starts bottlenecking GPUs as soon as you pair a 6670 with its integrated 6550D (the combination does not scale like it should), and you actually see average FPS gains just by increasing the CPU multiplier/frequency even with the 6550D alone. Overclocked, it gains a bit of an edge in gaming, but not in everything.

That tells me I should not pair the FX-4000 series with anything more than a 6870 or 7770, because these CPUs are not up to par with high-end cards.
 
Actually, the i3-2100 outperforms the A8-3870K as a CPU and does considerably better matched with a high-end discrete video card.
The A10 and A8 Trinity chips are much closer (the comparison I saw was against the i3-3220, and performance was close).

The Phenom II X4 beats the FX-4100 for the most part, but the FX-4170 is its equal, and sometimes the better performer, for gaming.

There was an article using an i3-2120 with an HD 7970; with the exception of a few CPU-intensive games, it did extremely well.

For a budget system, "balance" in performance can be a goal; on the other hand, maximizing the video card gives strong performance in many (most) games. There is always the possibility of upgrading the CPU later for more performance.

If you want details, check out Tom's Hardware's quarterly "System Builder Marathons" (yes, they are a bit bubbly, Newegg-based, and dated by publication date at times).
You can compare game frame rates across various ~$500 builds and their variations; the comments are sometimes better than the articles.
Also, minimum frame rates are the most important criterion for playability.
 
The Phenom II X4 beats the FX-4100 for the most part, but the FX-4170 is its equal, and sometimes the better performer, for gaming.

I'm interested to see this. I have never seen any such comparison around, so I derived my opinion from comparisons of the 1100T X6 and the FX-8150. In those comparisons the X6 frequently had similar gaming performance stock vs. stock, and it's important to note that it did so with 2 fewer cores. This leads me to believe that when you compare an equal number of Phenom II cores to an equal number of FX cores, the FX is in a no-win situation, especially, IMO, because the FX does not scale as well in many situations as more cores are used.

Here is one example.

I'm open to changing how I think so long as you can back up what you say. :D
 
He has Guild Wars 2 running in the background.

MMORPGs are notoriously CPU-bound, not GPU-bound. Not a big deal.

GW2 is GPU bound, but when you go into world vs world it shifts to being CPU bound for some dumb reason. Everyone on the GW2 forums is complaining about this very thing, because even the best systems (such as an i7-3930K or 980X) get low fps in world vs world. For me, GPU usage in world vs world is around 20-25%; in the rest of the game, 75-80%.

Here's while in Lion's Arch:
http://db.tt/cXGqpAs8

Here's while in a large fight in WvWvW:
http://db.tt/CR3eUvdb
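If anyone wants to log this instead of eyeballing an overlay, here is a rough sketch that samples GPU utilization once a second while you play; low GPU usage at low fps points at a CPU (or engine) bottleneck. It assumes an Nvidia card, since nvidia-smi ships with their drivers; AMD users would need a different tool:

Code:
# Rough sketch: log GPU utilization once per second via nvidia-smi.
import subprocess, time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    print(time.strftime("%H:%M:%S"), out.stdout.strip() + "% GPU")
    time.sleep(1)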
 

I'm having something like this too, but in WvW I'm crashing with memory errors after being in for 30 minutes (all stock settings, OCs turned off).

But no matter what, I'm always at 35-45 frames, regardless of where I am in the game or what I'm doing.
 
So, I did some testing:

Call of Duty MW2/MW3
Civilization 5 + Gods & Kings expansion

All run at maximum available settings with a GTX 580 at stock and a 4 GHz 3930K with 16GB of 2133 C9 memory in quad channel.


The highest GPU usage I have seen at 1920x1200, including a second monitor running at 1680x1050 with monitoring software: 60%.

Highest CPU usage? 20%. I know this is straying from where the thread was heading, sorry about that :) Just finally got to play some games on my i7 :)
 
The question remains what to expect from a modest rig using an FX-4100 playing GW2 with a HD7950?

I just talked to someone the other day who has an FX-4100 and plays GW2; he's not happy with it. But I think GW2 itself is at fault right now, because a lot of people are having problems with the game: the Halloween event tampers with its performance, and we'll see what happens once they patch out the events. For me, with my X6 1090T, the game ran great from launch up until the October 1 patch, got worse with the October 7 patch, and is now terrible with the Halloween event patches. I'm a heavy WvWvW'er and had no issues in 50+ player battles before October, holding 35+ fps; now I get 11-18 fps in them.
 