
AMD going in the wrong direction?


xreaper101

New Member
Joined
Aug 19, 2012
Hey everyone, this is my first post, so bear with me if I do anything improperly.

I'm looking to build a whole new computer from scratch. For the most part I'm fine with putting it all together, so that isn't the key issue, but I was looking into those new AMD FX chips, and they look absolutely amazing on paper. I liked the quad-core 4.2 GHz (stock) CPU and planned out a whole build, but as I looked into it deeper, I found that even a stock Intel i5-2500 can outperform it in gaming. Next I found out that AMD focused on an aspect (I can't remember exactly what) that video games can hardly utilize, and best of all (oh lordy), NONE of their boards have a PCIe 3.0 slot!! I was really looking at putting a 680 or a 7970 into my next beast, but I don't want to spend all that just to be bottlenecked! :bang head

In a nutshell, can someone help me regain some hope in AMD's gaming dept? I love the high GHz for the low price in all the AMD chips, but I'm strongly considering Intel's new, very pricey socket 2011 with the upcoming Ivy Bridge CPUs that will fit in it (I'm aware of the 1155s that have been released).

Thanks in advance, everyone. You have a great website that I've gotten help from before without ever posting myself.
 
As far as I know, even a stock i5 2500K will be a lot faster in games than either an FX-4170 or FX-8150, even with the latter two overclocked; I've seen countless benchmarks attesting to this. And I hate saying it, because three months ago I was in the same boat as you while building a new computer. All I can say CPU-wise is to wait 2-3 months until AMD releases the Piledriver-based Vishera cores, and let's hope that the high-end Vishera can at least compete with the i5 2500K (since I don't think it's possible for it to compete with Ivy Bridge).

As far as PCIe 3.0 goes, as far as I know it's irrelevant. No modern card fully saturates even a 2.0 slot, let alone 3.0. Don't worry about it.
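The "a card doesn't saturate a 2.0 slot" point comes straight from the nominal link rates. As a rough sketch (nominal per-direction figures from the PCIe spec, not measured throughput), the bandwidth of a link is the per-lane transfer rate times the encoding efficiency times the lane count:

```python
# Rough per-direction bandwidth of a PCIe link, in GB/s.
# Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3 uses 128b/130b.
# These are nominal link rates, not real-world throughput.

GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}        # gigatransfers/s per lane
ENCODING    = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}

def bandwidth_gbps(gen, lanes):
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

# The equivalence discussed later in the thread:
print(bandwidth_gbps(1, 16))  # 4.0 GB/s
print(bandwidth_gbps(2, 8))   # 4.0 GB/s
print(bandwidth_gbps(3, 4))   # ~3.94 GB/s
```

So a 2.0 x16 slot offers roughly 8 GB/s each way, about double what cards of this era were shown to need.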
 
You've essentially posted in here and said Nissan is doing something wrong because the 370Z doesn't stack up against a car costing $30000 more. Though this could be more an issue with the common misunderstanding of GHz: there is a lot more to a processor than that one number the marketing department loves.

At the very least you should be comparing the 2500 to an 8150, as the 8150 has twice as many cores as the 4170 and is still $20 cheaper than a 2500K. If you have the extra money to spend and the needs of a regular computer user, then by all means grab a 2500K or a 3570; they frequently offer better performance to the average user, and they do so with less power draw. The only real downside is that you will pay $30-150 more for the processor and motherboard combination. You should also be aware that the cores in the FX lineup do not scale as well as they would in a more traditional processor, because they share some resources.

As for PCIe 3.0, I didn't think either of the cards you mentioned was bottlenecked by PCIe 2.0, and I'm pretty sure they aren't.

The only thing I've enjoyed about my FX-6100 so far is that the six cores give me a bit more freedom in multitasking, for example folding on four cores while running two EVE Online clients. Even then, I'm kicking myself for not buying the last-generation six-core, which I would choose even over an 8150.

Still, there are some niche things the FX lineup is good at, but overall it hasn't been a great offering.
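The "cores sharing resources" point can be sketched with a toy model. In a Bulldozer-style module design, each pair of integer cores shares a front end and FPU, so the second core of a pair adds less than a full core's worth of throughput. The 0.8 efficiency figure below is an assumed illustrative number, not a measured one:

```python
# Toy model of module scaling vs fully independent cores.
# second_core_eff = 0.8 is an ASSUMED penalty for the shared
# front-end/FPU, chosen only to illustrate the shape of the trade-off.

def effective_cores(modules, second_core_eff=0.8):
    """Core-equivalents for a CPU built from 2-core modules."""
    return modules * (1.0 + second_core_eff)

def effective_cores_independent(cores):
    """Core-equivalents for fully independent cores (ideal scaling)."""
    return float(cores)

print(effective_cores_independent(8))  # 8.0 (ideal 8 independent cores)
print(effective_cores(4))              # 7.2 (8 cores packed into 4 modules)
```

Under this assumption, an "8-core" module design behaves more like seven-and-change traditional cores on fully threaded work.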
 
Well, I'm clueless in one area for sure. Say I want to use SLI or CrossFire; doesn't that demand at least two x16 slots? For the life of me I can't find an AMD board that doesn't say something like "PCI-E 2.0 x16: 2 (x16, x0 or x8, x8)". That may refer to something totally different, but it seems that it splits the lanes between the slots?
 

Current video cards are about the speed of PCIe 1.0 x16 = PCIe 2.0 x8 = PCIe 3.0 x4.

AFAIK all AMD 9xx chipset boards do PCIe 3.0.

AMD is behind right now because they switched to a completely different type of architecture, with the cores sharing parts and being grouped into modules. Since it's a new design in its first stages, it isn't doing so well. However, there is much room for improvement and progress in the future.
 
+1.

Time will tell if it's the wrong or right direction; it's just too early to say. :)
 
At the end of the day AMD are going to trail Intel in most performance metrics for the foreseeable future. Regardless of their innovative designs, Intel's huge lead on process tech over everyone else in the industry will see to that.

AMD's current offerings don't offer much to gamers either but they do come at a great price, overclock well and are decent for heavily threaded tasks. Piledriver is most likely going to offer a 7-10% performance boost and slightly improved clocks (these revisions almost always do) but it isn't going to transform the FX series.

If you are a gamer I would go for Sandy Bridge (Ivy Bridge isn't much faster, is too hot and sucks for OCing). If you are serious about 680 CF then Sandy Bridge E is your best bet though I wouldn't bother myself.

However, none of what I said diminishes the value of the FX series for general-purpose workstations, media-encoding boxes and higher-end home servers (if you have cheap electricity :p ).
 
Well, I'm clueless in one area for sure. Say I want to use SLI or CrossFire; doesn't that demand at least two x16 slots? For the life of me I can't find an AMD board that doesn't say something like "PCI-E 2.0 x16: 2 (x16, x0 or x8, x8)". That may refer to something totally different, but it seems that it splits the lanes between the slots?

My MSI 990FXA-GD80 can do quad SLI or CrossFire and has two x16 PCIe slots. I know because I run two GTX 480s in SLI. I know of NO Intel board that does dual x16. If I'm wrong, gimme an example so I can look it up.
 
I would like to say that GAMER means different things to different people. Intel is better at solo gaming like Crysis and **** like that: single-core games. I play SWTOR (an MMO) and it uses ALL 8 cores, and I will outperform an Intel at the highest graphics settings. I get 100-110 FPS at max settings. I see tons of people with Intels complaining they can't get 60.

There is a difference. Now...start the flames.
 
My MSI 990FXA-GD80 can do quad SLI or CrossFire and has two x16 PCIe slots. I know because I run two GTX 480s in SLI. I know of NO Intel board that does dual x16. If I'm wrong, gimme an example so I can look it up.

Dual x16/x16 doesn't matter at all. Current video cards need only a little more than the bandwidth of PCIe 3.0 x4 = PCIe 2.0 x8 = PCIe 1.0 x16. Running in a 2.0 x8 slot might cost you a few percentage points to the bottleneck; there's NO bottleneck whatsoever running PCIe 3.0 x8.

And all LGA2011 boards can do x16/x16 to my knowledge. Not the same price range, no.

I would like to say that GAMER means different things to different people. Intel is better at solo gaming like Crysis and **** like that: single-core games. I play SWTOR (an MMO) and it uses ALL 8 cores, and I will outperform an Intel at the highest graphics settings. I get 100-110 FPS at max settings. I see tons of people with Intels complaining they can't get 60.

There is a difference. Now...start the flames.

http://www.tomshardware.com/reviews/star-wars-gaming-tests-review,3087-8.html
 
An OC'd i5 2500 vs dual-core Phenoms? Come on, that information is older than I am. Not only that, when that test was done you paid more than twice as much for the i5. And I admit, the 2500K is probably the best performance/price chip Intel has made. But the X6 comes pretty close. Not bad for an antique chip.
 
Well, as far as I know an FX-8150 doesn't compare that badly against a 2500K, and the FX-8150 has 8 threads as opposed to 4. I know the 8150 does a bit worse in single-threaded applications and better in multithreaded ones, so theoretically it would be more future-proof.

But the thing is, I've seen the 8150 on sale for $180 or less, while the 2500K is still $200-209 everywhere I look, and the Ivy Bridge version is even more. I think AMD is close; I'm not sure I would recommend them just yet to a friend. But with the improvements they are bringing in the next iteration (Piledriver), I think they just might be heading into a great position.

Obviously they have a long way to go to match Intel's high-end i5s and i7s, but we really don't need that much CPU power these days anyway. Heck, I'm still seeing people doing AMD builds with Phenom II CPUs.
 
An OC'd i5 2500 vs dual-core Phenoms? Come on, that information is older than I am. Not only that, when that test was done you paid more than twice as much for the i5. And I admit, the 2500K is probably the best performance/price chip Intel has made. But the X6 comes pretty close. Not bad for an antique chip.

I just tend not to believe anecdotal evidence vs tests with the rest of the system kept as equal as possible. That was the most recent lengthy test on SWTOR I've seen. If you have other data, please provide it.

Well, as far as I know an FX-8150 doesn't compare that badly against a 2500K, and the FX-8150 has 8 threads as opposed to 4. I know the 8150 does a bit worse in single-threaded applications and better in multithreaded ones, so theoretically it would be more future-proof.

But the thing is, I've seen the 8150 on sale for $180 or less, while the 2500K is still $200-209 everywhere I look, and the Ivy Bridge version is even more. I think AMD is close; I'm not sure I would recommend them just yet to a friend. But with the improvements they are bringing in the next iteration (Piledriver), I think they just might be heading into a great position.

Bulldozer was a step backwards in terms of IPC (single-threaded performance at equal clock speed), but made that back up with higher clock speeds. It's still slightly behind or equal to the Phenom II X4/X6, and still quite a ways behind Intel.

But that was their move, exactly what you said. They went with this new CPU design because, even if it is disappointing now, it has a LOT of potential to be improved on in the future. AMD was looking way past the current generation when they decided to make Bulldozer.
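The IPC-versus-clock trade-off above is just multiplication: perceived single-threaded speed is roughly IPC times clock. The numbers below are made-up placeholders to illustrate the shape of the trade-off, not measured figures for any real CPU:

```python
# Back-of-the-envelope: single-threaded speed ~= IPC x clock.
# Both designs below are HYPOTHETICAL; the point is only that a
# lower-IPC architecture can claw back ground with clock speed.

def single_thread_perf(ipc, clock_ghz):
    """Relative single-threaded performance (arbitrary units)."""
    return ipc * clock_ghz

old_design = single_thread_perf(ipc=1.00, clock_ghz=3.4)  # higher-IPC baseline
new_design = single_thread_perf(ipc=0.90, clock_ghz=3.9)  # lower IPC, more clock

print(new_design > old_design)  # True: extra clock covers the IPC regression
```

This is why a 10% IPC regression isn't automatically a 10% performance regression once the clock headroom of the new design is factored in.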
 
@ Knufire

SB is a faster chip than an X6 in many (but not all) ways. It's also a much more expensive chip, so what? :)

I would not call 60/74 FPS vs 60/79 FPS, as your own link shows, "ways ahead".

If you have a top-end GPU and play single- or lightly-threaded games that are very CPU dependent, then something like a 2500K/3570K is the chip for you.

If, however, you play multithreaded FPS games, or less CPU-demanding games on any GPU, it makes very little difference which one you have; the same goes for Bulldozer.

Where SB really comes into play is a fairly specific pigeonhole.

With the utmost respect, I think the advantages of SB vs Thuban/Bulldozer, while real, are overstated. :)
 
Gaming-wise, no, there isn't a huge difference, since games aren't particularly CPU reliant. My comments were more in terms of raw computational power than gaming performance. An OCed Bulldozer isn't going to perform significantly worse than an OCed Sandy in games.

Phenom II X6s would be GREAT chips for the money if it were easier to get hold of BE ones. :bang head
 
I know where you're coming from, but to me it's not that simple. :)

On computational power: in x264FHD, a Thuban @ 4 GHz beats a 2500K @ 4.5 GHz, and an FX-8150 @ 4 GHz beats the Thuban.

In fully threaded integer apps, Thuban and Bulldozer beat the 2500K, and they're not that far behind in fully threaded FP apps.

The problem is that almost nothing is that threaded, yet. And when I say yet, I'm not suggesting it's just around the corner. Bulldozer is faster than Thuban and SB if it's given a chance to actually use all of its integer cores on the one job at hand, but neither Bulldozer nor Thuban ever gets that chance.

The problem with AMD is they chose to divide the power of their CPUs up into far too many cores, whereas Intel has got it right for the present time by concentrating it into just a few cores.
 
Everything I said was in reference to single-threaded/IPC performance; sorry if that was misleading. Under a full computational load that uses more than 4 cores, the X6/FX-8xxx will beat an i5, hands down. It's like you said, though: few common tasks are heavily multithreaded. That was part of the new direction they went in with Bulldozer, looking more to the future than to current performance.
 
Everything I said was in reference to single-threaded/IPC performance; sorry if that was misleading. Under a full computational load that uses more than 4 cores, the X6/FX-8xxx will beat an i5, hands down. It's like you said, though: few common tasks are heavily multithreaded. That was part of the new direction they went in with Bulldozer, looking more to the future than to current performance.

Sorry. :) I think it's the title of the thread that I'm more responsive to, which, actually, is a very valid question. Because I do think AMD is going in the wrong direction.
 