
FRONTPAGE AMD FX-8150 - Bulldozer - Processor Review


A couple key parts of the Ars article linked above that I agreed with:

For game developers, there's an added wrinkle to using the GPU for computation: games already use the GPU for graphics. Moving workloads away from the CPU just means overtaxing the GPU.

AMD's dreams may come true, but the change won't happen during Bulldozer's life. The software of today benefits from strong single-threaded performance, and it benefits from giving the CPU plenty of floating point resources. The same will be true of the software of tomorrow. Piledriver, Bulldozer's follow-up, will also be too soon for this kind of software. So will Piledriver's 2013 successor, Steamroller, and its 2014 follow-up, Excavator. Innovation and progress in the computer world is fast in some regards, but extremely conservative in others; just look at the number of people still using the decade-old Windows XP. The kind of revolution that AMD is counting on could easily be ten years away, if it happens at all.

Intel's approach—to have fewer, wider cores (and, with HyperThreading, to share entire cores between threads)—will continue to give its processors the lead in most workloads for the foreseeable future. It will continue to be a much better match to the software that actually exists, rather than the software that AMD would like to exist.

That is pretty much what I have been thinking. People keep saying that this chip is made for the future, but the future is going to depend on how much (or whether) developers move away from single-threaded coding and write code that scales across multiple cores. It also seems that Piledriver could be a much better solution, as it should have its own FPU available to assist in calculations (or to offload them to a discrete card -- although for games that isn't much of an option unless you had a card akin to a PhysX card just for compute...)

Longer term, AMD has started talking up Bulldozer's first revision, Piledriver. Due next year, AMD projects that Piledriver will be about 10 percent faster than Bulldozer currently is. Piledriver will change some of the execution units to support additional floating point instructions, but is not expected to be a major overhaul of the processor's design. Where the 10 percent gain comes from is unclear (you don't gain 10 percent improvements on existing workloads just from adding support for extra instructions), but fixing some of the obvious problems—slow cache, insufficient execution resources—could be the answer.
 
I honestly don't understand WHY developers have not moved on to scaling yet. This technology has been out for years and is pretty much mature by now. But only 3D software and some design suites actually use all threads.

It is honestly not hard to code for, and the compilers will do most of the work for you. I can understand that most of today's basic applications won't see much benefit, but heavy work like games, development, computing, etc. would. I can understand that finding errors could be troublesome if a bug spreads across the processors, but dammit, move on to the future! Hardware advances double every year, but software ALWAYS remains the same, or changes every five or six years.
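To the point above about libraries doing most of the work, here is a minimal sketch of the pattern: split an embarrassingly parallel workload into one chunk per core and hand the chunks to a worker pool. The workload (`simulate_chunk`) is a made-up stand-in, not anything from a real game engine; note also that in Python the GIL limits CPU-bound speedup from threads, so for real compute you would use processes -- in C/C++/Java the identical pattern uses real threads.

```python
# Sketch: fanning an embarrassingly parallel workload out across cores.
# simulate_chunk is a hypothetical stand-in for per-frame work
# (physics, AI, etc.) -- not code from any actual engine.
from concurrent.futures import ThreadPoolExecutor
import os

def simulate_chunk(chunk):
    # Stand-in for compute-heavy per-chunk work.
    return sum(x * x for x in chunk)

data = list(range(1_000_000))
n_workers = os.cpu_count() or 4

# Carve the data into one interleaved slice per worker.
chunks = [data[i::n_workers] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(simulate_chunk, chunks))

total = sum(partials)  # same answer as the single-threaded loop
```

The point of the sketch is that the splitting and joining is a few lines; the hard part the posters are debating is restructuring game logic so the chunks are actually independent.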
 
Price per thousand.

On the CPU price per thousand: if I'm reading the price list wrong, let me know. At $245 per unit in thousand-unit quantities, a price of $275 gives a markup of ~12%; that seems about right. I never saw anything that said the price was MSRP. Point it out to me.
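The ~12% figure checks out; a quick sanity check using the two prices quoted in the post:

```python
tray_price = 245.0  # per-unit price in 1,000-unit trays, from the post
retail = 275.0      # street price quoted in the thread

markup = (retail - tray_price) / tray_price * 100
print(f"markup: {markup:.1f}%")  # about 12.2%, i.e. the ~12% above
```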

You would think that with the cool reception to FX/Bulldozer they would hit the lowest price point they could, but they have a specific formula for pricing the CPUs. It has to do with consumer welfare (money to spend), competition, quality of product, inventory of previous models, product life, return on investment, and so forth. Here's a rundown. A bit much, I know, but it shows how complicated the pricing of CPUs is.

http://econ-www.mit.edu/files/6981
ftp://128.151.238.65/fac/MSONG/papers/rand-song-rev3.pdf
 

$275 for BD or $179 for a 2500K. It's simply embarrassing.
 
Yeah, the price is high. They (AMD) are talking with Microsoft about Windows 8 and better performance with FX/BD. Hopefully something will be worked out. At the present time, the CPU is like a time machine: it may work better in the future. Part of the problem is it was designed by machine, not hand-crafted like Sandy Bridge. The lack of humans shows a lack of art. AMD doesn't have the money to hire a lot of top-notch engineers, nor did they have the time to put into it. Other companies with more money hire the best engineers, like Google, MS, even Facebook. Those companies have the best antivirus engineers too, leaving third-party companies to hire what's left. GlobalFoundries had their problems and it left AMD holding the bag (of CPUs).

"Now I am become Death, the destroyer of worlds." The Bhagavad Gita, a great story. Oppenheimer quoting Lord Krishna.
 
One would think that with machines there would be less error in the process, and that smaller parts could be placed with precision. But this needs to be worked out at GlobalFoundries. Maybe if AMD actually made their own CPUs for a change they would be better?

But I can see why they can't. Their location would raise public concern, and the amount of pollution would definitely rile up the neighbors in Arlington, Texas.
 

I am pretty sure AMD owns Global Foundries...

EDIT: as in AMD is Global Foundries parent company.
 

AMD sold Global Foundries
QUOTE
GlobalFoundries Inc. is the world's third largest independent semiconductor foundry, with its headquarters located in Milpitas, California. GlobalFoundries was created by the divestiture of the manufacturing side of AMD on March 2, 2009

LINK: http://en.wikipedia.org/wiki/GlobalFoundries


What Does Divestiture Mean?
The partial or full disposal of an investment or asset through sale, exchange, closure or bankruptcy. Divestiture can be done slowly and systematically over a long period of time, or in large lots over a short time period

Read more: http://www.investopedia.com/terms/d/divestiture.asp#ixzz1bTxkKIS5
 

I stand corrected. thanks for the info. :thup:
 
Part of the problem is it was designed by machine, not custom made like SandyBridge.

Be careful. It's a bad idea to start reciting that as if it's proven fact - it's anyone's guess whether that interview is real or whether the guy in it actually knows what he's talking about. I also have my doubts that Intel didn't use any automated layout tools for Sandy.
 

Of course Intel did. But you have to agree that at this point Intel builds a cleaner-running CPU. I think part of the reason is the engineers. I used to know an engineer at Intel who worked on one of the CPU projects. If he was typical of Intel talent, I can see how they design good chips. Plus, Intel is very aggressive and spends (a lot more) money on talent and R&D.

Disclaimer; I'm not an Intel fanboy. My daily driver is AMD.
 

I always try to favor AMD to keep the underdog in business, and because I always get good value. I lean towards Intel in situations like this, but I already committed to AM3+. I still can't get over how upset I am that I invested in an AM3+ board. I didn't have to leave my 790XT board behind, as it was very stable for my needs.

Anyway, I love Windows 7 and don't want to switch, but if at some point in the future BD + 8 becomes a good combo, I may just settle for it.
 
If Piledriver somehow does not give the desired yield, I can see Apple swooping in for the kill and purchasing AMD. It's very embarrassing that AMD has to go to TSMC to fulfill market demand, because its main fab, GlobalFoundries, is lacking in almost every regard. This, I suppose, explains the sacking of a couple of executive VPs a month ago.
 
So is it poor production yields from GlobalFoundries that's causing the BD performance issues? Is it supposed to run in the 4GHz+ range at default? I know I've read that it's supposed to run in the 4.2 - 4.7 GHz range.
 
I understand it is 32nm wafer production that is putting the crimps on BD. AMD has a WSA, a wafer supply agreement, with GlobalFoundries that specifies clean, usable wafers. GlobalFoundries is ~2 years behind Intel in its ability to make clean 32nm wafers. Intel is transitioning to 28nm technology and will put AMD even further behind. AMD was buying any 32nm wafers from GF but has since required only clean wafers, hence the bottleneck.
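The "clean wafers" point is really about die yield. A common first-order rule of thumb is the Poisson yield model, where the fraction of good dice falls off exponentially with die area times defect density. The numbers below are illustrative only (the ~315 mm² figure is roughly Bulldozer's published die size; the defect densities are made up, not GF's actual figures):

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """First-order Poisson model: fraction of dice that are defect-free."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

die_area = 3.15  # cm^2, roughly Bulldozer's ~315 mm^2 die

# Two hypothetical defect densities: immature vs. mature process.
for d0 in (0.5, 0.1):  # defects per cm^2 (assumed for illustration)
    print(f"D0={d0}: yield ~ {poisson_yield(d0, die_area):.0%}")
```

Even with made-up inputs, the exponential shape shows why a big die on an immature process is so painful: the same defect density that is tolerable on a small die can scrap most of a large one.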
 

Intel has transitioned to 22nm.

http://www.tomshardware.com/news/amd-ibm-intel,6175.html

:popcorn:

AMD is already prepping their 28nm 7-series GPUs.
http://en.wikipedia.org/wiki/Southern_Islands_(GPU_family)
 
Thing to remember: AMD has always been at least one full process shrink behind Intel. The problem is, this time around Intel has Tri-Gate tech as well, so I think it's fair to say they're at least 1.5 years ahead of AMD in manufacturing once they've moved across to 22nm.

Heck, as far as I know the best AMD can offer is 28nm on TSMC's process, and that's only for their GPUs....
 
I hope you guys know that 32nm isn't a walk in the park, and going smaller than this is not any easier. We are close to being an atom thick in certain wall structures of the transistor. I can't see how AMD can keep up with these pricing schemes on their CPUs.
 