
FRONTPAGE AMD FX-8150 - Bulldozer - Processor Review


How happy are you with AMD FX-8150 price/performance?


Total voters: 205 (poll closed)
I work in a Data Center...

Anyway, to discuss the context of data centers: I recently completed a study of our own data center environment, using ASHRAE standards. As I'm sure most know, there is a huge push for the 'green' effort. Because of this (and other factors), temperature thresholds have increased over the past few years, even since that 2008 publication I linked (there is a 2011 one available). With that said, Aphterthawt is right in that some data centers are beginning to use ambient outside air in order to curb cooling costs. You do not have to move to Iceland either, but the cooler the environment, the less you have to spend on HVAC. And certainly, like any other intake to a data center, this air needs to be filtered and evacuated.

It's going to cost more money to keep an 85C processor at 30C than it is to let that bad boy run at 85C... no?
 
All you Data center people read before continuing the argument:

http://realworldtech.com/page.cfm?ArticleID=RWT062011114950

And don't just read the title and come back, read the whole thing.

That is a pretty interesting read. In the data center that we have at the OSUMC, I haven't seen anything that appears to be watercooled (that doesn't mean it doesn't exist, but I don't think they have any applications that require that sort of server to begin with).
 
Our mainframe used to be watercooled. Then they figured out that it was more expensive to run and maintain that watercooling than it was to let the ambient air do its thing.

Oh, and what about the LOE (Level of Effort) and cost to water cool 150 physical servers? How the heck do you route tubing through racks? Not to mention the much higher risk compared to air. It's just not a viable option in today's data centers to use water. But cooler ambient air is getting huge financial backing, and in cooler environments it's easily viable to drill a large hole in the wall and pump ambient, filtered air into the system to take the load off of the internal HVAC.
 
It's going to cost more money to keep an 85C processor at 30C than it is to let that bad boy run at 85C... no?

Or have chips that run natively cooler to begin with. I just don't see the argument for having hotter chips in a data center when there are cooler alternatives that run faster. How can BD excel in server workloads when it runs hotter and gets outperformed by cooler chips to begin with?
 
While that's awesome to use arctic air for cooling, heating is heating... the CPUs and other electrical parts are still producing the same exact amount of heat into the planet. It doesn't matter where you have it producing heat. Now, power generated by dams is another thing...
 
While that's awesome to use arctic air for cooling, heating is heating... the CPUs and other electrical parts are still producing the same exact amount of heat into the planet. It doesn't matter where you have it producing heat. Now, power generated by dams is another thing...

That's a discussion for another thread/forum, but I live in an area with lots of dams, and I miss the fish. A dam upstream means less water downstream. I was flooded last spring, and one reason was that irrigation dams downstream backed the water up. The river I live on (the Silvies River, Oregon) doesn't even get to the lake it used to feed. Nor can I float it, because of irrigation gates, fences across it, and such. Also, they are putting in 300 windmills on Steens Mountain, an incredibly rich and sensitive desert area, with a huge transmission line running across beautiful and unique landscape. No, the answer is to use less power, but with 7 billion people on the planet who expect TVs and computers, there is no escape. I feel bad with my multiple 500 watt computer systems running:cry:.

Right now I can feel the heat from my system as I type this. High-TDP CPUs/GPUs suck and we need better. You're right, heat is heat. My next project is a low-TDP system.:-/
 
While that's awesome to use arctic air for cooling, heating is heating... the CPUs and other electrical parts are still producing the same exact amount of heat into the planet. It doesn't matter where you have it producing heat. Now, power generated by dams is another thing...

Yes, but I believe the point is that you then don't have to use so much energy in cooling the things. If you take a server you can cool with ambient air in a cold environment and move it into a warm environment, you are (for sake of simplicity) doubling your energy use because you have the same energy produced by the server, and now you need the same amount of energy to cool the server because of the lack of cool ambient air.
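The "doubling" argument above can be sketched with some back-of-the-envelope numbers. These figures (IT load, chiller COP, fan draw) are illustrative assumptions, not measurements from any real facility:

```python
# Rough sketch of the free-cooling argument: the same server heat costs
# extra energy to remove in a warm climate, but nearly nothing in a cold one.
# All numbers below are assumed for illustration.

server_load_kw = 100.0  # IT load: heat the servers dump into the room

# Warm climate: a chiller must remove all of that heat. With an assumed
# coefficient of performance (COP) of 3, removing 100 kW of heat draws
# roughly an extra 33 kW of electricity.
chiller_cop = 3.0
cooling_draw_warm_kw = server_load_kw / chiller_cop

# Cold climate: filtered outside air does the job; only fans draw power.
fan_draw_cold_kw = 5.0

total_warm = server_load_kw + cooling_draw_warm_kw
total_cold = server_load_kw + fan_draw_cold_kw

# PUE (power usage effectiveness) = total facility power / IT power
print(f"warm-climate PUE: {total_warm / server_load_kw:.2f}")
print(f"cold-climate PUE: {total_cold / server_load_kw:.2f}")
```

So it is not quite a literal doubling unless the cooling plant is very inefficient, but the cold-climate facility still spends a fraction of the cooling energy for the same IT load.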

As far as Bulldozer goes (back to topic), we actually don't know how well Interlagos is going to perform. Yes, we have an idea from Bulldozer, but to me, it seems where Bulldozer does do well is exactly what a lot of servers need. Plus, if the chip's large power use is in fact seriously affected by an immature process, then having lower frequency chips should help a lot with leakage, so with more cores in a heavily multi-threaded environment, it may turn out to be a good option. I'm not saying it is, I'm just saying I want to see Interlagos in a server environment before I pass judgement.
 
While that's awesome to use arctic air for cooling, heating is heating... the CPUs and other electrical parts are still producing the same exact amount of heat into the planet. It doesn't matter where you have it producing heat. Now, power generated by dams is another thing...
read below
Yes, but I believe the point is that you then don't have to use so much energy in cooling the things. If you take a server you can cool with ambient air in a cold environment and move it into a warm environment, you are (for sake of simplicity) doubling your energy use because you have the same energy produced by the server, and now you need the same amount of energy to cool the server because of the lack of cool ambient air.

As far as bulldozer goes(back to topic), we actually don't know how well Interlagos is going to perform. Yes, we have an idea from Bulldozer, but to me, it seems where Bulldozer does do well is exactly what a lot of servers need. Plus, if the chip's large power use is in fact seriously affected by an immature process, then having lower frequency chips should help a lot with leakage so then with more cores in a heavily multi-threaded environment, it may turn out to be a good option. I'm not saying it is, I'm just saying I want to see Interlagos in a server environment before I pass judgement.
It takes more energy to remove the heat generated by the servers than it does to create the heat in the first place, since A/C is nowhere near 100% efficient. Back at my prior job, we used to have a "datacenter" of maybe 30 populated racks. It took something like 10 tons of A/C (that's an ~35kW draw) 24/7 to keep it cool. Move to a cold environment and that power bill goes away - that's a large cost saving as well as a huge environmental win.
 
read below
It takes more energy to remove the heat generated by the servers than it does to create the heat in first place since A/C is nowhere near 100% efficient. Back at my prior job, we used to have a "datacenter" of maybe 30 populated racks. It took something like 10 tons of A/C (that's an ~35kW draw) 24/7 to keep it cool. Move to a cold environment and that power bill goes away - that's a large cost saving as well as huge environmental win.

But what he's saying is that if you're going to build in the Arctic, there's still no reason to use Bulldozer. You could have even lower cooling costs with the Intel chips that use less power and put out less heat. So there is absolutely no reason to use Bulldozer as it is now in a datacenter environment.
 
The Intel Xeon TDP is up to 135/150 watts. AMD Opterons range up to 115 watts TDP, slightly less than the Xeon. Interlagos may have a slightly lower TDP than current Bulldozers, according to AMD, allowing more CPUs in a smaller footprint.
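Scaled across a rack, even a modest per-socket TDP gap adds up. A rough sketch using the figures quoted above; the socket and server counts are arbitrary assumptions for the sake of the arithmetic:

```python
# Illustrative rack-level CPU power budget comparison.
# TDP figures are the upper bounds quoted above; socket counts and
# servers-per-rack are assumptions, not a real deployment.
xeon_tdp_w = 135
opteron_tdp_w = 115

sockets_per_server = 4
servers_per_rack = 10

xeon_rack_w = xeon_tdp_w * sockets_per_server * servers_per_rack
opteron_rack_w = opteron_tdp_w * sockets_per_server * servers_per_rack

print(f"Xeon rack CPU budget:    {xeon_rack_w} W")
print(f"Opteron rack CPU budget: {opteron_rack_w} W")
print(f"difference per rack:     {xeon_rack_w - opteron_rack_w} W")
```

Of course, TDP is a worst-case envelope, not average draw, so the real gap under typical server loads would be smaller.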
 
Deleted my post because it showed up as a double for some reason.

Anyway, I have found something interesting. Verified buyers have confirmed the BD to outperform the i7 2600 in many instances, though it fails at single threading. My hypothesis is that they built the Scorpius system, OR that FPS averages were insignificant and the FPS range was used to compare. Or someone saw that the performance was neck and neck with an i7 2600 after seeing what AMD intended for it to be.

Also, I am not up to par with the terms B2 or B3. Could someone tell me what those are? I'd rather not trust wiki.
 
Anyways, I have found something interesting. Verified buyers had confirmed the BD to outperform the I7 2600 on many instances, but fails at single threading. My hypothesis to this is they built the Scorpius system, OR that FPS averages were insignificant, and The FPS range was used to compare. Or someone saw that the performance was neck to neck with a I7 2600 after seeing what AMD intended for it to be.

Scorpius is just marketing lingo, and most of the professional reviews used it anyway.

You said 'verified buyers', which makes me think of Newegg reviews, so I went and looked. They are mostly the same excuses you hear all over: "Wait for Windows 8!" "This will be faster than SB when software catches up!" "Bulldozer is from the future!" and so on. It's a load of rubbish, sorry to say.
 
But what he's saying is that If you're going to build in the arctic, there's still no reason to use Bulldozer. You could have even lower cooling costs with the intel chips that use less power and put off less heat. So there is absolutely no reason to use Bulldozer as it is now in a datacenter environment.
True, but the difference in heat output per server would be minimal. Stability and performance will be the deciding factors, and it looks like Bulldozer does perform well when given an appropriate workload.

BTW, I'm trying to get one to play around with at work if they ever come back in stock. It seems like a decent and very cheap way to set up a VM server. However, at home, I'm looking at a 2500k or 2600k for my new rig. I'd really like to go AMD, but their single-threaded performance is pretty bad compared to Sandy Bridge :( (Also, note that my sig is out of date - I'm running a 3.0ghz c2d at the moment)

:edit: Assuming newegg hasn't cancelled the order, we should have an fx-8120 in the office this week!
 
Actually, if you had a quad hex setup times however many servers, that's pretty damn significant.

There is no way I would get BD in a data center...even if I had the right load on it. Just not worth it.
 
Actually, if u had a quad hex setup times however many servers that's pretty Damn significant.

There is no way I would get BD in a data center...even if I had the right load on it. Just not worth it.

The server versions of AMD's CPUs always end up with slower clocks and less power consumption. Then the HEs come out with even less consumption and heat at the same or close to the same clocks.

I have yet to see an Opty 6200 available. But I haven't looked for a little while.

They will probably be a good option but we won't know for sure till they are out.
 
Riiight, but if it's using the BD architecture, I would imagine it's still going to use more than Intel's server chips.
 