
A thought

That was a year ago with Bulldozer. How long should we hold AMD's past against them? Do you think we should ever let them move on?

Intel has 4 integer units and 8 threads. Again, I don't agree that threads are cores; integer units are cores, and threads are one part of the module.
I brought that up because that was the last CPU they released that people expected to "rock", mostly due to poor marketing.

Also, I never said thread = core. But that's just about what AMD did: 4 modules can run with Thubans, but they aren't performing like 8 cores. They are performing like a quad with 8 threads.

They should have come up with a different naming scheme or something.
 
Calling them an 8-core was fine, BUT I think they should have made it clearer to the masses that BD wasn't an 8-core in the traditional sense, and that, in the traditional sense, it was a quad core with 4 additional integer units. Instead they turned "8-core" into a huge buzzword for BD, which inflated the hype surrounding the CPU. So people who use their CPUs mainly for floating point operations were greeted with a CPU that wasn't really an upgrade over existing quads, which brought about a huge disappointment for many people hoping to upgrade their CPUs.

I just think being clearer would have made people expect performance around current quads, or a little better for FP, instead of expecting performance close to twice that of current quads in multi-threaded FP loads.

Also, advertising the wrong transistor count (2B vs 1.2B) didn't help either. That's advertising roughly 67% more transistors than were actually on the CPU, which built up hype as well.
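As a quick check of the arithmetic (a back-of-the-envelope sketch; the advertised ~2B and revised ~1.2B counts are the figures from the post above, and 0.8/1.2 is exactly two-thirds, so the overstatement rounds to 67%):

```python
# Sanity check on the transistor-count discrepancy discussed above:
# AMD originally advertised ~2.0B transistors; the revised figure was ~1.2B.
advertised = 2.0e9
actual = 1.2e9

overstatement_pct = (advertised - actual) / actual * 100
print(f"Advertised count overstates the real one by about {overstatement_pct:.0f}%")
# prints: Advertised count overstates the real one by about 67%
```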

I agree they probably wouldn't have won either way, but maybe the hype wouldn't have been nearly as high if people were expecting a small upgrade from the complete architecture redesign. Basically, they should have tried to mitigate the hype rather than feed off the false hype, in my opinion.

Yeah, perhaps just say 8-core, 4-module, which is what most people can agree on today.

I think the transistor count was more a genuine, if ridiculous, blunder on AMD's part; it's actually somewhat amusing looking back :) Die size vs process size, the numbers don't add up lol.
 
That's where we disagree, I guess. GPUs and APUs are the profitable part of AMD; mainstream CPUs are the disaster. I think that's pretty well known, in fact.

On your marketing accusation, AMD are not currently doing anything of the sort. Please give examples of this.
Currently, no. I was always referring to the hype up to release day :thup:. The disappointment came from that IMO. Now, all the AMD loyal can do is temper them and put out fires. :(
 

Where AMD is 4 steps behind Intel in performance, it would always have been a disappointment. Imagine if AMD had said: "Here is our 8-core chip... only it's not really a true 8-core, as it only has 4 threads, and the overall performance is absolutely abysmal compared with our previous generation, never mind Intel."

It's honest, but somehow I very much doubt that would have made them or it any more popular; there would still be fires to put out.

This is why I'm suggesting that perhaps AMD bow out of mainstream CPUs for a while. Piledriver will probably be an improvement on Bulldozer, perhaps now only 3 steps behind Intel, so it's completely pointless unless it's 50% of the price. That will never happen, as it costs AMD that much if not more to make it (total costs), and the Intel camp is already talking up Haswell, making AMD even more of a "no reason to go there".

Bow out, at least until they have something good on the table, if that ever happens.

Currently it's a cancer to the rest of the brand, and I don't want nVidia with the same domination Intel enjoys today. Perhaps they could have used what they invested in CPUs to sort their GPU drivers out!
 
Yeah... I just don't think they have enough resources to drop BD/mainstream for a couple/few years without hurting GPUs and APUs, though.
 

You're assuming they are making a profit from BD; they're not. The amount of money invested in it could have been better used in GPUs.

Edit, scratch that.... AMD seem to be making an average $100M profit each quarter on CPUs alone http://www.anandtech.com/show/5764/amd-q112-earnings-report-158b-revenue-590m-net-loss . GPUs are also a steady and healthy profit, if somewhat less than CPUs; APUs not so good, with availability issues, but that should be sorted with Trinity.

All in all, AMD look like they are actually in a fairly decent position on their products.

Their losses are not from any of those products, but from their stake in GlowFlow, which they have now pulled out of.

It should not be long before AMD are out of the red and back to a healthy overall profit. So ignore this whole thread :p
 
So what you're saying is, they should have more money to work with now.
That, in theory, should lead to better R&D and better production. Hopefully, that would translate into a better final product also.

Edit: Why do you call it Glow Flow? It's Global Foundries.
 

Yes, i hope so... i think most of us do.

I call it GlowFlow because I'm too lazy to type Global Foundries :p
 
How about GloFo, like the article you linked lol. It is (Glo)bal (Fo)undries, I kept thinking "Who is this Glow Flow company?!"
 
What AMD would be wise to do is plow that $100M right back into R&D.
Screw the profits; attention to profits over R&D is what got them into their current mess in the first place.
Hire some engineers to lay the chips out by hand, for instance. You can get a lot of engineers for $400M/year.

Had they plowed their money back into R&D back in the 939 days they would be far, far, far better off now.
 
Correct... it is an assumption. Isn't your stance an assumption as well? Or do you have a link to financials showing explicitly that BD isn't profitable (I don't think financial statements would show that)? You also have to remember that BD is being sold in the SERVER market as well, so eliminating that also eliminates that (meager) source of income.

Frakk, you may be right, but understand I have no reason to believe you, nor you me. Regardless, if they are losing money on BD, the point is that ANY BD sold is making up for the cost of developing and producing them, and cutting off that revenue stream would be detrimental to their paltry bottom line as it is. Anyway, agree to disagree. :)

EDIT: Here is a line from an Anand article:
In terms of product shipments, Q4 marked the launch of AMD’s Bulldozer architecture. AMD technically began shipping Bulldozer products for revenue in Q3, but Q4 was the first complete quarter. For that reason server and chipset revenue grew by double-digits over Q3, while desktop Bulldozer sales went unmentioned in AMD’s report. Meanwhile compared to Q4 of 2010 AMD’s CPU & chipset revenue was up slightly, with the bulk of the difference due to higher mobile CPU (Brazos and Llano) and chipset sales. Unfortunately for AMD this didn’t do anything to help their ASP for the quarter, and a result it’s flat versus 2010.

Read more at http://www.anandtech.com/show/5465/...-q4-657b-revenue-for-2011#sFbYW1eB2fplfDGP.99

Just going to be tough to specifically point fingers at BD and the mainstream is all.
 
@ ATMINSIDE, Fair point, GloFo it is then.

@ Bobnova, yes; what's more, I think that is a must. Investment in the future requires much better performance than they have now; investment costs money, and they should use every penny of it.

@ EarthDog, an assumption on my part, yes. But I was pretty sure I had read somewhere that AMD were making losses on desktop products due to development costs outweighing revenue.
 
For CPUs before BD, a typical "core" is made up of an integer unit and a floating point unit. BD has 8 integer units and 4 floating point units, so it's like it has 4 typical "cores" with 4 extra integer units. That's why it performs like a quad rather than what the 8-core hype led many people to believe before the release.

This is not entirely true; it really does come down to a matter of perspective. A 3-module, 6-core BD can execute 6 128-bit floating point operations, just like a 6-core Thuban. If this were not the case, Trinity would not be anywhere near parity with Llano, core-for-core in flops, even in synthetics: http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-trinity-apu,3241.html

To explain: each module has a 256-bit FPU shared between two cores. However, if no 256-bit operation is requested, that 256-bit FPU can be split into two 128-bit FPUs, one dedicated to each core. So you have up to 8 cores with 4 FPUs, but the possibility of 8 FPU operations in parallel (actually more, as each 128-bit FPU can execute more than one flop at a time, but that's another discussion).

AMD's perspective is that you have 4 modules with two 128-bit FPUs each, which can be used in conjunction for a single 256-bit call, or 4 256-bit operations across the chip. So AMD sees it as 8 cores, the same as Thuban, with the added option of working like 4 256-bit cores. It's a matter of perspective.
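The two perspectives can be put into a toy model (a sketch only, not AMD's actual scheduling logic; the per-module FPU widths follow the FlexFP description above, and the function name is just illustrative):

```python
# Toy model of Bulldozer's FlexFP: each module has one 256-bit FPU that can
# serve either one 256-bit op or two independent 128-bit ops per cycle.

def peak_fp_ops(modules, op_width_bits):
    """Peak FP operations issued per cycle across all modules (toy model)."""
    fpu_width = 256  # one shared 256-bit FPU per module
    if op_width_bits == 256:
        return modules * 1                    # whole FPU per op
    elif op_width_bits == 128:
        return modules * (fpu_width // 128)   # FPU splits into two 128-bit halves
    raise ValueError("unsupported op width")

# 4-module ("8-core") Bulldozer:
print(peak_fp_ops(4, 128))  # 8 concurrent 128-bit ops: the "8-core, Thuban-style" view
print(peak_fp_ops(4, 256))  # 4 concurrent 256-bit ops: the "quad with wide FP" view
# 3-module, 6-core part matches a 6-core Thuban on 128-bit ops:
print(peak_fp_ops(3, 128))  # 6
```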

http://blogs.amd.com/work/2010/10/25/the-new-flex-fp/

Where it gets complicated is that, because of the shared resources (not just the FPU but pretty much the entire front-end), you don't get the full performance from each core. BD has a wide pipe but can't pump enough water to fill it. AMD's goal was something like 85% of full per-core performance with shared resources; they did not hit this with BD. They are working on getting the shared-resource performance up, and I'm sure also working with MS and Linux distros to better use the new architecture.
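To put that 85% target in context, a rough back-of-the-envelope model (the 85% figure comes from the discussion above; the 75% case is purely an illustrative number, not a measurement):

```python
# Rough throughput model: N cores that each keep only a fraction of a
# standalone core's performance because of shared module resources
# (front-end, FPU, caches).

def effective_throughput(cores, per_core_fraction):
    """Aggregate throughput in 'full core' equivalents (toy model)."""
    return cores * per_core_fraction

# AMD's stated design goal: each of 8 cores retains ~85% performance.
design_goal = effective_throughput(8, 0.85)   # 6.8 core-equivalents
# A hypothetical shortfall if contention drops that to 75% (illustrative):
shortfall = effective_throughput(8, 0.75)     # 6.0 core-equivalents

print(design_goal, shortfall)
```

Even at the design goal, an "8-core" module design behaves closer to 7 independent cores than 8, which is the trade-off for the die space saved.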

The advantage of this design is that it saves you a lot of die space for how much performance you must trade off, assuming you execute well. This allows for a much higher core count without too much sacrifice in lightly threaded scenarios. The other advantage is that this direction could eventually lead to a real "fusion" of CPU and iGPU, where the GPU takes over a lot of your floating point operations and would be much faster than a CPU trying to execute them. Whether or not this comes about, or more importantly, whether or not AMD can execute on all of this, is yet to be seen, but it is at least a bold step by AMD to try and predict the future of computing and beat Intel to the punch.
 
The world economy is the stage for both Intel and AMD. And AMD got their start as a sub-contractor of Intel (more or less). When the first 486s came out, AMD didn't get the contract to continue etching those parts, but maintained rights to the 386-or-earlier blueprints (lawsuits were involved). But who wanted those anymore?

Not that anyone has noticed, but there is a world recession, if not depression, going on. Enthusiasts manage to carve out some bucks to keep the factories going on high-end parts, but most of the world's consumers need to have some lower-priced tech available. AMD can sell APUs to this segment, along with the emerging markets for higher tech on everything from autos to boats, all-in-one computers, web-surfing grannies, and some applications that are only now developing, such as next-level medical equipment (think X-ray machines that image in real time). Intel is in these markets too and may continue to thrash AMD at every turn, but AMD is putting high-end CPUs on a back burner, it seems, as 1) they aren't that good at it at this time, and 2) they need to generate profits, which their APU and GPU products still do.

Intel is not without their faults. Such high-tech products invariably come with some hiccups. And since they have the lead for now, they can spend more time getting those hiccups ironed out before product launches, much more easily than if they were desperate to get to market firstest with the mostest.

For another example, look at Ford vs. Chevy. Chevy had a much more aggressive muscle car lineup than Ford back in the '60s and early '70s, and they pretty much owned the car enthusiast market, i.e., the average enthusiast Joe out there. Today Chevy is hanging on with gubmint bailouts, while Ford has continued developing decent mainstream cars, has a couple of performance models available, but has profited enough not to need gubmint money. The market changed to Ford's favor for many reasons, but mainstream consumer products outsell enthusiast products year-in and year-out for any car maker, or virtually any other manufacturer of any product. Think about how much money you spend on paper for your printer: how much is plain Jane paper and how much is clay-embedded high-precision print paper?

Marketing folks tend to always laud their products to the max. After all, who would actively seek a product that was advertised as "pretty good" or "average"? Thus AMDs BD credibility problem. :blah:
 
snipped for length

Aha.... now Bulldozer is making real sense. Thanks for that very informative post :)

That's actually quite clever, if they can get it to work properly.
 

I have read what "]-[itman" wrote before, but never quite so eloquently put.

The "working properly" part is pretty certainly not a reality at this time. Certainly not for us hangers-on to ATX-type motherboards. I can remember Ford's history when the Edsel came to market. It was a Ford with extras. The public did not want it, and it was gone in 2 years.

The AMD consumer public wanted performance, and they did not get it with BD. What WE, the run-fast group, got was a hot, lesser-performing product that came with an attached pie in the sky: out there in the future, perhaps better performance as the programming is amended to work better with AMD's new architecture. If that can even happen with better software utilization.

I expect that AMD's new architecture will work better with servers and more industrial-style applications. Stop and think about what that is really saying. We, the old die-hard clockers, are not nearly the force that moves the conglomerate machine, which needs a profit in the global market. We are a dying breed and cannot spend enough in our lesser numbers to fully influence what a CPU producer is going to do.
 
So what would you have AMD call them? Would you have them call it a 4-core? And isn't it just silly semantics either way? At what point is a core a core, and the module around it a module? Is the core made up of 1 part? 2 parts? 3 parts? When I think of a core I think of one part at the heart of everything else: the workhorse, the engine, to use a metaphor.

Why would I, for example, want to buy a 4-core when I have a 6-core, or a 4-core over a 3-core... and what would anyone want with a dual core? The FX-8 acting as an 8-core beats my 6-core, so what's wrong with that?

I don't think AMD could have won no matter what they called it. Maybe they should have added another thread to each integer unit instead of enlarging it; then they would have what everyone can agree on, an 8-core Phenom.

You're confusing "integer core" with "core". Traditionally, and this goes WAY back, a "core" had an "integer unit" AND a "floating point unit". The integer unit was never called a "core" by either Intel OR AMD until now. What AMD did was base the Bulldozer design on the clustered integer core design by DEC from back in '96 in their RISC Alpha 21264 CPU. They did this with the reasoning that the FPU is largely underutilised most of the time during normal use.

Now remember, this is a completely new "from the ground up" design and the first major redesign since 2003 (which wasn't even as huge a shift in design as this is). Prior to this, the engineers hand-designed the blocks and then a program would copy those throughout the design, but it was still hand-designed. With BD they went to an all-automated design process, which, as an ex-AMD design engineer has stated, creates a 20% larger and 20% slower chip than a handcrafted design. That is probably why there was a large amount of firing done at AMD last year over BD.

But the good news is that they now have a design that can be fine-tuned by going over it, removing the inefficiencies created during the automated process, and improving it along the way. I think that's why AMD is so confident that it will get a minimum 10-20% improvement with each gen (BD to PD to SR). They're probably going over sections of the design at a time to improve it, as well as adding changes (resonant clock meshing etc.).

The fact that they created a completely new CPU design AND got within spitting distance of Intel's 2600K, traded blows with the 2500K, and did it all in a relatively short amount of time using the automated process is really amazing. So when people "bash" Bulldozer, it really shows how little they know about the whole CPU design world. Remember how everyone called the P4 "horrible", yet the same SB CPUs everyone applauds today are the product of years of fine-tuning and improvements made along the way.

BD is really showing how much potential it has, yet people are blinded to that by so much "brand loyalty", or by disappointment created by over-hyped expectations.

Read this for some good info - http://www.xbitlabs.com/news/cpu/di...x_AMD_Engineer_Explains_Bulldozer_Fiasco.html

Edit - As an added thought, it's CRAZY to me that people are sounding the death knell for AMD simply because its new architecture is "slightly" slower than the comparable Intel CPU. Think about that for a moment. Every last one of us has a CPU that is basically overkill for everyday normal tasks, whether you have an Intel or AMD CPU. Even in gaming, all you really need is a 4GHz or higher quad core (even dual cores or "modules" do fine in most things) and a good GPU and you're going to be perfectly fine (and I dare anyone to prove me wrong). The rest is just benchmark bragging rights in forums, or a way to try to make people who own the competing company's CPUs feel inferior for not buying what you did.
 
snipped for length

I'll just comment very briefly on the synthesized design for now. I remember when that article first came out, or at least the comments made by the ex-AMD engineer. There were a few problems with his comments. First, if memory serves me from back then, some people did some digging and found out that he left years before BD was ever released. He could have been privy to some of the concept and basic architecture ideas/designs at the time, but anything could have happened after he left. Even if he had still been at AMD at the time, if he wasn't directly on the BD team, he most likely didn't have direct access to that info anyway.

Second, stemming from the first point, he is now an outsider at AMD, just like the rest of us. He may still have some buddies there, but really, his sources probably wouldn't be much better than what you get at reputable tech sites. With that in mind, he is probably mixing up BD with Bobcat. Both would have been referred to as next-gen designs while he was there, and Bobcat was even touted by AMD as being largely synthesized: http://techreport.com/discussions.x/17948 In that article you'll even notice a familiar 20% figure when AMD talks about Bobcat.

Third, the xbitlabs article points to the transistor count of BD as evidence of synthesized design; however, that count was since heavily revised, cutting the number of transistors nearly in half: http://www.anandtech.com/show/5176/amd-revises-bulldozer-transistor-count-12b-not-2b And as you can see from the table there, an 8-core BD with 5 MB more L2 cache and 2 MB more L3 cache still comes in physically smaller than a 6-core Thuban. Even if you shrank a Thuban and added two more cores plus BD's added cache, the Thuban would still be bigger.

Fourth, synthesized design is the way the industry is going, even Intel. To what degree is another matter, but everyone uses it to some extent at this point.

Fifth, it is way past my bed time and I shouldn't be posting when this tired. Good night all.
 
snipped for length.

My point is still valid, though. Think about what would have happened if all the naysayers back in the P4 days had been running things and Intel had just "quit" because "their design sucks" (to paraphrase what was said back then). People wouldn't be enjoying those SB and IB chips they're running now. Not to mention that with BD there really isn't as HUGE a difference in performance as some would have everyone believe. BD, much like the P4, just needs time for the architecture to mature.
 
@ RGone, sadly yes, we are a dying breed, and perhaps AMD are thinking more about servers than about us. AMD's acquisition of SeaMicro for $334 million is a pretty good clue to their direction.

Just as long as the chips they want to sell me do what I want them to do, I don't actually care about AMD's true calling in this age of cloud computing, nor do I care if they are 20% per core behind Intel on the desktop.

That 20% is surplus to requirements, and I have more cores to play with.

Last night I was having one of those BF3 rounds where everything was going my way: no hackers, and everyone working as a team. I was deep into a round on Wake Island and thoroughly enjoying myself, easily pushing 70 FPS on Ultra minus MSAA all round long, and then I was rudely interrupted by a *bing* noise. I hit Escape, then Alt+Return, and was reminded that I was re-encoding Iron Sky from Blu-ray to DVD in AVS VE so my mother could watch it. The bing was it telling me it had finished, and it was the only thing noticeable about that workload going on in the background.

AMD underperforming? I don't think so :)

I don't care about SuperPi; I don't care about Intel's i3 vs some AMD chip in some selective game running at mobile-phone resolution... I don't care about any of that stuff anymore.

All I care about is: what will the chip do for me?

Bulldozer is too warm and too power hungry, but that's really all that's wrong with it.

If AMD can fix that at the right price, they will have sold me an 8-core chip.

And I think more people are actually sick and tired of synthetic benchmarks that don't really tell them anything; hence Bulldozer has not burnt AMD.
 