
AMD vs Intel

AMD was running in supercomputers not because its per-core performance or total per-CPU throughput was higher, but because it was possible to put more cores (and CPU sockets) on a single board. Simply having more cores in the same space gave it an advantage in calculations, not to mention a much lower cost per core. Another thing is that earlier AMD chips had lower latency on data transfers between cores, so they were actually good at handling many small operations at the same time.

Before the Core 2 generation, Intel was the better choice because it was more reliable; after it, because it offered higher performance in most operations. By reliability I mean the quality of motherboards, availability of spare parts, etc., which was always lower for AMD.

In the 12 or so years I've worked in IT, I have never seen AMD hold more than ~15% market share in business products on the EU market. AMD was always associated with home entertainment, and people still see it that way.
Right now I don't think anyone would choose AMD over Intel for work. AMD's business division is practically dead, and the same goes for servers and laptops.
Personally, I don't know a single distributor in central Europe that offers AMD-based servers from stock, and I've worked in distribution for a couple of years. In Poland I haven't seen an AMD-based server for 6-7 years. People simply don't trust AMD enough to buy it for business, not to mention the lower performance compared to Intel.
The biggest server manufacturers don't have anything AMD-based in mass production. A couple of years ago IBM kept one, literally one, AMD server series because of some specific operations that ran faster on AMD, and that was back when the first Phenoms were on the market.

Yes, very true. Core count was what drove AMD on the server side for a long time. A high core count can help a lot with multi-core programs that are integer-based.



I run CFD programs. Why can't any of my software use the hyper-threads on my Intel CPUs, but it can use all 8 cores of an FX?
How the hell do they fit over a billion transistors on a chip?? That's a billion of something I can hold in my hand!!!!
Folding@home uses our GPUs to compute stuff. I just bought some very high-dollar software that will be running on Quadro 6000Ms. Will this type of software be coming to the home PC, or is the cost just too much?
Holy mother of god, I could fly with Creflo Dollar for the cost of this stuff over the next ten years!!!!

So many questions! :D

CFD Programs: These programs rely on floating-point operations to solve huge systems of differential equations. A couple of my mech-e friends took fluids; their stories gave me night terrors. Whether you're on AMD or Intel, you have one FP unit per core. A hyper-threaded CPU does not have a second set of hardware to run instructions on. Rather, it's a clever way of sneaking instructions in between instructions already set up in a queue. Think of it like Disney World's FastPass system: someone in a completely different part of the park is waiting in line with you, but you don't know it. Right before you get on the ride, they jump in front of you and enjoy the ride with a brighter smile on their face than you. Pretty much the same thing is happening. Intel's CPUs think very hard about how they place instructions in the queue and when an instruction should be accelerated.
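The same point can be shown with a toy Python sketch (a model, not a benchmark, with made-up cycle counts): hyper-threading adds a second instruction stream, not a second FP unit, so two FP-heavy streams on one core still contend for the same hardware.

```python
# Toy model: a physical core has one FP unit. Hyper-threading adds a
# second instruction stream, not a second FP unit, so FP-bound work
# from both hardware threads serializes on it.
# All numbers here are illustrative, not real CPU parameters.

def fp_bound_cycles(streams, fp_units=1):
    """Cycles to retire FP-bound instruction streams when only
    `fp_units` FP ops can issue per cycle on this core."""
    total_ops = sum(streams)
    return -(-total_ops // fp_units)  # ceiling division

# One thread with 1000 FP ops on one core:
single = fp_bound_cycles([1000])
# Two hyper-threaded siblings, 500 FP ops each, sharing one FP unit:
smt = fp_bound_cycles([500, 500])
# Two separate physical cores, 500 FP ops each, running in parallel:
two_cores = max(fp_bound_cycles([500]), fp_bound_cycles([500]))

print(single, smt, two_cores)  # 1000 1000 500
```

The HT case finishes no faster than the single-threaded case, while two real cores halve the time, which is exactly why an FP-bound CFD solver gains nothing from logical processors.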

Back to the CFD problem: the reason hyper-threading does not help is that the CPU cannot find a faster way to process the instructions it's already fetching. Also, if an instruction did use HT, it would end up on the same core anyway. You are most likely just creating overhead on your system when you have hyper-threading on; at least this was true from the P4 through Nehalem days.

Why does the AMD 8-core get utilized but not the Intel? I already covered Intel; AMD is a bit different. Since its cores are paired up as two logical processors, they can execute more instructions. AMD's Bulldozer architecture is unique: instead of cores, AMD calls them modules. Two cores share a front end that assigns instructions to the two integer cores, and those two cores also share one FP unit. Thus a module has one front end, two integer cores, and one FP unit. Your CFD calculations will also require simple instructions to move data around the system; the integer cores handle this while the FP unit is executing instructions. This does not mean that AMD CPUs are more efficient than Intel's. Although AMD can theoretically execute more instructions, the per-core performance is lower than Intel's.
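The module layout above can be sketched the same way (illustrative cycle counts only, and a deliberate simplification that integer and FP pipelines fully overlap): integer work runs on two cores in parallel, while FP work funnels through the single shared FP unit.

```python
# Illustrative model of a Bulldozer "module": two integer cores that
# run in parallel, plus one shared FP unit both cores funnel through.
# Cycle counts are made up for illustration.

def module_cycles(int_a, int_b, fp_a, fp_b):
    int_cycles = max(int_a, int_b)   # the two integer cores overlap
    fp_cycles = fp_a + fp_b          # the shared FPU serializes FP work
    # Simplification: integer and FP pipelines overlap with each other.
    return max(int_cycles, fp_cycles)

# Integer-heavy work scales across both cores of the module:
print(module_cycles(800, 800, 0, 0))      # 800
# FP-heavy work (like CFD) serializes on the shared FP unit:
print(module_cycles(0, 0, 800, 800))      # 1600
# Mixed work: integer "data moving" hides behind the FP stream:
print(module_cycles(400, 400, 600, 600))  # 1200
```

This shows both halves of the argument: integer-bound workloads see near-perfect scaling across a module's two cores, while FP-bound workloads behave more like a 4-FPU chip than a true 8-core.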

More on AMD vs Intel in previous posts:
AMD Modules vs Intel Hyper-threading

Transistor Count: I'm going to answer this in another post.

CFD Software: I'm not familiar with these types of programs. I made sure I never got near them because the math scares me; math-heavy programming is not my strong suit. Speaking from experience with industry-standard tools, though, most of them will never get to the point where they can be run efficiently on consumer-grade computers.

Those Quadros will do some work with your programs. GPUs compute FP efficiently thanks to their large number of cores.

That software costs a lot because very smart mathematicians and programmers figured out the best algorithms to compute and display those results efficiently.
 
Ah, never mind. I just did some quick googling and found some interesting facts about my question.

I thought this was a good read.
http://www.xbitlabs.com/news/cpu/di...x_AMD_Engineer_Explains_Bulldozer_Fiasco.html

Found it funny that AMD's chip ended up with 800 million fewer transistors per CPU.

That would explain why my 32nm Llano can produce a bigger bang for the transistor count. A Phenom II quad is still a better CPU than an FX quad; the only thing the FX 8-core offers is an extra 4 cores.

Step forward... or step backward?

Is the FX module design new... or based on old designs that didn't work out?

Meh, doesn't matter. Per core, Intel has more transistors, utilizes instructions better, and has a better IMC and better floating-point performance...

I really don't see why we would compare a turnip to a walnut.

So this article is a bit misleading. The fact that AMD went SOC was a very controversial move. Yes, it has allowed AMD to easily bring different parts of its business together, but automated systems can be limited in their abilities. They're not all bad; some are very smart, but their computation time costs a lot of money. You need supercomputers to run path-finding solutions over CPU architectures. The number of variables engineers have to overcome is so large that it's hard to focus on all of them, yet the CPU still needs to be built; big decisions about which problems get higher priority ultimately determine how the CPU is designed.
 
Yes, and the "why" is exactly the question I'm asking.

When it comes to games, the CPU doesn't matter too much. Games are just programs, and the engines they run on define how they interact with the CPU. Blizzard's World of Warcraft was actually redesigned at one point so that it could use multiple cores/threads depending on which CPU it runs on. Older engines, like the original COD engine, do not utilize multiple cores too well and rely on the firmware to handle the threading.

All in all, CPUs do not matter much when it comes to games. AMD vs Intel does not matter unless the engine is designed to work with one of the manufacturers.
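As a rough sketch of what "the engine defines the interaction" means (job names and sizing are invented for illustration): a modern engine can fan its per-frame jobs out to a worker pool sized to the host CPU, where an older engine would hard-code a single thread.

```python
# Hypothetical sketch of an engine fanning per-frame jobs out to a
# pool sized to the host CPU. The job names are invented; real engines
# use far more sophisticated job systems than this.
import os
from concurrent.futures import ThreadPoolExecutor

def run_job(name):
    # Stand-in for real per-frame work (physics step, AI tick, ...).
    return f"{name} done"

frame_jobs = ["physics", "ai", "audio", "particles"]
# Size the pool to the machine instead of assuming one core:
workers = min(len(frame_jobs), os.cpu_count() or 1)

with ThreadPoolExecutor(max_workers=workers) as pool:
    # map() preserves job order even though jobs run concurrently.
    results = list(pool.map(run_job, frame_jobs))

print(results)  # ['physics done', 'ai done', 'audio done', 'particles done']
```

On a quad-core FX or an HT-enabled Intel chip this same code simply picks up more workers, which is the kind of CPU-agnostic design the WoW rework moved toward.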
 
It really depends, honestly. I'm not sure exactly what was done coding-wise, but there are several games where the AMD processor takes a back seat with no obvious sign of being "designed to work with one of the manufacturers". There are genuine differences between the CPUs that coding for one or the other can't make up. Not to mention other variables such as resolution, the title, and settings come into play. The lower the resolution, the more CPU-bound you are, particularly with high-end GPUs. Also, in almost all multi-GPU configs, the Intel processor gets more FPS at the same clocks than AMD, and pretty darn consistently.

There is, without a doubt, a difference between the two processors performance-wise, regardless of coding.

BTW, wingman, there is a 'thanks' button you can use instead of/with posting thanks. :)
 
I agree with the multi-GPU scenario. I believe Intel gets its lead because the PCIe controller is on the CPU rather than on a second chip, although I'm not 100% convinced this is the crux of the problem. It's another set of tests I would like to run but probably never will.
 
The latency of having it off-die, as with PLX chips, doesn't account for that large a gap. So that leads me to believe it is not the crux of the problem.
 
I know these are older cards, but the GFX score is nearly identical at similar speeds with Intel vs AMD. These are all SLI shots:

[attachment: image_id_1189444.jpeg]

[attachment: image_id_968477.jpeg]

[attachment: image_id_1220276.jpeg]

This one is my HTPC with the CPU at 4.7, and the GFX score is higher than the Intel run??

[attachment: 001 fs.JPG]
 
http://www.anandtech.com/show/6934/choosing-a-gaming-cpu-single-multigpu-at-1440p/5

With a single card, it makes little to no difference at 1440p. It's with multiple GPUs that Intel shines. But the lower the resolution goes, the more the CPU comes into play, and high-end single cards then start to show a difference. I am trying to find testing at a lower, non-GPU-bound resolution.

EDIT: http://www.tomshardware.com/reviews/crossfire-sli-scaling-bottleneck,3471-11.html
(Pay particular attention to the Single GPU to CPU scaling graph towards the bottom... and the first graph on the last page)
 
I'm not surprised to see these results. I was really curious about the PCIe 3/2 difference.

It's interesting to see that NVIDIA doesn't care whether the CPU is Intel or AMD, while AMD GPUs do care. It must be a particular instruction set that relies more on the CPU than the GPU.
 
PCIe scaling is more important for multi-GPU configs. Having x16/x16/x8 at PCIe 3.0 is much better than x16/x8/x8 at PCIe 2.0.
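The per-lane arithmetic behind that comparison comes straight from the PCIe specs: 2.0 runs at 5 GT/s with 8b/10b encoding, 3.0 at 8 GT/s with 128b/130b encoding. A quick sketch of the usable bandwidth for the two slot configurations mentioned above:

```python
# Per-lane bandwidth from the PCIe spec values:
# PCIe 2.0 = 5 GT/s with 8b/10b encoding, PCIe 3.0 = 8 GT/s with
# 128b/130b encoding. Lane counts below mirror the configs above.

def lane_mb_per_s(gt_per_s, payload_bits, raw_bits):
    """Usable MB/s per lane: raw rate x encoding efficiency / 8 bits."""
    return gt_per_s * 1e9 * (payload_bits / raw_bits) / 8 / 1e6

pcie2 = lane_mb_per_s(5, 8, 10)       # 500.0 MB/s per lane
pcie3 = lane_mb_per_s(8, 128, 130)    # ~984.6 MB/s per lane

gen3_config = (16 + 16 + 8) * pcie3   # x16/x16/x8 at PCIe 3.0
gen2_config = (16 + 8 + 8) * pcie2    # x16/x8/x8 at PCIe 2.0

print(round(pcie2), round(pcie3))              # 500 985
print(round(gen3_config), round(gen2_config))  # 39385 16000
```

So the Gen 3 config above has well over twice the aggregate bandwidth, from both the faster signaling and the far more efficient encoding.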

Intel motherboards have more support for 3+ GPU configurations; AMD motherboards are only good for 1-2 GPUs. Also, AMD boards are only now coming out with PCIe 3.0, and they are refresh boards: same chipset with updated speeds, not engineered for maximum optimization.
 
Very interesting. I wonder what really bottlenecks the PCIe bus, then. Tessellation load with respect to screen resolution?
 
I wonder what really bottlenecks the PCIe bus, then.
From what we have seen, it doesn't look like much does. Perhaps the story changes at 4K resolution with all its extra texture data? I have not run across any testing like that, though.
 
The 4K constraint won't matter anymore with DX12.
 
When DX12 titles come out... ;)

That said, what changes were made to the API to make 4K constraints not matter? Also, what are you defining as constraints?
 
I'll have to read that at home... my office blocks the link. :)

I understood it to use more cores, but I'd thought textures were just textures. Looks like I have some reading to do.

EDIT: That was a great read, dolk. I read it twice, in fact, to make sure I could wrap my head around it.

I wonder just how much more efficient it will be with the VRAM using that explicit GPU/CPU synchronization method.
 