
Enthusiasts and apologists: Why AMD?

The fact that AMD had no problems with CPU production in recent years while Intel was stuck waiting in the queue only shows that Intel carries about as much weight as AMD these days; whoever pays first is what matters right now. Because of that, Intel struggled to deliver large volumes of CPUs for servers and laptops, so AMD picked up quite a large share of those orders. There was a stretch of 2-3 months when 90% of the branded laptops available in distribution were AMD-only.
Without looking up the exact numbers, Intel is still the main volume supplier over the longer term. AMD has targeted certain markets and is certainly making gains in areas like servers, at the cost of other areas like GPUs, where it doesn't feel like they're trying for volume growth at all. There can be supply chain continuity hitches here and there that mean short-term unavailability, especially when transitioning between generations.

Another thing is that Intel's 10nm process was a failure; they struggled with it for a long time and ended up releasing the next two generations on 14nm. Now we have old technology maxed out just to beat AMD, while AMD has jumped to a smaller process.
It's well known that Intel's 10nm was, in general, a big failure on their side, and they're still recovering, with the intent of passing TSMC in process leadership in the next few years or so. Whether they can pull that off, given the ongoing delays with near-future products, remains to be seen. It has also hurt their ability to manufacture the designs they already have.

The latest "10nm" incarnation in Raptor Lake uses a further enhanced version of the process from Alder Lake. If you compare Raptor Lake with Zen 4, the efficiency gap between them isn't big at all. I posted a performance/power chart somewhere, maybe earlier in this thread or a similar one. Raptor Lake is behind, but not by that much. The high peak power figures that clickbait articles report without sufficient context only apply to specific workloads.
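To be clear about what such a chart compares, here is a purely illustrative sketch in C of the performance-per-watt arithmetic; the numbers are placeholders, not measurements of Raptor Lake or Zen 4:

#include <stdio.h>

int main(void) {
    /* A performance/power chart plots benchmark score against package power,
       i.e. points per watt. Placeholder values only. */
    double score_a = 38000.0, watts_a = 250.0;  /* hypothetical CPU A */
    double score_b = 36000.0, watts_b = 200.0;  /* hypothetical CPU B */

    printf("CPU A: %.0f points/W\n", score_a / watts_a);
    printf("CPU B: %.0f points/W\n", score_b / watts_b);
    return 0;
}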

Intel is slightly faster in games now, at the cost of much higher wattage. That will probably change when AMD releases the 3D V-Cache version of Ryzen 7000 in a month, while Intel has nothing better to show.
I just wish AMD had released the 3D version sooner, as it might have tempted me, since modern higher-end CPUs are severely RAM-limited in performance.

The main problem I see with Intel is that, at least since the 2nd Ryzen generation, every new Intel generation has design flaws or isn't tested properly before release. They're trying hard to keep up with AMD's processors. It goes like this: there's a launch, then 3-6 months of BIOS updates, then manufacturers start working on the next generation and everything that wasn't fixed becomes the users' problem. Another 4-6 months later we get a new chipset that brings barely anything new, and the story repeats, turning users into beta testers.
Due to changes in my personal circumstances I no longer buy (almost) every gen to play with. The last generations I got were Rocket Lake and Zen 2. On the Intel side there were no major problems. With Zen 2, I recall we still had memory support problems for the first few months until that stabilised, and it was really great after it matured. Maybe DDR5 mixed things up a bit again, but it doesn't feel like the right time for me to bite just yet. It looks like there are still some problems on both sides, more so on Zen 4 as it is newer.

Since I downsized my systems I'm back to using Skylake-X as my daily driver. My Rocket Lake system, which was used solely for gaming, I might sell while it's still worth anything. If I had to buy a desktop system right now I think I'd lean slightly towards Zen 4, with AVX-512 being what tips me away from Raptor Lake. The 3D version could seal the deal if it isn't too late and we don't end up in a "there's a new gen in a few months" situation.
 
Value for money...
Do tell...


...it hasn't been cpus or mobos for this/last gen...didn't gpus need a price cut?


I prefer AMD for their status as the ones who drove the market. Yeah, without AMD (as they are) we would not have the powerhouse systems we do today.
For what felt like forever, AMD didn't do a thing to compete with Intel. Because of this, Intel rested on its laurels for the better part of a decade and gen-over-gen improvements weren't much (people whined and complained about it too). That was when AMD were kings of value, but they weren't driving much forward. That bred laziness... negligible improvements from Intel for generations. It wasn't until Zen 2 that performance caught up to Intel and pricing was fair. The move to Zen 3, then Zen 4 and DDR5, sent AMD's prices skyrocketing. I mean, I guess by default that's right, as they're the only competition... but it feels like the last time they drove anything was over a decade ago, and then maybe quite recently, since they've finally caught up.
 
That bred laziness... negligible improvements from Intel for generations. It wasn't until Zen 2 that performance caught up to Intel and pricing was fair.
There is a difference between Intel not wanting to do anything and Intel not being able to do anything, and the latter is what happened. The post-Skylake stagnation period was due to their now well-known 10nm woes. The architecture designs existed but couldn't be manufactured. Tick-tock was working before that. Ryzen's launch did force Intel's hand into offering more cores, as that was the only thing they could do. Intel is still in its recovery process; the next couple of years will be interesting.
 
There is a difference between Intel not wanting to do anything and Intel not being able to do anything, and the latter is what happened. The post-Skylake stagnation period was due to their now well-known 10nm woes. The architecture designs existed but couldn't be manufactured. Tick-tock was working before that. Ryzen's launch did force Intel's hand into offering more cores, as that was the only thing they could do. Intel is still in its recovery process; the next couple of years will be interesting.
True, but there was zero pressure to improve. The 10nm woes were certainly a part of it, but they literally had no reason to push without a true performance competitor. How much more they could or couldn't have done, nobody really knows, but there was certainly no reason to try.

In other words, they weren't driving anything for the better part of a decade and only at that time were they the value leader. They haven't been the value leader for at least two generations now, but I can see Ryzen, particularly Zen2 and forward, pushing Intel.

The core wars are a thing I've despised AMD for, honestly. Most people STILL can't use the core/thread counts on CPUs these days. AMD couldn't beat Intel performance-wise, so they just added more cores and threads.
 
There has been pressure since Ryzen 2000, but Intel simply couldn't deliver and kept releasing "refreshed" CPUs, pushing the architecture to its "desktop TDP limits" (at least on ambient cooling). It still looks that way, but at least they bumped CPU frequency and improved IPC. The new tactic against AMD is E-cores; without them, Intel couldn't compete in multithreading at, let's say, a reasonable TDP.


Random AMD result - Blender on Steam Deck :D Looks like AMD is winning in gaming consoles and keeps releasing newer custom chips.

[Attached screenshot: Blender benchmark result on the Steam Deck]
 
Do tell...


...it hasn't been cpus or mobos for this/last gen...didn't gpus need a price cut?



For what felt like forever, AMD didn't do a thing to compete with Intel. Because of this, Intel rested on its laurels for the better part of a decade and gen-over-gen improvements weren't much (people whined and complained about it too). That was when AMD were kings of value, but they weren't driving much forward. That bred laziness... negligible improvements from Intel for generations. It wasn't until Zen 2 that performance caught up to Intel and pricing was fair. The move to Zen 3, then Zen 4 and DDR5, sent AMD's prices skyrocketing. I mean, I guess by default that's right, as they're the only competition... but it feels like the last time they drove anything was over a decade ago, and then maybe quite recently, since they've finally caught up.


Bought an MSI 6900XT Gaming X Trio and overclocked it to 6950XT Gaming X Trio settings via the Radeon software; it's outgunning an RTX 4070. As Sam Tramiel would say, 'Power without the price', that's AMD for you… As for the rest of them, they might as well pack their bags and go home; as they say, 'jog on'…
 
True, but there was zero pressure to improve. The 10nm woes were certainly a part of it, but they literally had no reason to push without a true performance competitor. How much more they could or couldn't have done, nobody really knows, but there was certainly no reason to try.
We saw glimpses of what could have been. Ice Lake had the first post-Skylake microarchitecture and it gave a hefty IPC boost, but it couldn't hit decent clocks on the still-broken process. It would have gone up against Zen 2 nicely, since Zen 2 was only just beating Skylake at that point. But with the broken process we only got a limited release on some mobile products, and not the clocks to try for the top end.

In other words, they weren't driving anything for the better part of a decade and only at that time were they the value leader. They haven't been the value leader for at least two generations now, but I can see Ryzen, particularly Zen2 and forward, pushing Intel.
Which decade is this? Intel's old tick-tock model was created in response to a beating AMD gave them previously, I think around the Core (without-i) era or maybe earlier. It worked fine up to Haswell, but Broadwell showed the first signs of trouble ahead. Through the early i-series we saw significant performance boosts from AVX in Sandy Bridge and AVX2 (FMA) in Haswell, on top of general IPC increases. I see Intel's biggest problem period as running from 2016-ish until the release of Alder Lake.

I find it curious you choose "value leader" here. How do you define that? Do you need or want to be the value leader, e.g. Apple? IMO, with current Zen 4 vs Raptor Lake there isn't much price/performance difference between them. Depending on where you look in the stack and what your workload is, one may be better than the other, but in a general sense you won't go far wrong with either.

There has been pressure since Ryzen 2000, but Intel simply couldn't deliver and kept releasing "refreshed" CPUs, pushing the architecture to its "desktop TDP limits" (at least on ambient cooling). It still looks that way, but at least they bumped CPU frequency and improved IPC. The new tactic against AMD is E-cores; without them, Intel couldn't compete in multithreading at, let's say, a reasonable TDP.
I'm not sure power efficiency is going to be Intel's strong point until we get consumer CPUs moved onto more advanced processes. Still, it was AMD that moved to aggressive limits first with Zen+, and that drove Intel to follow.
 

Bought an MSI 6900XT Gaming X Trio and overclocked it to 6950XT Gaming X Trio settings via the Radeon software; it's outgunning an RTX 4070. As Sam Tramiel would say, 'Power without the price', that's AMD for you… As for the rest of them, they might as well pack their bags and go home; as they say, 'jog on'…

Looking at the results, the RX 6900 XT is a bit faster, costs almost as much (after all the price drops), uses about 100W more, and is a much larger card in general. I guess both options have their advantages, but it's weird to compare the last generation's top series with the current mid-range.


I'm not sure power efficiency is going to be Intel's strong point until we get consumer CPUs moved onto more advanced processes. Still, it was AMD that moved to aggressive limits first with Zen+, and that drove Intel to follow.

I would say that every AMD move caused Intel to push its products to the limit. The same goes for shorter testing periods, multiple failed chipsets, and more: a lack of time to prepare products, because they had to release anything that could compete with AMD.
Intel couldn't move to a higher TDP back then, or their CPUs wouldn't have run on popular air coolers. The 8000 series was the last one with still-reasonable max temps. The 9900K was overheating under full load on popular coolers, even though the spec said 95W TDP. That's when Intel started recommending AIO coolers. By aggressive limits, I guess you mean Threadrippers, as regular Ryzens had a 65-105W TDP.
 
I find it curious you choose "value leader" here. How do you define that? Do you need or want to be the value leader, e.g. Apple? IMO, with current Zen 4 vs Raptor Lake there isn't much price/performance difference between them. Depending on where you look in the stack and what your workload is, one may be better than the other, but in a general sense you won't go far wrong with either.
Absolutely. I was replying to Haider, who said (generically) "value for the money". I just don't think that's been there for a couple (three?) of generations now. There are exceptions in the product stack, as you said, but generally, few people think of AMD as value for your money these days (for a few years now).

As I see now, he's talking about video cards in a thread in the CPU section, sooooo........there's that. He isn't wrong, but feels a bit straw man, lol. :)

Bought an MSI 6900XT Gaming X Trio and overclocked it to 6950XT Gaming X Trio settings via the Radeon software; it's outgunning an RTX 4070. As Sam Tramiel would say, 'Power without the price', that's AMD for you… As for the rest of them, they might as well pack their bags and go home; as they say, 'jog on'…
You're correct on the GPU side, but this thread is in the CPU section and has been about processors since it started... which is where my head was when asking about/replying to your generic statement.

Comparing last gen to current gen feels weird, even though there is value with AMD when comparing like to like, 7000 vs 4000 series. The problem with AMD graphics cards this gen is that power use is a lot higher. If you compare similarly performing cards (4080 and 7900 XTX), it's a ~20% difference in gaming and RT, not to mention increased draw at idle and during video playback. That wouldn't stop me from buying their cards... it will stop others, however, and it's a point of complaint for many.
 
Intel couldn't move to a higher TDP back then, or their CPUs wouldn't have run on popular air coolers. The 8000 series was the last one with still-reasonable max temps. The 9900K was overheating under full load on popular coolers, even though the spec said 95W TDP. That's when Intel started recommending AIO coolers. By aggressive limits, I guess you mean Threadrippers, as regular Ryzens had a 65-105W TDP.
Before Ryzen, Intel typically didn't run their CPUs very close to the silicon limit, so routine overclocking had some headroom to work with. The original Ryzen had a rather poor multi-core boost table, but starting with Zen+, AMD got boost working more aggressively whether you load one core or all of them. We saw Intel respond by using more of that silicon headroom as standard on their offerings to try to keep up in absolute performance.

TDP is not power consumption, and it doesn't matter much. AMD CPUs don't run at TDP either; the typical PPT limit is around 35% above TDP. Intel gives more freedom by allowing practically unlimited power, although the system builder is expected to set limits appropriate for the cooling solution used.
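As a rough illustration of that relationship, here is a minimal sketch in C assuming the ~35% figure above; the 105 W value is just an example, and actual limits vary by SKU and board settings:

#include <stdio.h>

int main(void) {
    /* AMD's package power tracking limit (PPT) typically sits ~35% above
       the rated TDP. Example value only. */
    double tdp = 105.0;          /* rated TDP in watts */
    double ppt = tdp * 1.35;     /* ~142 W package power limit */

    printf("TDP %.0f W -> PPT ~%.0f W\n", tdp, ppt);
    return 0;
}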
 
Absolutely. I was replying to Haider, who said (generically) "value for the money". I just don't think that's been there for a couple (three?) of generations now. There are exceptions in the product stack, as you said, but generally, few people think of AMD as value for your money these days (for a few years now).

As I see now, he's talking about video cards in a thread in the CPU section, sooooo........there's that. He isn't wrong, but feels a bit straw man, lol. :)


You're correct on the GPU side, but this thread is in the CPU section and has been about processors since it started... which is where my head was when asking about/replying to your generic statement.

Comparing last gen to current gen feels weird, even though there is value with AMD when comparing like to like, 7000 vs 4000 series. The problem with AMD graphics cards this gen is that power use is a lot higher. If you compare similarly performing cards (4080 and 7900 XTX), it's a ~20% difference in gaming and RT, not to mention increased draw at idle and during video playback. That wouldn't stop me from buying their cards... it will stop others, however, and it's a point of complaint for many.
On the CPU side, they came up with AMD64, while the competing technology from Intel was called EPIC/Itanium (IA-64). IA-64 cost more and performed worse than AMD64. Intel's IA-64 did so badly in the marketplace that Goliath Intel HAD TO adopt little David AMD's value-for-money AMD64 technology, otherwise they would have had to exit the Windows CPU space. There you go, what did I say: 'Power without the price'; now get the beers in, lad :)
 
Do tell...


...it hasn't been cpus or mobos for this/last gen...didn't gpus need a price cut?



For what felt like forever, AMD didn't do a thing to compete with Intel. Because of this, Intel rested on its laurels for the better part of a decade and gen-over-gen improvements weren't much (people whined and complained about it too). That was when AMD were kings of value, but they weren't driving much forward. That bred laziness... negligible improvements from Intel for generations. It wasn't until Zen 2 that performance caught up to Intel and pricing was fair. The move to Zen 3, then Zen 4 and DDR5, sent AMD's prices skyrocketing. I mean, I guess by default that's right, as they're the only competition... but it feels like the last time they drove anything was over a decade ago, and then maybe quite recently, since they've finally caught up.
Well, pricing is based on many things, as we all know. I remember the K6 era and the Cyrix 5x86. Those were no real competition, but when it was them against Intel and you were building a system, for me it was AMD all the way.

Though the flow has gone back and forth for the last couple of decades or so, without AMD we would be looking at the kind of nasty CPU prices we saw when the 486 was king. I skipped the 486; my old 386 was smoking those things at first, with 4 megs of memory, a Windows accelerator, and a math co-pro.
 
One current feature on consumer-level AMD chips is AVX-512 support; specifically, the AI instructions are supported. Intel has stopped supporting this on their new-gen chips. I was looking at the 7800X3D around Christmas time for this feature. With an AIO cooler it's a good use case if you don't want to buy a Titan/3090/4090.
 
AVX-512 is barely supported by software, and in a home/office environment no one really needs it. Disabling it in desktop CPUs probably gives users a reason to go for workstation/server chips, which had often been replaced by desktop CPUs in the last few years. I'm not really sure what Intel was thinking; AVX-512 has been pretty much pointless in desktop CPUs since release. There will still be <1% of users who use it for something.
I'm not really sure what the 7800X3D has to do with high-end graphics cards. X3D CPUs are mainly good for competitive online gaming, where you run a lower display resolution and go for maximum FPS. Since the CPU doesn't affect FPS as much at higher resolutions with the details turned up, it doesn't really matter whether you use an X3D, an Intel CPU, or a non-3D-cache AMD chip; you'll be much more limited by the graphics card's performance than by the CPU's.
 
Anything new takes adoption time. It needs to be out there and needs to build up support. In theory, any software that already makes good use of AVX may also benefit from supporting AVX-512. For the record, Cinebench is not one of them. I can't remember the exact IPC difference, but I ran it on a normal and a "Pentium"-class Skylake, the latter missing AVX functionality, and it didn't change the Cinebench R20 results much. That's only considering traditional FP operations. The "AI" extensions could see more usage, especially in systems that don't have a dGPU.

I do find it amusing that AMD might be doing more for AVX-512 than Intel right now, at least in the consumer space. Intel only released one desktop CPU family supporting it: Rocket Lake. If you count HEDT, that adds Skylake-X and Cascade Lake-X; on the mobile side it was Ice Lake and Tiger Lake. Zen 4 would probably pass that installed base over time. I hope Intel can find a way to bring AVX-512 back sooner rather than later, either by emulating support on the E-cores or by adding full support to them. Support doesn't mean it needs to be high performance; just enough to allow it to be enabled on the P-cores.
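For anyone curious whether their own chip exposes it, here is a minimal detection sketch, assuming a reasonably recent GCC or Clang on x86; the feature-name strings are the compiler's, and "avx512vnni" is one of the AI-oriented extensions mentioned earlier:

#include <stdio.h>

int main(void) {
    /* Ask the compiler runtime to read the CPUID feature bits. */
    __builtin_cpu_init();

    /* "avx512f" is the AVX-512 Foundation bit; without it, nothing else in
       the AVX-512 family is usable. */
    printf("AVX2        : %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    printf("AVX-512F    : %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    printf("AVX-512 VNNI: %s\n", __builtin_cpu_supports("avx512vnni") ? "yes" : "no");
    return 0;
}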
 
Does their server line (Intel) support AVX-512? That seems like the natural home for such an instruction set.
 
What's weird to me is that all Alder Lake and Rocket Lake CPUs have AVX-512 in silicon, but depending on the revision it's either software- or hardware-locked. If Intel wanted to, they could enable it on all current i-series CPUs.
Workstation and server CPUs do support AVX-512, so officially the lowest CPU with it is a Xeon-W.
 
Before AMD launched Zen, Intel had convinced the world that 4 cores were enough. For better or worse, when Zen launched with 8 cores, that changed.
 