
The Intel Problem: CPU Efficiency & Power Consumption


Would power usage/efficiency shift you from Intel to AMD if you're upgrading or making a new build?



Kenrou

Damn, some of these charts are brutal...

"A lot of you have requested that we run power consumption tests for gaming on CPUs, so we've finally done that! And alongside fulfilling that request, we also wanted to tackle a comment from Greg, our recurring antagonist commenter, who "requested" CPU efficiency testing. These benchmarks look at the efficiency and raw power consumption of Intel vs. AMD CPUs. There's a particular focus on the AMD Ryzen 7 7800X3D and Intel Core i7-14700K and 14900K CPUs, as these are the most recent and directly comparable / best gaming CPUs."


Intel:
[chart: Intel CPU power consumption]
AMD ⇾ Hold my beer:
[chart: AMD CPU power consumption]
 
My look of shock....



See it? ^^


:rofl:


I like that chart... but it also feels like complaining about fuel costs while driving a super/hyper car that isn't really much faster (or slower) than the competition, LOL! Kudos!
 
Usually I would agree, but in this case the top Intel is "only" 3x more expensive than the top AMD (and in some cases slower). I mean, electric is stupidly expensive in the UK since Brexit (although it's slowly getting better)... That would definitely tip the scales in my case 🤷🏻‍♂️
 
In electric cost per year, or did I not read the chart right: 73.7 AMD vs 223.1 Intel?
 
In electric cost per year, or did I not read the chart right: 73.7 AMD vs 223.1 Intel?
Oh dear, yes... for some reason I drifted to the cost of the processor, not the cost to run it.

What's the real cost difference though? How long per day, on average, does one game? What's the cost if you game... 2 hours a day? 4? I suppose that's easy enough to figure out. Gaming 8 hours a day is... a streamer's life, lol. Do teens even game that much, lol?

EDIT: @Kenrou - curious, how much is your electricity rate? I'm between $0.10-0.15/kWh, luckily. If I game 4 hours a day (wow, for me), with my 13900K that's ~$26 a year versus ~$10. Nearly 3x, but only $16 over the course of a year... (~$150 a year is a difference for those full-time gamers).
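
For anyone who wants to plug in their own rate and hours, here's a minimal back-of-the-envelope sketch in Python. The ~140 W and ~55 W gaming draws are rough illustrative guesses that happen to land near the figures above, not GN's measured numbers, and the $0.125/kWh rate is just an example:

```python
# Back-of-the-envelope annual electricity cost for a CPU under gaming load.
# Wattages and rate below are illustrative assumptions, not measured figures.

def annual_cost(cpu_watts, hours_per_day, price_per_kwh, days_per_year=365):
    """Yearly cost of running a CPU at a given average draw."""
    kwh_per_year = cpu_watts / 1000 * hours_per_day * days_per_year
    return kwh_per_year * price_per_kwh

# Example: ~140 W gaming draw (13900K-ish guess) vs ~55 W (7800X3D-ish guess),
# 4 hours a day at $0.125/kWh.
intel = annual_cost(140, 4, 0.125)
amd = annual_cost(55, 4, 0.125)
print(f"Intel: ${intel:.2f}/yr, AMD: ${amd:.2f}/yr, difference: ${intel - amd:.2f}/yr")
```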
 
Nah, teens are on social media 8h+ a day instead of gaming nowadays, which is why they're getting dumber and having more mental issues by the year... Especially with TikTok; from what I read, there are reports that the app actually creates ADD/ADHD, which is why my youngest is gonna be kept well away from this type of app at least until she's 16. You know, parents used to tell us that TV, and later gaming, was gonna ruin our brains; they had no idea what was coming...

As to my rates, I'm not really sure; I know I spend ~£40 of electric every ~12 days for a house of 5, with 3 kids permanently connected to their phones/tablets, 1 laptop and 2 undervolted PCs. Before Brexit it used to be more like ~£30 every ~14 days.
 
Heh, it's like running gaming tests at 720p... feels dramatic just to prove a point, lol.
 
When I look at a platform, I'm looking at several items like cost to purchase, performance, heat generation, electric usage, and other things, I'm sure. (Can't think of everything now and it's not important.)

So for the last system I bought, I narrowed it down to the AMD 5800X and 5700X. Yes, the 5800X is faster, but it costs a few dollars more (not much, and not enough by itself to make a difference). The 5800X has a higher TDP and perhaps uses more energy. Note: even if the CPUs use the same amount of energy, I need more energy to cool a hotter chip, so...

I went with the 5700X. Since I run my machine at 100%, 24/7/365, it makes a difference.

I feel it would be like buying that super car and not looking at the insurance and fuel costs. Maybe not a super car. Maybe it's more like finally *just* being able to afford a Toyota Supra, only to then find out the insurance is more than you can afford. Maybe not *can* but *want* to afford. I don't want to pay for more electric than I need to do what I do.

I too thought the 3x cost difference was for the purchase of said CPU. The electric cost makes more sense.

Thank you for making this thread. Good info.
 
Even if the CPUs use the same amount of energy, I need more energy to cool a hotter chip so...
If they are using the same amount of energy, all other things being equal, wouldn't it need the same amount of power to cool it?

I don't want to pay for more electric than I need to do what I do.
Nobody does. And while I do like the chart (still haven't watched the vid), some perspective is needed. Also, for those who are beating on their CPUs doing productivity/heavy multi-threaded work, the differences will be even larger.
 
Did they test Intel CPUs at various power limits? At stock, with an enthusiast-tier board, Intel CPUs typically get run practically without a power limit. How much perf do you lose as you set lower power limits? Basically, I want to see something like the chart below. Back then, the 8086K could beat the 3600 in raw performance, but at the cost of using more power; at like-for-like power limits, the 3600 gave higher performance. We have a similar situation with Zen 4 vs Raptor Lake. AMD have a clear node advantage, so efficiency is expected to be better, putting aside the X3D part of the equation, which helps even more. So in that sense, it would have been interesting to see the 7700(X) (~£320) vs 7800X3D (£370) vs 14700K (£380) at various power limits.

[chart: performance per watt in Prime95 at various power limits, 8086K vs 3600]
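
Not their methodology, but if anyone wants to poke at this themselves on Linux, below is a minimal sketch of measuring average package power around a workload using the intel_rapl powercap counters (root access assumed, single-package Intel system assumed, and stress-ng is just a stand-in workload). You'd repeat the run at each PL1/PL2 setting and divide your benchmark score by the measured watts to get the perf/W curve:

```python
# Rough average CPU package power over a workload, read from the Linux
# intel_rapl powercap interface. Path assumes package 0 on an Intel system.
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy in microjoules

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

def avg_package_watts(cmd):
    """Run a workload and return (elapsed seconds, average package watts)."""
    e0, t0 = read_energy_uj(), time.time()
    subprocess.run(cmd, check=True)
    e1, t1 = read_energy_uj(), time.time()
    # The counter wraps at max_energy_range_uj; ignored here for short runs.
    joules = (e1 - e0) / 1e6
    seconds = t1 - t0
    return seconds, joules / seconds

# Stand-in workload: 60 s all-core load via stress-ng (assumed installed).
elapsed, watts = avg_package_watts(["stress-ng", "--cpu", "0", "--timeout", "60s"])
print(f"{elapsed:.1f} s at {watts:.1f} W average package power")
```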

Kinda on a tangent, Phoronix have posted some Linux testing of Meteor Lake vs Phoenix. In short, Phoenix takes the lead in CPU, where for similar power consumption it gave higher performance. The tables were turned in graphics, where Meteor Lake did more for less power. Really looking forward to seeing some Windows testing in the not-too-distant future.
 
I'm saying that *if* the 5800X and 5700X use the same amount of energy (and I don't know that they do) *and* the 5800X runs hotter (which it does), then I need more energy to cool it. I'm not trying to trip you up on this. I'm just saying that, all things being equal (and they aren't), a hotter chip does cost more to operate. That difference may or may not be enough to tip the scales for a consumer who is looking.

Now, if I'm living at home and mom and dad are paying the electric bill, I am not looking at this and I would not care. But if I'm paying, then the amount of weight I put on that metric depends on the spread within that metric. If it's a few pennies a week, I don't care. Tens of pennies a day, I might. It all adds up. Convert energy costs to beer and then see how many beers you have to give up to run and/or cool a CPU. ;)
 
Hear me out on this one, guys: what this actually shows is how bad Intel's implementation of E-cores is. If you look at their charts showing FPS/watt, you see the 12100F and 13400F have higher scores with no E-cores. In the other chart, where they say it's GPU-bound (1440p/ultra settings), they only tested 2 CPUs. Later on in that same section they talk about Starfield, but both the 12100F and 13400F are left out? They said they spent how long again on this? I get there is a lot going on here, but if you show a list of CPUs, they all need to be there in every chart, period.

This could be just me, but who still justifies buying a video card on average FPS any more? If you remove that from the chart in the Starfield 1080p/low section, they are not that far apart in the 1% or 0.1% lows. Then they switch to the Starfield efficiency chart and magically the 12100F and 13400F show back up, wtf? Then on to Blender, where they are left out again, yet the list includes CPUs not shown in other charts? Is there something about the way work is handed off between E and P cores that is causing problems? If E-cores are supposed to be so good, why do they make Intel look bad? I would personally trade 4 E for 2 P, or 8 E for 4 P, and so on, for the newer Intel CPUs.

Now someone help me out: I have read that AMD is using E-cores too? Yet when I looked up the specs for those CPUs in the charts, I did not find anything about them having E-cores.

Paying people to do all that testing, only for some CPUs to not be shown in some tests, and then to have other CPUs included that were not used/shown in other charts... I feel like they wasted a lot of money, OR some of the data is being recycled from old tests. Now, I know a channel that accused another channel of doing exactly that. I wonder if they are doing the same here? I know I'm going to piss people off, but really look at the charts. Look at the number of CPUs listed and the models on the list. If I put out work like this, I would be in a storm of hurt or lose my job.
 
Presumably they focused on gaming, which is a more complicated and time-dependent load than others. The challenge with Intel's E-cores is that they're much slower than a P core, so if you don't take that into consideration, anything that is time-sensitive could get delayed. And by "take it into consideration", that can mean all of the hardware, OS and game. For simpler or easier-to-scale workloads like Cinebench, having them is more of an easy win.

AMD's equivalent are called c cores, compared to the full-fat C cores. I'm not sure if they're in any released consumer product yet, but if not, they will initially show up in APUs. AMD's approach is different in that C and c cores are essentially the same, except c cores use a different layout biased for density at the expense of clock speed, and on top of that they have reduced L3 cache. The overall effect is that they're about half the size of a C core, with the implication that average performance is above half, so there is a net benefit if you need a lot of cores.
 
Think we spoke about this on another thread. Supposedly Intel E-cores are not the same as P cores; they're Pentium/Celeron/whatever designs with lower IPC as well as lower clock/cache. As mackerel said, AMD will be using full P cores but with lower clock/cache only, so SUPPOSEDLY they will be better pound for pound than Intel's. Hopefully they won't have the same scheduler problems...

I still don't see the benefit of E/c cores; anything that could be offloaded to them could also have been offloaded to a P/C core not being fully used, or to hyperthreading... threads...? Which begs the question: what's the point of HT any more if we now have E/c cores that do the same job with better performance?
 
That was for the 12***, are the 13*** and 14*** the same?
 
Supposedly Intel E-cores are not the same as P cores; they're Pentium/Celeron/whatever designs with lower IPC as well as lower clock/cache. As mackerel said, AMD will be using full P cores but with lower clock/cache only, so SUPPOSEDLY they will be better pound for pound than Intel's. Hopefully they won't have the same scheduler problems...

I still don't see the benefit of E/c cores; anything that could be offloaded to them could also have been offloaded to a P/C core not being fully used, or to hyperthreading... threads...? Which begs the question: what's the point of HT any more if we now have E/c cores that do the same job with better performance?
In silicon area terms, Intel can trade one P core with HT for 4 E cores without. In the best case, the 4 E cores should do work more efficiently than the one P core. AMD's implementation is slightly different: there you can trade 1 C core with SMT for two c cores with SMT. Again, the two c cores should be more efficient than the 1 C core. Sometimes you do need the single-thread performance, so we're not going to see P or C cores go away any time soon on consumer-tier CPUs.
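
Just to put rough numbers on that trade-off, here's a toy sketch; the area ratios come from the description above, but the relative throughput figures are invented for illustration and don't correspond to any measured parts:

```python
# Toy model of the die-area trade-off described above.
# Area ratios (1 P-core+HT ~= 4 E-cores, 1 C-core+SMT ~= 2 c-cores) come from the post;
# the relative multi-thread throughput numbers below are invented assumptions.

P_CORE_WITH_HT = 1.3    # assumed throughput of one P core with HT (P core ST = 1.0)
E_CORE = 0.55           # assumed throughput of one E core
C_CORE_WITH_SMT = 1.3   # assumed throughput of one full C core with SMT
c_CORE_WITH_SMT = 0.85  # assumed throughput of one dense c core (lower clock, less L3)

# Spend the same die area on small cores instead of one big core:
print("Intel: 1 P+HT =", P_CORE_WITH_HT, "vs 4 E =", round(4 * E_CORE, 2))
print("AMD:   1 C+SMT =", C_CORE_WITH_SMT, "vs 2 c =", round(2 * c_CORE_WITH_SMT, 2))
# Throughput per area favours the small cores in this toy model, but single-thread
# latency still needs the big cores, hence the hybrid designs.
```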

HT/SMT on average gives more throughput than its implementation costs. I have wondered myself if it is time to think about ditching it in the future, because it does come with its own problems in security and performance. As we get to ever higher core counts, the management overhead of software threads could be simplified by not having HT/SMT thrown into the mix too. Performance should be more consistent, especially where mixed software is running at the same time.

Also keep in mind that software can be very varied. What hardware helps in one scenario might not help in another, and CPU designers have to optimise for what they think will be the required use cases by the time the CPU is on sale.
 