
There's a new world record, kids: 8.8GHz on Raptor Lake

I finished a graphics course a while ago, and I have been working with graphic designers for over 4 years. Every single one of them needed a PC because on Mac, something was missing. Guys who were bringing Macs with them needed a PC to finish some projects. Guys working professionally with graphics (TV commercials, 3D games, animation, and other stuff like that) were all using PCs because Macs couldn't cover all their needs. This is just my experience, but it highly depends on the project. If someone is doing only one specific task in a single program, then it doesn't matter. It gets worse when multiple programs are involved.
Most programs can be replaced, but most users have their specific environment and don't want to learn everything from the beginning.
Another thing is the price. Macs and specific software cost much more than a similar environment for a PC. If a graphic studio has multiple computers, then it's already a problem (unless it has a huge budget).
This is my experience and my point of view, and it doesn't have to be the same for every other person.

Yeah but your explanation of your experience... doesn't make any sense. :)

Every single one of them needed a PC because on Mac... something was missing.

WHAT was missing? What could that "something" have possibly been? Even in the limited scope of your experience... I can't even fathom what a graphic designer could possibly need that somehow exists on PC... but not on Mac.
Mind you... this is coming from a PC guy. (See sig...)

Guys working professionally with graphics (TV commercials, 3D games, animation, and other stuff like that) were all using PCs because Macs couldn't cover all their needs.

Again... WHAT "needs"? I mean I work, professionally, with graphics, TV commercials, 3D games, animation... etc... and all of that is exactly why I bought a Mac in the first place. It certainly wasn't because I was a fan of Apple as a company...

But for exactly those things... Macs have been king from the very beginning.

Another thing is the price. Macs and specific software cost much more than a similar environment for a PC.

In 2022 (actually going back to 2020)... this is no longer the case. My M1 MacBook Air... with only 8GB of RAM, mind you... completely laid waste to my 32GB PC desktop editing machine. And the Air only cost me 949 bucks.

To get equivalent performance in 2020 I would've had to have shelled out an UNGODLY amount of money on a PC.

Getting back to the subject at hand... the 8GHz world record and all that... A Mac Studio M1 or M2 Max would probably run you about $1600.

The video card on the PC alone could EASILY cost you that.
 
Maybe I said it in the wrong way. What I saw in a couple of years of working with graphic designers is that for each larger project they typically needed 3 applications, of which 2 could be shared between PC and Mac, and for 1 they needed a PC. Some graphic studios actually moved from older Macs to PCs because of that. However, this was between ~2015 and ~2020, so the hardware was a bit different. It was shared by people I was working with for a couple of years, but I wasn't doing that professionally, and I won't tell you exactly what they were using.
Until the M1 chip was released, Macs had for many years about the same architecture as a regular PC, but worse specs. The difference was in the software and the price.
I'm not saying that Macs are bad or that you can't do everything on them. It just depends on what software you need, are used to working with, and actually want to use.
 
Maybe I said it in the wrong way. What I saw in a couple of years of working with graphic designers is that for each larger project they typically needed 3 applications, of which 2 could be shared between PC and Mac, and for 1 they needed a PC. Some graphic studios actually moved from older Macs to PCs because of that. However, this was between ~2015 and ~2020, so the hardware was a bit different. It was shared by people I was working with for a couple of years, but I wasn't doing that professionally, and I won't tell you exactly what they were using.
Until the M1 chip was released, Macs had for many years about the same architecture as a regular PC, but worse specs. The difference was in the software and the price.
I'm not saying that Macs are bad or that you can't do everything on them. It just depends on what software you need, are used to working with, and actually want to use.

Well now THAT makes more sense. :)

What you said about Mac prices and whatnot also fits better into the pre-2020 world. But we're a handful of weeks away from 2023 now and technology moves EXTREMELY fast. You should give the second video that I posted (the Collider one) a watch. Apple has flipped the script. You practically need an 8GHz x86 chip and a 3090 or something to keep up with the fastest M1 and M2 chips for exactly the tasks we're talking about. And since they've eliminated the need for a separate graphics card... there's $1000-$1800 off the price right there.

I can't wait until there are some video editing bench results out comparing performance in Adobe Premiere and Davinci Resolve. Then we can have a clearer picture of where we stand in terms of price/performance in 2022 and beyond.

Also... Most of the stuff from this thread has dealt with just getting the CPUs up to 8GHz. I would love to see how the new Intel chips actually perform against AMD (and Apple).

Do we have those numbers yet, anybody?
 
Johan45 has done the comparison and the results have been on the front page for a couple of weeks now. Link There is no mystery. Like in the past, Intel leads in some benches and AMD leads in others. Both are power-hungry heaters but I think Intel has a slight edge in that department.

I have never seen Apple included in the comparisons. Primarily because Apple hasn't been considered a "competitor" for as long as I can remember (which admittedly is getting shorter every year). Maybe that should change.
 

I posted some benchmarks earlier in the thread as well. :)

For those who may not want to click: it takes a MacBook Pro with the M1 Ultra chip in it to match a 12900HK (mobile) and a 3080 Ti in their limited testing. While the Intel machine is still more expensive (not by terribly much now... and you can game on it really well, not so much with a Mac I guess, even with the M1 Ultra - so you'll have to buy a gaming machine too if you go Mac), the price difference isn't huge.


You practically need an 8ghz x386
No. And I think you meant x86 (architecture) as 80386 CPUs are like, late 1980s. You'd need a 1.21 jiggawatt 386 to get there. :p


Another comparison (desktop Intel).... https://www.eurogamer.net/digitalfo...ultra-and-m1-max-take-on-high-end-pc-hardware

The M1 Ultra is an extremely impressive processor. It delivers CPU and GPU performance in line with high-end PCs, packs a first-of-its-kind silicon interposer, consumes very little power, and fits into a truly tiny chassis. There's simply nothing else like it. For users already in the Mac ecosystem, this is a great buy if you have demanding workflows. While the Mac Studio is expensive, it is less costly than Apple's old Pro-branded desktops - the Mac Pro and iMac Pro - which packed expensive Xeon processors and ECC RAM. Final Cut, Photoshop, Apple Motion, Handbrake - pretty much everything I use on a daily basis runs very nicely on this machine.

For PC users, however, I don't think this particular Apple system should be particularly tempting. While CPU performance is in line with the best from Intel and AMD, GPU performance is somewhat less compelling. Plus, new CPUs and GPUs are incoming in the next few months that should cement the performance advantage of top-end PC systems. That said, the M1 Ultra is a one-of-a-kind solution. You won't find this kind of raw performance in a computer this small anywhere else.

Gaming on Mac has historically been quite problematic and that remains the case right now - native ports are thin on the ground and when older titles such as No Man's Sky and Resident Evil Village are mooted for conversion, it's much more of a big deal than it really should be. Perhaps it's the expense of Apple hardware, perhaps it's the size of the addressable audience or maybe gaming isn't a primary use-case for these machines, but there's still the sense that outside of the mobile space (where it is dominant), gaming isn't where it should be - Steam Deck has shown that compatibility layers can work and ultimately, perhaps that's the route forward. Still, M1 Max and especially M1 Ultra are certainly very capable hardware and it'll be fascinating to see how gaming evolves on the Apple platform going forward.

I would like to see more H2H testing now that it's competitive. But it feels like it's still a one-sided machine if you can't really game well on it. So the little money you may save with a competitive Mac means a significant sacrifice in gaming comparatively. Whereas if you buy a PC for a bit more, you lose a bit in some productivity (faster in others) but can game fully with it. Obviously, if you don't game, this doesn't matter, but if you use your PC for gaming, feels like it still makes sense to get x86.


Also... Most of the stuff from this thread has dealt with just getting the CPUs up to 8GHz.
Well, that's the thread title, yes. You brought up Macs, and we were off on a tangent. I'm happy to split the thread off so this one lives as it was designed/titled and we can have a discussion about M1 chips vs PC. :)
 
I would like to see more H2H testing now that it's competitive. But it feels like it's still a one-sided machine if you can't really game well on it. So the little money you may save with a competitive Mac means a significant sacrifice in gaming comparatively. Whereas if you buy a PC for a bit more, you lose a bit in some productivity (faster in others) but can game fully with it. Obviously, if you don't game, this doesn't matter, but if you use your PC for gaming, feels like it still makes sense to get x86.

Well... no. Not if you're into what we were talking about (namely graphic design, video editing, etc...)

Because it's not good enough that they perform similarly... The entire experience is different on the Mac. You don't need any third-party drivers and extra pieces of software and codecs to get optimal performance on a Mac. You install whatever editing software you have and you're off to the races.

Best of all, you can then drop to an actual Unix terminal where you can fine-tune code or run ffmpeg, or get all kinds of crazy stuff going on in the background that you would need VMware or another piece of software, or maybe a whole SUITE of software, to do on a PC.
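For example (a rough sketch with hypothetical filenames, and assuming ffmpeg is installed via Homebrew or the like), kicking off a ProRes HQ transcode is a one-liner right from that terminal:

    ffmpeg -i input.mov -c:v prores_ks -profile:v 3 -c:a pcm_s16le output.mov

No VM, no extra layer... just the shell that ships with the OS.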

And with the money you saved you can just buy an Xbox if you want to play games. :)

That's basically where I wound up when the whole RTX 3080 for 700 bucks dream fizzled-out back in 2020.
 
Because it's not good enough that they perform similarly... The entire experience is different on the Mac. You don't need any third-party drivers and extra pieces of software and codecs to get optimal performance on a Mac. You install whatever editing software you have and you're off to the races.
Excuse my ignorance here, but you don't need to install drivers when you set up macOS? That's the benefit of a closed hardware ecosystem, I suppose?

After the initial setup (Windows installs a lot of drivers), you install your software and go on a PC too, no?

Also, it comes with all your editing software?!! Additionally, you don't need to install any of the 'crazy *** stuff going on in the background'/software? If so, that's a value add for sure...otherwise, I'm not seeing the difference between it and a PC past a driver installation.


In 2022 (actually going back to 2020)... this is no longer the case.

...A Mac Studio M1 or M2 max would probably run you about $1600.

And with the money you saved
What money do you save? Help me see it in a different light...

Apples with the M1 Ultra (the chip that's comparable to last-gen Intel flagship) START at $4000, right (Mac studio)? The Mac Studio with the M1 Max (notably slower than a 12900k) starts at $2000. The MacBook Pro starts at $2500 for the M1 Max. For $2000, you're into a faster PC (according to the benchmarks I linked) with a last-gen flagship Intel CPU and an RTX 3080. For $4000, you could build a significantly faster PC with a current-gen flagship CPU, especially in GPU-accelerated functions, since you can pop in an RTX 4090 with a 13900K (which handily beats a 12900K across the board - or hell, the AMD 7950X for that matter).

I don't know much about your workflow, but W10/11 has WSL, so no VM software is needed for full Linux. I think you can DL Linux from the Windows Store, lol. If there's a terminal built in to Mac, that's cool. I don't know why you'd need a VM for that kind of workflow in Windows though.
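For reference (assuming a reasonably current Windows 10/11 build - check Microsoft's docs if unsure), getting a full Ubuntu environment is basically one command from an admin PowerShell or Command Prompt:

    wsl --install
    wsl --list --online
    wsl --install -d Debian

The first line installs WSL with Ubuntu by default, the second lists the other distros available, and the third is just an example of picking a specific one. After a reboot you drop into a real Linux shell inside Windows.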

Please don't misunderstand me. These have their place. If you need or want a Mac and it works better for you for what you do, it's a no-brainer. But I'm not sure Mac and value ever went hand in hand, and the napkin math about cost above is telling. Apple PCs are a niche market. They carry a premium price tag and don't give you quite all of the functionality (or any of the upgradeability) of a comparable PC.

As a Mac layman, I just searched info for performance, looked up pricing, and see it differently. If I'm missing something, please share! Otherwise, I'm not buying what you're shoveling, lol. Worth mentioning, this is solely speaking about graphic design, video editing, etc. The first time gaming was mentioned in this post was just now, lol.
 
As a Mac layman, I just searched info for performance, looked up pricing, and see it differently. If I'm missing something, please share! Otherwise, I'm not buying what you're shoveling, lol. Worth mentioning, this is solely speaking about graphic design, video editing, etc. The first time gaming was mentioned in this post was just now, lol.

Oh my mistake... I don't know WHERE I could've possibly read that...

Whereas if you buy a PC for a bit more, you lose a bit in some productivity (faster in others) but can game fully with it. Obviously, if you don't game, this doesn't matter, but if you use your PC for gaming, feels like it still makes sense to get x86.

Not sure what YOU'RE trying to shovel! hahahaha :D

Never heard of WSL before now. I'll have to look into that. I'd be blown away if you can just suddenly drop to a Linux terminal on a PC and have a full Linux environment. That might actually change some purchase decisions for the office I'm trying to put together... (if it's true of course.)

Earthdog:


What money do you save? Help me see it in a different light...

Apples with the M1 Ultra (the chip that's comparable to last-gen Intel flagship) START at $4000, right (Mac studio)? The Mac Studio with the M1 Max (notably slower than a 12900k) starts at $2000.

Now you're getting there. You don't NEED the M1 Ultra for comparable performance. Video editing is about a lot more than render times. Let's just take BlackMagic Davinci Resolve for example:

My previous build (somewhere in my sig...) was like a GTX 960 w/ 4GB of VRAM and, I think, the 4690K, until I updated to my current CPU (which I believe you recommended), the 9600KF. At the time, Resolve required like a minimum of 32GB of RAM for Fusion (their answer to Adobe After Effects). I was limited in what I could do in Resolve with the 16GB that I had. So fine... added another 16GB of RAM. All was well...

...until I wanted to run some animations. Nothing crazy mind you... Just clouds. I couldn't even get it to render! Wasn't even trying to do realtime playback or anything. I just wanted it to RENDER. Wasn't happening. That's why I bought the 2060 Super at the beginning of the pandemic (can't remember who recommended that). After that, with the 8GB of vRAM... both rendering and realtime playback were no longer an issue.

Most of my projects these days are either 4K RAW or ProRes. But here's the deal: I can do ALL of that with my $969, 8GB MacBook Air M1.

No dedicated GPU... No 32GB of RAM. No crashing. No stuttering.

And it runs like a DREAM. After all that upgrading I've rarely gone back to my PC to edit since.

I mean who cares if a 12900K can render 30 seconds faster or not when you can do so much more with less on a Mac?

I could write a whole dissertation on the advantages of Apple's M architecture... but despite all outward appearances... I'm not trying to hijack the thread. :) I was just curious about how 8GHz translated into actual performance and whether Intel finally had an answer to the overall efficiency of Apple's M architecture. Because the Collider video that I posted, at least, made it seem like it wouldn't be possible with the x86 architecture.

I haven't been keeping up with Intel though... ever since you and a few other people told me that AMD had somehow gained the upper hand in the CPU arms race. (I think that was a thread about gen 4 SSDs and whatnot...) I've been trying to catch up with what AMD has been up to, since I haven't used one of their CPUs since just after I joined these forums... So if things with Intel have changed in the two years since that Collider video (I already mentioned tech moves pretty fast)... I just wouldn't know about it.
 
All I know is that several benchmarks show a 12900K is as fast as, faster than, or at least in the ballpark of the M1 Ultra across an array of different tests. The pricing information is from the Apple website... just trying to compare like with like, new vs. new, and weigh the benefits/value against the performance.

I understand there are going to be some functions that may be better or worse on a Mac versus a PC. That said, I'm not surprised a $1,000 MacBook can handle more than a four-year-old budget processor (the 6c/6t 9600KF) and an eight-year-old GTX 960, lol... but few would think otherwise in the first place.

It's clear we're two ships passing in the night, so I'll just leave it alone. :) :thup:
 
If you've been trying to catch up since PCIe Gen4 then you're still a generation or two behind. If someone here says "AMD has the upper hand", that doesn't mean AMD is the best CPU for the next 2 years; Intel could release something 3 months later, or vice versa. Honestly, they are both so closely competitive at this point that it comes down to a case-by-case basis. The AMD 5000 series really challenged Intel, but that release was over 2 years ago. Since then Intel has released some pretty crummy responses (11th gen) but also more or less caught up to the 7000 series with the 13th gen. So again, case by case.

It behooves someone who is seriously into productivity workloads to look at the testing done at Puget Systems and review other benchmarks focused on the specific workload you mention. It never hurts to ask a group of gamers, overclockers, and old-school enthusiasts "what's good these days?", but it's not realistic to expect someone to advise you of a system that can cover every possible workload, such as "rendering clouds in Resolve", if that wasn't a specific request of the system's planned utility. Especially not at a budget. I mean, you mention yourself that the minimum requirements of the software were more RAM than you had; to some extent the onus is on the builder to review the program requirements for what you want to run. Of course I haven't read the thread where this system was recommended, but it seems like an odd jab.

But to get back on topic with your original question, these 8+GHz world records are set using CPU-Z validation, which is a very light load intended only to verify the clock speed. These records are not set with stable, or even benching-stable, overclocks, and I imagine any attempt to complete a realistic productivity workload on one of these world-record LN2 systems would be equal parts infuriating and hilarious.
 
If you've been trying to catch up since PCIe Gen4 then you're still a generation or two behind. If someone here says "AMD has the upper hand", that doesn't mean AMD is the best CPU for the next 2 years; Intel could release something 3 months later, or vice versa. Honestly, they are both so closely competitive at this point that it comes down to a case-by-case basis. The AMD 5000 series really challenged Intel, but that release was over 2 years ago. Since then Intel has released some pretty crummy responses (11th gen) but also more or less caught up to the 7000 series with the 13th gen. So again, case by case.

It behooves someone who is seriously into productivity workloads to look at the testing done at Puget Systems and review other benchmarks focused on the specific workload you mention. It never hurts to ask a group of gamers, overclockers, and old-school enthusiasts "what's good these days?", but it's not realistic to expect someone to advise you of a system that can cover every possible workload, such as "rendering clouds in Resolve", if that wasn't a specific request of the system's planned utility. Especially not at a budget. I mean, you mention yourself that the minimum requirements of the software were more RAM than you had; to some extent the onus is on the builder to review the program requirements for what you want to run. Of course I haven't read the thread where this system was recommended, but it seems like an odd jab.

But to get back on topic with your original question, these 8+GHz world records are set using CPU-Z validation, which is a very light load intended only to verify the clock speed. These records are not set with stable, or even benching-stable, overclocks, and I imagine any attempt to complete a realistic productivity workload on one of these world-record LN2 systems would be equal parts infuriating and hilarious.

Here's how "productivity" works in the real world:

1. You buy something like DaVinci Resolve (or RENT something like Adobe After Effects and Premiere) five or ten years ago. (We'll say 2012.)

2. Over time there are upgrades. Some more... some less significant in terms of System Requirements.

3. Then there are massive technological LEAPS.

4. The manufacturer is given the choice of optimizing their software for the new technology (which is what BlackMagic did with Resolve when the M1 chip came out) or giving a more subtle update to the 10 million or so steady customers they've had for the past ten years (which is Adobe's typical move with Premiere and After Effects.)

5. Since using Resolve outside of the massive "studio" (think Warner Brothers, Sony Pictures, etc...) level is relatively new and the install base is still a fraction of Adobe's... BlackMagic can afford to make massive changes in minor .1 updates.

6. So... one minute... you've been using the software with all the bells and whistles just fine for the past three or four years... and a .1 update later... you can barely run it at all.


At that point there ARE NO reviews. BlackMagic will drop the software the second it's ready. Update at your peril. But you updated the last twelve times over three years and everything was fine, right? NOT THIS TIME!

So yeah... MAYBE the onus is on the user to inform themselves... but that's only when such information is available. (Also... I miswrote earlier... 32GB wasn't the minimum... it was the RECOMMENDED amount of RAM. Apologies for the confusion.)

But to get back on topic with your original question, these 8+GHz world records are set using CPU-Z validation, which is a very light load intended only to verify the clock speed. These records are not set with stable, or even benching-stable, overclocks, and I imagine any attempt to complete a realistic productivity workload on one of these world-record LN2 systems would be equal parts infuriating and hilarious.

I think it's hilarious that people are still chasing "clock speed" in 2022. I mean, what does "8GHz" even MEAN now? That was sort of the heart of my question (trying to get back on topic): what could anyone actually DO with 8GHz? Your answer seems to be: "Nothing."

So then what's the point? I seem to remember tricks where you could get like a ten-year-old laptop to read 12GHz... or more... just long enough for you to grab the screenshot. This was probably cute 20 years ago... but what good does it do you now, when there are so many more variables that go into PC performance?

I guess that's where my overall confusion comes from. What IS the most important variable now? Is it the process node (nm)? The energy efficiency? Clock speed? Bus speed? RAM speed?

What variable actually means anything these days?

Back in the Core 2 Duo / E6x00 days... things were much simpler. Your overclocked E6800 or whatever at 5GHz should beat the brakes off of my overclocked E6400. (Given the same RAM, motherboard, etc...)

Now it's as you've said... hot potato with the crown.
 
I mean, it is what it is. It's a new world record, one that hasn't been broken since Bulldozer, a quad-core that was released 10 years ago. It's an overclock, and many of us here at Overclockers.com happen to be interested in overclocks. The same could be said for any extreme OC: why is someone going to spend hours slathering PC hardware in Vaseline, setting up LN2 pots that can't be used on any normal system, foam insulation, socket heaters, etc., only to pour LN2 onto a PC, chase some insane clocks, optimize for one specific benchmark, and then tear it all down and call it a day as soon as a little condensation makes it into the socket?

They do it for entertainment, as a hobby. Nobody said it was going to make a difference in real-world applications. The fact that the newer processors do make a difference in real-world applications is a different subject entirely.

Regarding your software issue, it sounds like nobody could have recommended a better or worse system unless they were an engineer at Apple working on the M1 or an engineer working on that software. It's tough that you're doing business with a company like that. On the other hand, if most of their customer base is cutting-edge and goes out and buys the new hardware the day it's released, it behooves them to cater to that, even if it shanks the little guy who spent months' or years' worth of savings on hardware and software, because it's going to save the big studios millions or billions.

Regarding comparing the E6800 to the E6400, you'll have to forgive me if I'm missing something, because I had a working system and was busy with school in the Core 2 days, but it sounds like comparing a faster vs. a slower CPU in the same family with the same architecture (even the same cache and same bus speed), so yes, it would come down to clock speeds. But different architectures have always been better or worse at certain workloads.

Back in the Athlon XP days, it was said that AMD CPUs had a shorter pipeline (fewer stages each instruction had to pass through) and would compensate for lower clock speeds in workloads that didn't have good branch prediction (because the longer pipeline would have to be "flushed" out if a branch was mispredicted, causing a greater penalty for the higher-clocked Intel CPU with the longer pipeline). Of course, this was my understanding of something, and how I remember it 15 years later, and I was never an engineer of any sort, much less a CPU engineer, so please forgive any inaccuracy in this description. All I am trying to say is that different CPU architectures have always performed differently under different workloads. There were definitely times with more or less competition in the market - perhaps Core 2 was one of those times - so it makes sense that people didn't have to worry about application-specific performance if there was only one decent choice in the first place.
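Out of curiosity, here's a tiny C sketch of the branch-prediction effect described above (purely illustrative - the program is mine, not from any benchmark in this thread). The loop is identical in both runs; the only difference is whether the branch inside it is predictable. Compile with light optimization (e.g. gcc -O1 branch_demo.c), since aggressive optimizers may replace the branch with a conditional move and hide the effect.

    /* branch_demo.c - a minimal sketch of branch-misprediction cost.
     * Sums only the values >= 128, first over random (unpredictable branch)
     * and then over sorted (predictable branch) data. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24) /* ~16.7 million ints */

    static long long sum_large(const int *data, int n) {
        long long sum = 0;
        for (int i = 0; i < n; i++) {
            if (data[i] >= 128) /* the branch the CPU has to predict */
                sum += data[i];
        }
        return sum;
    }

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        if (!data) return 1;
        for (int i = 0; i < N; i++)
            data[i] = rand() % 256; /* random: branch taken ~50% of the time */

        clock_t t0 = clock();
        long long s1 = sum_large(data, N);
        printf("unsorted: %.3f s (sum=%lld)\n", (double)(clock() - t0) / CLOCKS_PER_SEC, s1);

        qsort(data, N, sizeof *data, cmp_int); /* sorted: branch becomes predictable */

        t0 = clock();
        long long s2 = sum_large(data, N);
        printf("sorted:   %.3f s (sum=%lld)\n", (double)(clock() - t0) / CLOCKS_PER_SEC, s2);

        free(data);
        return 0;
    }

On most hardware the unsorted run typically takes noticeably longer purely because of mispredicted branches, which is exactly the longer-pipeline penalty being described: every miss forces the pipeline to be flushed and refilled.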
 