
AMD Ryzen 9 3950X vs Intel Core i9-10980XE


DaPoets

Member
Joined
Aug 23, 2007
Currently I'm running a delidded 8700K @ 5.3 GHz on a full custom loop with three radiators cooling it. I do some gaming (Battlefield V, Planetside 2, maybe ArcheAge), but lately I'm learning more about video editing: my wife is an actress and some of her auditions are "self tapes" that we record and edit (sound, color correction, cuts, etc.), and I'm starting a YouTube channel for my own investment firm business. I've been using the Note 10 Plus for recording and editing videos on the go, and it's surprisingly powerful for that, but for more marketing-oriented videos I'll be using the PC.

So my usage is roughly 10-20% gaming, 20-40% video editing, and the rest is random Plex transcoding (about 15 TB of movies/TV shows) and MS Office applications for work.

Last bit, yes I plan on overclocking until it smokes.

AMD doesn't seem all that fun or OC-capable in general, meanwhile word on the street is the i9-10980XE is hitting 5.1 GHz ALL CORE on water. If that's true, the 10980XE will simply be a gaming/rendering monster.

Any thoughts? I'm leaning toward the Gigabyte X299X Master w/ the i9-10980XE.

(yes, either is overkill, but which is more overkill for my needs so I'm happy?)
 
No real thoughts here until either CPU gets released, benchmarked, and overclocked. Either way, single-threaded or multi-threaded, the Intel will beat, or at least match, the lower-core-count CPU. How much you value whatever difference there may be is up to you... and of course we need to see the benchmarks.
 
Yeah, I was considering ordering all of the parts on day-1 availability, so I just wanted to get a feel for what people think. I'll be posting as many benchmarks here as I can when I get one of them.
 
The 10980XE will have more cores and memory channels and will outperform the 3950X, but it's going to come at a significant price difference.
 
The 10980XE will have more cores and memory channels and will outperform the 3950X, but it's going to come at a significant price difference.

Yeah that's what I'm kind of thinking about the performance and I'm not concerned with the price. Plus I have been an Intel guy since my 486DX2 days so I am biased still lol.
 
Yup. $750 vs $999 (rumor)??

Given the 3900x situation we have to wonder if the 3950x situation will be any better, at least for the short term once it is released.

It is more difficult than ever to choose. The 3950X is still a consumer-platform part, but the cores are pretty strong now and do make up for its lower clock. I'd still be cautious about 10980XE OC. That reminds me, I still haven't tried putting my recently bought 7920X under water yet; it's proving quite warm on air. Anyway, the Intel side also gets you more RAM channels and the AVX-512 instruction set, should you have anything that can make use of it. Then the Zen 2-based Threadrippers are also supposed to be out around the same time, even more cores for who knows how much, but it should be less than whatever Intel wants.
 
I'm an idiot... I could actually be using AVX-512. I'm an investment advisor and I do some financial analytics, and x264/x265 video encoding can also use AVX-512.
If I'm going to spend the money, why miss out on this interesting feature, I guess.
I think I kept seeing $979 for the i9-10980XE.
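If you want to verify whether a given chip actually exposes AVX-512 before buying software around it, on Linux the CPU feature flags are readable from /proc/cpuinfo. A minimal sketch (Linux-only; the flag names are as the kernel reports them, and `avx512f` is the "foundation" subset every AVX-512 CPU must have):

```python
# Quick check for AVX-512 support on Linux by reading the CPU
# feature flags the kernel reports in /proc/cpuinfo.
def has_avx512(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                # avx512f (Foundation) is present on every AVX-512 CPU
                return "avx512f" in flags
    return False

if __name__ == "__main__":
    print("AVX-512 supported:", has_avx512())
```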
 
https://www.techpowerup.com/260317/...intel-core-i9-10980xe-by-24-in-3dmark-physics

"AMD's upcoming Ryzen 9 3950X socket AM4 processor beats Intel's flagship 18-core processor, the Core i9-10980XE, by a staggering 24 percent at 3DMark Physics..."

"When paired with 16 GB of dual-channel DDR4-3200 memory, the Ryzen 9 3950X powered machine scores 32,082 points in the CPU-intensive physics tests of 3DMark. In comparison, the i9-10980XE, paired with 32 GB of quad-channel DDR4-2667 memory, scores just 25,838 points as mentioned by PC Perspective. Graphics card is irrelevant to this test."
 
How did Zen(+) do on Physics? How does the 3900X do? I'm wondering if this is a natural progression now that they've beefed up AVX performance and thrown more cores/cache at it. In general, what hardware does the Physics test like? If memory serves, Intel HEDT was often used for past records.
 
Another news piece full of pure marketing. Why do people write news like that when it's totally pointless? The 3DMark Physics test says nothing to gamers or pretty much any other users. Even those into competitive benchmarking focus on the total 3DMark score, which is based mostly on graphics card performance. In the total 3DMark score, a higher CPU clock will always be better, not to mention that the 3D tests don't use more than 1-4 CPU threads. If they had at least used the combined test, we could talk about a reasonable comparison.
 
I posted this at TPU... but will here as well....

My 7960X @ 4.4 GHz with 3600 MHz CL16 memory manages 33.3K, compared to the 3950X's 32.0K. I wonder just how low the all-core boost clock is on that part for it to lose by 24%, considering I can beat it with similar clock speeds on last gen's part.

[attached screenshot: 3dm.jpg — 3DMark Physics result]
 
I wonder just how low the all-core boost clock is on that part for it to lose by 24%, considering I can beat it with similar clock speeds on last gen's part.

"Although AMD doesn't mention a number in its specifications, the 3950X is expected to have an all-core boost frequency that's north of 4.00 GHz, as its 12-core sibling, the 3900X, already offers 4.20 GHz all-core. In contrast, the i9-10980XE has an all-core boost frequency of 3.80 GHz. This difference in boost frequency, apparently, even negates the additional 2 cores and 4 threads that the Intel chip enjoys, in what is yet another example of AMD having caught up with Intel in the IPC game."
 
Yes. I know. We just don't know what EXACTLY it is. :)

If the trend of high clocks on more cores/threads continues, I wouldn't be surprised if it was at 4.2-4.4 GHz. That said, its base clock is actually supposed to be LOWER than the 3900X's, so who knows...

... time will tell.

Also, you quoted me asking about the INTEL part... I was wondering what its all-core/thread boost clocks are. But we don't know much about the AMD chip either.
 
In both cases, the boost will vary depending on workload and any other settings relating to boost. For example, in the recent 3700X testing I was doing, I tried stock (88 W PPT) and PBO (OC) on with PPT unlimited, and observed a 138 W max. On the Intel side at stock, it may be running at all-core boost or all-core AVX boost, and power/current limits may kick in. How was PL2/tau set? Was MCE (OC) set?

I think a more controlled test is needed to better understand how those results were obtained and how they fit in with stock or OC behaviour. I wonder if I could estimate them by observing other CPUs in their respective families? That would only work if the test itself is not too RAM-dependent but scales well with cores/clocks.
 
I'm not so much worried about the test per se as I am about knowing the actual clocks achieved on both parts when running it. Since this isn't from AMD, I'd venture a guess that PBO is enabled but not tweaked... which yields almost no performance difference.

Either way, there are a lot of questions about this result... that is for sure.
 
I'd venture a guess that PBO is enabled but not tweaked... which yields almost no performance difference.

Interesting comment... now, I've only done testing with Prime95, and that in itself may be the reason for what I saw. With stock settings (PBO off), the 3700X hits the 88 W PPT limit. I can't remember what clock that was, but I think it was around 3.9 GHz. With PBO on, it went up to 35% of 395 W, or the 138 W value I gave. Clocks certainly went up; I don't have the notes on me right now, but I recall it was well over 4 GHz.

I'd guess that for loads less intense than P95, if the CPU is able to boost to its maximum without exceeding the 88 W PPT, then unlimiting it would make no difference.
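To put numbers on the PPT figures quoted above (a quick arithmetic check on the values from these posts, not a measurement): 35% of the 395 W full-scale value works out to the 138 W reading, roughly 1.57x the stock 88 W limit.

```python
# Arithmetic check on the 3700X PPT figures quoted above.
stock_ppt = 88          # W, stock package power tracking limit
full_scale = 395        # W, full-scale value the monitoring tool reports against
observed_fraction = 0.35  # 35% of full scale observed with PBO on

observed_w = full_scale * observed_fraction
print(f"PBO-on draw: {observed_w:.0f} W "
      f"({observed_w / stock_ppt:.2f}x the stock {stock_ppt} W limit)")
```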
 
P95 isn't something I test performance with, as it's a stress test... but I know you use it. How it 'performs' I have no idea. :)

But yeah, in my experience, going from 'defaults' with memory set (XMP) and nothing else changed except enabling PBO, the most difference I recorded was 3% across a wide variety of testing. Most results were ~1% different... measurable, but not tangible.
 