
FRONTPAGE AMD Ryzen 9 3900X and Ryzen 7 3700X CPU Review

ED, prelim data shows that BF5 has increased per-core utilization when it comes to RT. Comprehensive tests with RT are scarce, though, and it may require personal testing to validate these answers.

My next build is targeted for Cyberpunk 2077, which is coming out (hopefully) in April. There won't be any better cards out for quite some time, or at least rumor has it. So I'm looking to pair a well rounded CPU with a 2080Ti and try to anticipate the work load utilization between the CPU and GPU. Difficult, stupid, and above all moronic, but what's a computer enthusiast to do for 6+ months?
BF5, regardless of RT being enabled, can utilize (not use) several cores and threads already. I've never tested it out, but I don't imagine the use to be that much more.


Unrelated, I agree with mr alpha's assessment of things and to go 3700x. It's going to take a while to get there, but an 8c/16t CPU will last at least a few years without being core/thread starved.
 
Does any game utilize 6 or more CPU cores? If it doesn’t utilize all 8 or 12 cores, why step up?

For Cyberpunk 2077 interested folks:
The specs on the computer they played the E3 2019 demo:
CPU: Intel i7-8700K @ 3.70 GHz
Motherboard: ASUS ROG STRIX Z370-I GAMING
RAM: G.Skill Ripjaws V, 2x16GB, 3000MHz, CL15
GPU: Titan RTX
SSD: Samsung 960 Pro 512 GB M.2 PCIe
PSU: Corsair SF600 600W

Got a bit nervous, but if they're doing demos at E3 of course they are going to use a fairly monster setup, gotta hype the eye candy and all that, lol.

Got to make a judgment call over the weekend. Stuck down near the Rockville MC for work again. To do a full upgrade or a partial upgrade, tis the question...
 
Does any game utilize 6 or more CPU cores? If it doesn’t utilize all 8 or 12 cores, why step up?

Don't forget it's not just the game for a large number of us. For example, I play World of Warcraft (the new expansion added working DX12, which uses all cores available depending on load) + stream with OBS + almost always have music/film and a browser on in the background. Some have 2+ monitors for multitasking. For me specifically a 3700x would be much more useful than a 3600x - but a 39**x would be complete overkill I guess [emoji39]

You're really asking "why step up" in a PC enthusiast overclocking forum? Because science ofc [emoji16]
 
Don't forget it's not just the game for a large number of us. For example, I play World of Warcraft (the new expansion added working DX12, which uses all cores available depending on load) + stream with OBS + almost always have music/film and a browser on in the background. Some have 2+ monitors for multitasking. For me specifically a 3700x would be much more useful than a 3600x - but a 39**x would be complete overkill I guess [emoji39]
I did not consider the background stuff in addition to the game itself. :facepalm:

You're really asking "why step up" in a PC enthusiast overclocking forum? Because science ofc [emoji16]
Yes, very true. TBF, we were discussing practical applications of going up to the 8 or 12 core chips, which you did explain very well. TY. :thup:

Got to make a judgment call over the weekend. Stuck down near the Rockville MC for work again. To do a full upgrade or a partial upgrade, tis the question...
Still plenty of time before its release, and the PC specs will be shared by then. I'm probably going to hold out for Black Friday/Cyber Monday. That's more time for any new hardware (looking at you, AMD, vs the 2080 Super) and time to iron out any kinks with all the new hardware that has been released recently.
 
One other thing I forgot to mention: I keep seeing more and more programs that sit in the background (near the clock) with settings like "use hardware acceleration". Even though they are technically more or less paused while minimized, some of them are still using resources. One good example is the Blizzard launcher: if you disabled hardware acceleration there, there used to be a noticeable speed-up in loading times in WoW and a very light fps bump. I would imagine that others like Twitch/Discord/Origin/Steam won't be far off from the same behavior. They seem to use both CPU and GPU time.
 
New bios got put up for the Asrock B450 ITX board I have, so I'll be trying that out shortly. Let's see if that improves the ram situation going from AGESA Combo-AM4 1.0.0.1 to 1.0.0.3. My X370 board is on 1.0.0.2 and does much better, but how much is board, how much is bios?

Also, LTT put up a Ryzen RAM speed test on their early access platform. The very short version was that slow (2133) but low latency RAM can do well if you crank up the IF, so there are still hints of that being a significant performance influence. 3600 was the sweet spot as AMD stated, and above that, with decoupled IF, you win some, you lose some. Aggressively reducing timings at a decent speed also gave benefits. Tests were gaming focused, apart from Cinebench R20, which didn't really care, no surprise.
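For anyone unclear on the coupled vs decoupled IF point above, here's a minimal sketch of the relationship as I understand it on Zen 2: in 1:1 mode the fabric clock (FCLK) matches the memory clock (half the DDR transfer rate), and past a per-sample FCLK ceiling the controller drops to 2:1, which adds a latency penalty. The 1800 MHz ceiling here is an assumption for illustration; actual limits vary per CPU and board.

```python
# Hedged sketch: Zen 2 Infinity Fabric vs memory clock coupling.
# Assumption: FCLK ceiling of 1800 MHz (varies per CPU sample/board).

FCLK_CEILING = 1800  # MHz, illustrative value

def fabric_clock(ddr_rate_mts):
    """Return (fclk_mhz, mode) for a given DDR transfer rate in MT/s."""
    memclk = ddr_rate_mts / 2            # DDR: two transfers per clock
    if memclk <= FCLK_CEILING:
        return memclk, "1:1"             # coupled: lowest latency
    return memclk / 2, "2:1"             # decoupled: latency penalty

for rate in (3200, 3600, 4000):
    fclk, mode = fabric_clock(rate)
    print(f"DDR4-{rate}: MEMCLK {rate // 2} MHz, FCLK {fclk:.0f} MHz ({mode})")
```

That's why DDR4-3600 is the sweet spot: it's the highest common speed that still keeps FCLK coupled 1:1.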
 
Asrock B450 Gaming-ITX/ac BIOS 3.40 with AGESA Combo-AM4 1.0.0.3 made a big difference with Kingston HyperX HX440C19PB3AK2/16 RAM over 1.0.0.1. With the older BIOS, it wasn't booting at the XMP3600 profile even when reduced to 3400. With the 1.0.0.3 BIOS, it is booting both the XMP3600 and XMP4000 profiles and seems stable in the little use I had of it. Note this part number is different from the otherwise similar-looking RAM Woomack tested; mine has the extra "A" in the part number, so maybe it is a later revision.

A quick AIDA run at BIOS-picked timings; note the increase by 1 to even values. It wasn't doing this at 2400, where it was running C17, but I didn't run a test of that.

[Attachment: aidaram-3600.png (AIDA64 memory benchmark at XMP3600)]

[Attachment: aidaram-4000.png (AIDA64 memory benchmark at XMP4000)]

That crippled write bandwidth makes me sad.


I also did some Prime95 tests. Like I saw on Intel, 3600 with tighter timings is giving better results than 4000, even where bandwidth should be limiting (notation below: 6c2w = 6 cores, 2 workers).

1024k FFT, 6c2w, 0.8% faster
4096k FFT, 6c1w, 3.5% faster
5120k FFT, 6c1w, 6.8% faster

At 1024k FFT, the work better fits in each CCX so two workers gave highest throughput. It should fit in the L3 cache so ram shouldn't affect performance, and we see a small insignificant difference.
At 4096k FFT, the work borderline fits in the total L3 cache, strictly with non-FFT data it will exceed it, so there is some ram access. We see more of a benefit here.
At 5120k FFT, we're for sure exceeding the L3 cache. With more ram traffic, we see more of a difference again.
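The cache-fit reasoning above can be sanity checked with some quick arithmetic. Assumptions: double-precision FFT data (8 bytes per element) and the R5 3600's cache topology (16 MB L3 per CCX, 32 MB total); real P95 working sets include extra buffers, so treat these as lower bounds.

```python
# Rough working-set check for the P95 FFT sizes discussed above.
# Assumes 8 bytes/element and R5 3600 caches: 16 MB L3/CCX, 32 MB total.

L3_PER_CCX_MB = 16
L3_TOTAL_MB = 32

def fft_working_set_mb(fft_k):
    """Approximate data size of a P95 FFT of fft_k * 1024 elements, in MB."""
    return fft_k * 1024 * 8 / (1024 * 1024)

for k in (1024, 4096, 5120):
    mb = fft_working_set_mb(k)
    fits = ("fits one CCX L3" if mb <= L3_PER_CCX_MB
            else "borderline vs total L3" if mb <= L3_TOTAL_MB
            else "exceeds total L3")
    print(f"{k}k FFT ~ {mb:.0f} MB -> {fits}")
```

That lines up with the results: 8 MB fits a CCX (RAM barely matters), 32 MB is borderline against total L3, and 40 MB spills into RAM, so memory tuning shows up more.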

My guidance for ram speeds, which at the time I had only tested between 2133 and 3200, was that bandwidth was king, and latency hardly made a difference. Maybe at these higher bandwidths, latency is more important than I saw lower down. Gonna have to re-evaluate my guidance there once I get more data.


Anyway, next steps I'll have a play and see if I can get the ram running at lower latency at either speed. I'll concentrate more on 3600 though.
 
[Attachment: aidaram-3600custom1.png (AIDA64 run with custom 3600 timings)]

Not optimised by any means, and it needed 1.5v to stabilise, but I'm not complaining. Note reads/copy went up, but so did latency. Hmm... maybe I turned some values up too far when trying to stabilise. It seemed good old tRFC and voltage were the two that made the most difference to stability. I'm dabbling with TestMem5 as a stability check, any comments on that? It seems to run and detect errors faster than aida64.

This setting got me another 5.4% throughput improvement over XMP3600 at 5120k FFT, and 3.3% at 4096k. Gonna get some sleep, but in the morning I'll have another go with the timings, and maybe try same at 4000 or higher. This new bios certainly has opened the world of high speed ram to these new CPUs.
 
I don't frequent their forums much, who is Shamino? Do they work at Asus or just an advanced user?

Been playing a bit more with RAM this morning. Not much progress to report. Tried tightening timings even more at 3600, but that just brought errors back, so I'm calling that done. Tried booting at 4200 with XMP4000 settings, no boot. Currently testing lower timings at 4000, to see how far I get with that.

Also played a little with IF OC. Is there any consensus on OCing that yet? On my 3600, it seems stable at 1866 with P95 loading up the core-L3 connectivity, but 1900 is definitely unstable. Seeing as the IF is outside the cores, I wonder if SoC voltage could help there?
 
Ok, I don't follow the extreme overclocking scene so basically the only one I know is der8auer. He does make it onto many youtube videos outside his own channel, which I can't watch as he's really boring :)

Back to my Zen 2 adventures, I think I need to take a break soon, before I lose whatever little sanity I have left. I continued on trying to get the timings down at 4000, only achieving 16-17-17 at 1.45v. Gave a marginal improvement over XMP4000 but 3600 stock or tight timings just runs better overall. Maybe 4000 would have more to gain from tighter secondary/tertiary timings but I've not the motivation to find out right now.
 
I think I posted a ways back that 3600 cl 16 and 4400 CL 17 was the break-even point in my testing
 
For me, with the R5 3600 it was more like 3600 CL16 vs 4600-4700 CL20-22 on Micron ICs. I guess it's the crippled write bandwidth on lower chips that makes this difference. I haven't seen any significant difference between CL16, 18, and 20: +/- 2ns latency, but it doesn't matter much. What matters most is to hit 3600-3800 mem clock stable. In that case, the more popular 3600 CL18 kits seem a good idea looking at the performance-to-price ratio.
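The "+/- 2ns" remark checks out with the standard first-word latency formula: CAS latency in nanoseconds is CL cycles divided by the memory clock, i.e. CL * 2000 / (DDR transfer rate). A quick sketch:

```python
# Back-of-envelope CAS latency in ns for DDR4 speed/CL combinations.
# ns = CL cycles / memory clock (MHz) = CL * 2000 / transfer rate (MT/s).

def cas_ns(ddr_rate_mts, cl):
    """First-word CAS latency in nanoseconds."""
    return cl * 2000 / ddr_rate_mts

for rate, cl in ((3600, 16), (3600, 18), (3600, 20)):
    print(f"DDR4-{rate} CL{cl}: {cas_ns(rate, cl):.2f} ns")
```

CL16 through CL20 at 3600 spans roughly 8.9 to 11.1 ns, a bit over 2 ns end to end, which is why the CL difference barely registers next to hitting a stable 3600-3800 memory clock.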

On Gigabyte, so far I couldn't stabilize memory at 3800 1:1. It seems the IF clock is too high, and nothing is helping to stabilize it. 3733 runs 100% stable, and problems start at 3760+. It passes even more demanding benchmarks at 3800 but crashes in ~15 mins in stability tests.
I set my R5 3600 at 4.1GHz (102x40.25) 1.285V (software reading), mem @3733 16-18-18 1.35V. All in this ITX case (it was on sale :) ) https://en.sharkoon.com/product//CAI with a Noctua D9L cooler. It still hits 90°C+ under max load, but I guess it's my CPU, as no matter what I do, coolers designed for 150-180W TDP give me 90-100°C+. The Noctua D9L was about 10°C better than the Cryorig C1.

Ok, I don't follow the extreme overclocking scene so basically the only one I know is der8auer. He does make it onto many youtube videos outside his own channel, which I can't watch as he's really boring :)

Think of it like this: der8auer is an extreme overclocker who makes mostly marketing stuff, while Shamino was an extreme overclocker who was and is also a hardware engineer, and he was behind every popular OC brand series. Without him, some popular series from brands like Foxconn wouldn't exist (Foxconn OC and consumer boards haven't existed since he left). When he moved to EVGA, he started a revolution there, and later he moved to ASUS, where he was pretty much what made the ROG series so popular in its first years.
I remember Shamino when he was on VR-Zone (when it was still enthusiast forum and not only news website). He was publishing a lot of hardware mods and was helping many users.
Let's say that Shamino is one of not many overclockers that I really respect and the list is not really long.


Btw, it looks like Gigabyte doesn't like Micron at tight timings. On Intel Z390 the same memory was working better, like 3600 15-15-15 or 3733 16-16-16; on X570, 16-18-18 is the lowest I can set.
I started some tests with Samsung /HyperX 4000C19 on X570. So far just first boot at 3733 16-16-16 1.35V.
 
I set my R5 3600 at 4.1GHz (102x40.25) 1.285V (software reading), mem @3733 16-18-18 1.35V. All in this ITX case (it was on sale :) ) https://en.sharkoon.com/product//CAI with a Noctua D9L cooler. It still hits 90°C+ under max load, but I guess it's my CPU, as no matter what I do, coolers designed for 150-180W TDP give me 90-100°C+. The Noctua D9L was about 10°C better than the Cryorig C1.

I think I have the micro-ATX version of that case. If so, the airflow is awful, and the images for that one don't look any better. Did you modify it at all?

Think of it like this: der8auer is an extreme overclocker who makes mostly marketing stuff, while Shamino was an extreme overclocker who was and is also a hardware engineer, and he was behind every popular OC brand series. Without him, some popular series from brands like Foxconn wouldn't exist (Foxconn OC and consumer boards haven't existed since he left). When he moved to EVGA, he started a revolution there, and later he moved to ASUS, where he was pretty much what made the ROG series so popular in its first years.
Is he working for Asus now?

I only have a loose interest in really extreme OC, like modifications on the level of replacing the power delivery. My career started in electronic engineering, but I'm way past my interest point to do anything so deep nowadays.
 