
Low core count Rome gen. AMD EPYC CPU's: some questions


magellan

Member
Joined
Jul 20, 2002
I've been looking at some older-model AMD EPYC CPUs (2nd-gen "Rome" EPYC 7002 series). They're still expensive (~$460), but they have 128 lanes of PCIe 4.0, 128MiB of L3 cache, 8 cores/16 threads, and 8 DDR4 memory channels. Comparatively, the Ryzen 7 7800X3D is $449.00, but has far fewer PCIe lanes and memory channels.

1. Considering the gigantic L3 cache (the same size as the 7950X3D's), how would such CPUs work for gaming?

2. Would it be possible to tune the 8 DDR4 memory channels so you had all the bandwidth of the latest dual-channel DDR5 rigs but less latency?

3. Do the motherboards for such CPUs still use standard ATX PSU connectors?

4. Are the motherboards for such CPUs EATX, or some specialized format?
 
1. Not well? There are significant differences in IPC and clock speed between the 7000X3D chips and these dated crossover server parts, and those differences make up the bulk of gaming performance. I'd bet good money that even the non-X3D 7000-series chips are better gaming chips than dated EPYC, despite the significant cache difference. IIRC, those EPYC chips are slow, as in most of that generation doesn't reach 3 GHz (remember, it's a server chip, meant to go wide, not necessarily fast) compared to over 5 GHz today, and they still use a lot of power for the speed they bring.

I'd only buy that kind of system (or what was known as HEDT) if I could utilize the 128 PCIe lanes, which 99% of users like us simply can't... unless you're stacking your PC with high-bandwidth AICs, but then you're getting outside what most would consider normal home-use cases (rocking data-center-class hardware/functionality).

2. Dunno offhand... however, that platform is quite limited in memory speed (officially up to DDR4-3200) compared to tuned non-server DDR4, and of course DDR5. Unsure if the eight channels make up for it. That said, surely there are some AIDA screenshots around the web that may shed some light on the situation. Capacity/# of sticks limits overclocking headroom as well.

3. Many, (most? All?) do, yes.

4. Varies by board... pretty sure there are ATX-size boards (Gigabyte has one I can think of) along with other form factors. But you can easily check that out on the manufacturers' websites too. ;)

EDIT:
Please correct me if I am wrong here, but for this thread: if the chip had 128MiB of cache as you say, that would mean it has ~134MB, which is not how AMD (or anyone) reports it; the chip is listed as 128MB. I believe the correct way to say it, if you insist on using binary units, is ~122MiB, since it is advertised at 128MB. Same with storage: when you say 500GiB, that's actually ~537GB, while drives are advertised at 500GB. A 500GB drive is ~466GiB, not 500GiB. At least to me, when I see those terms used like this, it comes off as confusing or just plain old incorrect.
 
Server/workstation CPUs are designed for more threads and more RAM channels, but that only helps when multiple tasks are hitting RAM at the same time and loading data frequently.
If you test with AIDA64, which is optimized for multithreading in its latest versions, you see the total/maximum theoretical bandwidth. Games don't work exactly the same way. Run winsat mem from the command prompt and you will see that memory bandwidth comes in lower than the AIDA64 results.

Put another way, server CPUs can't use their maximum theoretical bandwidth in games, which you can see even comparing 2 vs. 4 memory channels on desktop processors: there is barely any difference.
What counts the most in games is single-thread CPU performance (IPC helps CPUs with lower frequencies), storage read/data access, CPU cache performance (usually related to CPU frequency), RAM read/copy bandwidth, and access time (usually a balance between them is optimal).
Server CPUs have low base frequencies, low single-core performance, higher RAM latency, and delays caused by the memory controller. A large cache usually means it's slower or has higher latency.
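For a rough sense of the numbers being discussed, theoretical peak DRAM bandwidth works out to channels × transfer rate × bus width. A quick sketch (the DDR4-3200 and DDR5-6000 configurations are illustrative examples, and as noted above, games see far less than these peaks in practice):

```python
def peak_bandwidth_gbs(channels, mts, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s: channels * MT/s * bytes per transfer."""
    return channels * mts * 1e6 * bus_bytes / 1e9

# 8-channel DDR4-3200 (EPYC Rome at its top supported speed)
print(peak_bandwidth_gbs(8, 3200))   # 204.8 GB/s
# Dual-channel DDR5-6000 (a typical tuned desktop)
print(peak_bandwidth_gbs(2, 6000))   # 96.0 GB/s
```

So on paper the old server platform still has roughly double the bandwidth of a fast DDR5 desktop; the catch, per the post above, is that single-threaded game workloads can't come close to saturating it.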
 
+1 to Joe's post for reference:
1 MiB = 1,048,576 bytes ≈ 1048.58 KB
1 MB = 1000 KB = 1,000,000 bytes
1 MB ≈ 0.953674 MiB
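The conversions above are easy to sanity-check; a quick sketch using the decimal (SI) vs. binary (IEC) definitions:

```python
# Decimal (SI) vs binary (IEC) size units, in bytes
MB  = 1000**2   # megabyte
MiB = 1024**2   # mebibyte
GB  = 1000**3
GiB = 1024**3

print(128 * MiB / MB)   # 128 MiB expressed in MB  -> ~134.22
print(128 * MB / MiB)   # 128 MB expressed in MiB  -> ~122.07
print(500 * GB / GiB)   # a "500 GB" drive in GiB  -> ~465.66
```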
 
Memory sizes are ALWAYS quoted in binary, not decimal, because the address space of a CPU is binary, not decimal. Furthermore, expressing a cache size in decimal would be ludicrous because cache lines are organized in binary; otherwise you would have cache lines that weren't a full 64 bytes in size.
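To illustrate why the binary organization matters, here's a sketch of how a set-associative cache splits an address into tag/index/offset fields. The parameters (1 MiB, 8-way, 64-byte lines) are hypothetical, chosen only to show that the bit masks and shifts require power-of-two sizes:

```python
# Hypothetical cache geometry: 1 MiB, 8-way set-associative, 64-byte lines.
LINE_SIZE  = 64                                 # bytes per line (2**6)
WAYS       = 8
CACHE_SIZE = 1 * 1024 * 1024                    # 1 MiB
SETS       = CACHE_SIZE // (LINE_SIZE * WAYS)   # 2048 sets

OFFSET_BITS = LINE_SIZE.bit_length() - 1        # 6 bits of byte offset
INDEX_BITS  = SETS.bit_length() - 1             # 11 bits of set index

def split_address(addr):
    """Decompose an address via power-of-two masks/shifts, as cache hardware does."""
    offset = addr & (LINE_SIZE - 1)
    index  = (addr >> OFFSET_BITS) & (SETS - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0x10040))   # byte 0 of set 1, tag 2
```

If any of those sizes weren't a power of two, the cheap mask-and-shift decode above would need actual division hardware, which is the "ludicrous" case the post describes.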
 
On (old) EPYC having a large cache: it is still NOT the same as an X3D part. The cache on the EPYC CPUs is spread out over all the chiplets and does not behave like a single lump. On X3D CPUs, the extra cache is attached to the existing cache on the die, so it does behave like a single lump. Keeping the data together near the cores is important for gaming. With EPYC CPUs, to use all the cache you have to use all the cores, and we already know that is not optimal for gaming even on the 12+ core consumer parts.
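As a back-of-the-envelope illustration of the "spread out" point (the CCX counts below are assumptions for the sketch, not any specific SKU's topology):

```python
def l3_visible_per_core(total_l3_mib, num_ccx):
    """L3 a single core can actually allocate into, assuming L3 is sliced per CCX."""
    return total_l3_mib / num_ccx

# Hypothetical low-core-count Rome part: 128 MiB sliced across 8 CCXs,
# so one game thread only ever fills its own 16 MiB slice.
print(l3_visible_per_core(128, 8))   # 16.0
# X3D-style part: one unified 96 MiB pool, all of it visible to one core.
print(l3_visible_per_core(96, 1))    # 96.0
```

On those assumed numbers, the "smaller" X3D cache gives a single game thread several times more usable L3 than the nominally bigger server cache.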

On the GB vs GiB thing: through historic convention many places use the "wrong" units, but it is kinda taken as a given. RAM values are GiB but listed as GB without conversion. Storage is sold as GB, but Windows shows GiB values labelled as GB, which led to storage manufacturers losing lawsuits in the past when people thought they were sold less storage than advertised. This is only my guess, but I think part of the conflict is that the computer industry has been around for a very long time, and the IEC didn't define the binary units until 1998. By then most of the industry was entrenched in how it had always done things and wasn't about to change.
 
Memory sizes are ALWAYS quoted in binary, not decimal, because the address space of a CPU is in binary not decimal. Furthermore to express a cache size in decimal would be ludicrous because the cache lines are organized in binary, otherwise you would have cache lines that weren't a full 64 bytes in size.
You're viewing this from the perspective of a PC. Right it is, but it's hella confusing to say it that way, considering decimal has been thrown in our faces for decades.

I just laughed because if you search OCF for GiB, 95% of the posts are yours, the others used 'right' with memory... it's so rarely used it just throws simpletons like me off, I guess. :shrug:

Like... 500GB ≠ 500GiB. It's not a 500GiB drive... it's a 500GB drive that Windows, 'cause binary, sees as ~466GiB (or w/e).

By then I think most of the industry was mostly entrenched in how they always did things and weren't about to change.
*Stands up on porch with shotgun in hand* GET OFF MY LAWN! CALL IT GB AND MB (except for memory) LIKE EVERYONE ELSE! :rofl:

Furthermore to express a cache size in decimal would be ludicrous
AMD is NUTS!

(attached screenshot: 2.jpg)

EDIT: Anyway, not sure I get it, but I'm riding with the 99% of people and the advertising on this one: 500GB will never equal 500GiB in my mind! :rofl:
 
The new AMD EPYC CPUs take up to TWELVE channels of DDR5 and 128 lanes of PCIe 5.0! But all that goodness costs nearly $4k for the 16 core/32 thread model. I don't see how Intel can compete w/AMD at all in the server market.

There's also the AMD EPYC 72F3, which has a relatively slow base clock of 3.7 GHz but can support up to 3200 MT/s on its 8-channel DDR4 memory, for a theoretical bandwidth of 204.8 GB/s. All that for almost $2500.

ED, look at the 7371 EPYC Workload Affinity!

It's too bad the boost clocks are all so low for the EPYC line of CPUs.

If there were PCIe 5.0 video cards, I'd imagine they would be capable of taking advantage of the massive memory bandwidth of EPYC CPUs, and bus/memory contention issues would be negligible compared to a dual-channel setup.

Is there any way to overclock AMD Threadripper or EPYC CPUs, or their memory?
 
The new AMD EPYC CPUs take up to TWELVE channels of DDR5 and 128 lanes of PCIe 5.0! But all that goodness costs nearly $4k for the 16 core/32 thread model. I don't see how Intel can compete w/AMD at all in the server market.
Because server loads are varied. AMD's approach works well for many, but not all, of them. Sapphire Rapids is much more "monolithic-like" than AMD's offerings, which will impact workload scaling. I'm not in the market for current server-level kit, but AMD's configuration would generally scale worse for my Prime95-like workloads than Sapphire Rapids' implementation, due to internal bandwidth limits in AMD's case.

ED, look at the 7371 EPYC Workload Affinity!
The "gaming" part? I wonder if that was for cloud gaming offerings?

If there were PCIe 5.0 videocards I'd imagine they would be capable of taking advantage of the massive memory B/W of EPYC CPUs and bus memory contention issues would be negligible when compared to a dual channel setup.
It would be a really interesting test to pit a "high bandwidth" memory CPU against the consumer 3D-cached offerings. IMO CPUs have been starved of memory bandwidth for a long time, which is why the cache was enlarged to get around it, although to some extent latency may play a role too.

Is there any way to overclock AMD Threadripper or EPYC CPUs, or their memory?
I think TR non-Pro could, but it was a long time ago and I can't remember exactly. The last ones they made were Zen 2, so pretty old by current standards. Don't know if the Pro models were locked or not, but I think the server ones are.
 
I don't see how Intel can compete w/AMD at all in the server market.
Without a doubt AMD has been making a comeback in the server market, roughly doubling its share from Q1 '21 to today. Intel still has a heck of a hold, though, retaining a significant majority of the market (AMD had less than 20% according to an article citing Mercury Research). That said, Intel can certainly compete, but AMD is slowly eroding its share.

ED, look at the 7371 EPYC Workload Affinity!

It's too bad the boost clocks are all so low for the EPYC line of CPUs.
I did... but a base 7000-series chip would still best that CPU in gaming, easily, I'd imagine... 5 GHz clocks, a notable improvement in IPC, DDR5 (FWIW). Really, the only draw to those platforms is if your workflow can actually utilize what the platform brings. It's nice to see big numbers, but most of us are fine with a quarter of that, and nothing else is a draw compared to modern systems. Just know what you're getting into.

If there were PCIe 5.0 videocards I'd imagine they would be capable of taking advantage of the massive memory B/W of EPYC CPUs and bus memory contention issues would be negligible when compared to a dual channel setup.
I'd imagine for HPC/compute/etc. (and I'm not sure how many consider that a personal-PC use, or a common one). For gaming, I'm not sure it would matter. Even a 4090 only loses ~2% FPS when dropping back to PCIe 3.0 x16.
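For context on those link speeds, per-direction PCIe bandwidth can be approximated as lanes × raw transfer rate × encoding efficiency; a quick sketch (gens 3 through 5 all use 128b/130b encoding):

```python
def pcie_gbs(gen, lanes):
    """Approximate usable PCIe bandwidth per direction, in GB/s."""
    raw = {3: 8.0, 4: 16.0, 5: 32.0}[gen]   # GT/s per lane for each generation
    eff = 128 / 130                          # 128b/130b line-encoding efficiency
    return lanes * raw * eff / 8             # 8 bits per byte

print(round(pcie_gbs(3, 16), 2))   # ~15.75 GB/s
print(round(pcie_gbs(4, 16), 2))   # ~31.51 GB/s
print(round(pcie_gbs(5, 16), 2))   # ~63.02 GB/s
```

Each generation doubles the x16 link, so a card that loses only ~2% FPS at gen 3 speeds has enormous headroom on a gen 4 or gen 5 slot.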

Is there any way to overclock AMD Threadripper or EPYC CPUs, or their memory?
I wouldn't bet my life on it, but mack's got it: TR non-Pro could, EPYC could not. But here again, it's old tech with lower IPC and slow clocks. Want to play a game and assign some cores to render something on the CPU... there ya go. LOL!
 