
Bifurcation for AMD CPUs sucks with X870E

Oh man, ADD moment. I swapped RAID 1 and 0 in my brain. It's RAID 0 that I'm using :)
 
It's been more of a personal experience.
Here's some quantifiable info... :)

But gains are gains, and if I can load into HLL 1 second sooner
I guess you'll get that second... in some titles. But for me, paying 2x for a second drive isn't worth that second... especially in games where I sit in a lobby and wait for it to populate (Battle Royale guy, lol).
 
How many M.2 drives do you need? I just had a look, and on my B650E I get two x4 from the CPU, one each 5.0 and 4.0, and another 4.0 off the chipset. I'm kinda hoping that when 5.0 becomes more common, we'll start seeing more x2 wired connectors and drives, so you can have more drives at reasonable bandwidth.

As for the claims of latency, anyone got some solid numbers for that? I would have thought bandwidth choking would be a bigger problem. You "only" have 4.0 x4 to the first chipset, and dangling off that you have yet more bandwidth consumers sharing the same link. OK, maybe most people won't be moving >8GB/s via the chipset at once.
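For reference, here's a quick back-of-envelope on where that ~8GB/s figure comes from (a sketch that ignores protocol overhead, so real throughput lands a bit lower):

```python
# Back-of-envelope PCIe bandwidth per direction. Ignores packet/protocol
# overhead, so real-world throughput lands somewhat below these figures.
PCIE_GENS = {
    "3.0": (8.0, 128 / 130),   # GT/s per lane, line-encoding efficiency
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def pcie_gbps(gen: str, lanes: int) -> float:
    """Approximate usable GB/s per direction for a generation and lane count."""
    gts, efficiency = PCIE_GENS[gen]
    return gts * efficiency / 8 * lanes  # GT/s -> GB/s per lane, times lanes

print(f"4.0 x4 (CPU-to-chipset link): ~{pcie_gbps('4.0', 4):.1f} GB/s each way")
print(f"5.0 x4 (a Gen5 M.2 slot):     ~{pcie_gbps('5.0', 4):.1f} GB/s each way")
```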

It's not going to happen, but I'd love to see consumer tier offerings restricted to the 5-tier marketing level and lower only. The 7 and 9 tiers should go to a spiritual successor to the old affordable HEDT era. Basically what we had before AMD broke it by throwing cores at things and not improving the surrounding platform. At least Cinebench scores look nice?
 
I've reviewed a few of these. In past generations, NZXT boards were rebranded, aesthetically changed mid-range ASRock boards. I don't know if that changed with X870.

IIRC, my last was B550, so two generations ago now... but the BIOS was fully functional, just may not have been as 'pretty' as others. No idea if they updated it. I should have one of them for review, but likely won't get to it until late October.

EDIT: I have no idea how they cut it up (I don't see a block diagram in their manual)... but if you didn't like the ASRock and they still borrowed one of their SKUs, you'll more than likely have the same limitations you're concerned about from the first post.
 
Yeah I'll keep an eye on it.

I'd really like an ASUS so I can take advantage of their crazy extensive options for customizing the power ICs. There seem to be a lot of cool options these days with ASUS boards, a lot more than back in the 300 chipset days when I used them last. And the only reason I would want this feature is to push the X3D CPU to its limit at all times. But we'll see once all this hardware is in the hands of reviewers and testers.
 
Oh man, ADD moment. I swapped RAID 1 and 0 in my brain. It's RAID 0 that I'm using :)
There is barely any difference in the access time and low queue random read between RAID 0 and RAID 1. RAID 0, in a typical scenario, has a higher sequential bandwidth, which doesn't really matter in most tasks (meaning any single SSD has it high enough).
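A toy model of why that is, with assumed single-drive numbers rather than measurements of any real SSD:

```python
# Toy model (assumed numbers, not a benchmark of any real drive): per-I/O
# latency is drive-bound, so QD1 random reads look the same on RAID 0 and
# RAID 1; only sequential / deep-queue throughput scales with stripe members.
DRIVE_READ_LATENCY_US = 60   # assumed 4K random-read latency of a single SSD
DRIVE_SEQ_GBPS = 7.0         # assumed sequential read speed of a single SSD

def qd1_random_read_latency_us(raid_level: int) -> float:
    # A single 4K read is served by exactly one drive, whether the array
    # stripes (RAID 0) or mirrors (RAID 1), so latency is one drive's latency.
    return DRIVE_READ_LATENCY_US

def sequential_read_gbps(raid_level: int, drives: int = 2) -> float:
    # A long sequential stream is split across stripe members in RAID 0;
    # a single stream on RAID 1 typically reads at one drive's speed.
    return DRIVE_SEQ_GBPS * (drives if raid_level == 0 else 1)

print("QD1 4K latency  RAID0 / RAID1:",
      qd1_random_read_latency_us(0), "/", qd1_random_read_latency_us(1), "us")
print("Sequential read RAID0 / RAID1:",
      sequential_read_gbps(0), "/", sequential_read_gbps(1), "GB/s")
```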

Yeah I'll keep an eye on it.

I'd really like an ASUS so I can take advantage of their crazy extensive options for customizing the power ICs. There seem to be a lot of cool options these days with ASUS boards, a lot more than back in the 300 chipset days when I used them last. And the only reason I would want this feature is to push the X3D CPU to its limit at all times. But we'll see once all this hardware is in the hands of reviewers and testers.

All brands have many new features. Even though the chipsets are not so different from the last generation, there are other things that, for many users, are not important but, in general, improve the experience. For example, something like easy and toolless M.2 and main graphics card removal seems like nothing special, but in reality, it makes a significant difference if you replace components more often. Anyone who has installed something like a Noctua NH-D15 next to a large graphics card (with no space left for the M.2 SSD) knows what I mean.

I see many people complaining about new motherboards. I'm not saying they have the best design possible, but at the same time, they satisfy most users. Barely anyone uses additional PCIe slots, but there are still 1-2 slots on every ATX motherboard. Barely anyone uses SATA or some other devices, so most people could live with basic ITX mobos that cover 100% of their needs. Most could live with ITX... but there is only one option right now, and considering there are no leaks about the date of the lower chipsets' premiere, those who want a new ITX PC may move to Intel in a month.

There are many X870/X870E options, but again, we can see that AMD is the second choice for manufacturers. Intel's leaked Z890 motherboard list for the next gen CPUs is much longer. Most brands focus on Intel and release more interesting options. ASRock will probably go back to OC Formula (maybe Aqua, but I haven't seen it listed), MSI to Unify/Unify-X, ASUS will keep APEX, and even APEX Encore is already listed. Gigabyte will have its OC series, too. The whole current X870/X870E lineup looks like "gaming" only with a single ASUS "creator" option. Z890 ITX and mATX options are listed from all brands, while AMD has only one ASUS option. Things like CUDIMM will work on Intel only (at least at the beginning).
I like AMD more, but the available options suggest what we can expect and that the market still cares about Intel more.
 
I'm not sure what all boards/board partners are like, but the MSI board I looked at first (X870E Carbon) 'supports' CUDIMMs...
Supports CUDIMM, Clock Driver bypass mode only*

* CUDIMM support and POR boot frequency may vary by CPU series, with manual overclocking available after boot. Certain CPUs may fail to boot, but future BIOS updates will improve compatibility.
... but I guess what's the point if it's in clock driver bypass anyway...
 
I'm not sure what all boards/board partners are like, but the MSI board I looked at first (X870E Carbon) 'supports' CUDIMMs...

... but I guess what's the point if it's in clock driver bypass anyway...

I'm not sure how it will work, as my current knowledge is based on comments from some manufacturers (and more from marketing than from anyone technical). One of the vendors suggested it would work, but because of support issues, it will work the same as modules without the clock controller, so around 8000-8200MT/s with 7000/9000 series CPUs and up to 8600MT/s with 8000 APUs.
Some RAM brands started CUDIMM tests a few weeks ago and don't even have proper results yet. Some motherboard vendors were difficult about providing samples for tests, but eventually sent 1-2 mobos (I assume they test it on 1-2 mobos and say it works on the whole product line). Some brands wanted CUDIMM to be validated on some motherboards without sending test samples. I don't even want to comment on that.
 
I'm not sure how it will work, as my current knowledge is based on comments from some manufacturers (and more from marketing than from anyone technical). One of the vendors suggested it would work, but because of support issues, it will work the same as modules without the clock controller, so around 8000-8200MT/s with 7000/9000 series CPUs and up to 8600MT/s with 8000 APUs.
That's the vibe I got with MSI's statement. If it bypasses the clock gen, then is it really a CUDIMM, lol?
 
That's the vibe I got with MSI's statement. If it bypasses the clock gen, then is it really a CUDIMM, lol?

I hope it's only at the BIOS level and will be added to AMD, too. On the other hand, I don't expect it to improve general performance by more than 1% (as we already see between 6000MT/s and 8000MT/s).
I feel like most new products are "wow!", "meh", and "who cares?" at the same time.
 
haha, no doubt on the Wow/meh/who cares. I think 90% of the excitement is the anticipation, then when you have it.......................lol
 
I hope it's only at the BIOS level and will be added to AMD, too. On the other hand, I don't expect it to improve general performance by more than 1% (as we already see between 6000MT/s and 8000MT/s).
Fortunately, I am not in a hurry to build, but if CUDIMMs can enable affordable 8000+, I'll wait for it. If not running an APU, I guess it is less important on the AMD side, since it'll be choked by internal CPU connectivity anyway, but it could be more interesting on Intel. I can't ignore the potential impact of a >30% increase in bandwidth, as that is finally something actually faster than my better DDR4 systems. (6000-6400 DDR5 is a sidegrade at best.)
 
On new APUs, RAM doesn't matter much either. I doubt that anyone will buy an 8700G to play games, but that's the only place we see some improvement from faster RAM. APUs are way too limited to put them on a regular, more expensive motherboard (fewer PCIe lanes, only PCIe 4.0, lower performance in general, small cache, the list is long).
I got an 8700G as I noticed that prices went significantly down (30% in about half a year in some of my local stores), and I hoped I could set the RAM higher. On my ASUS motherboards, it couldn't even post at more than 8400MT/s. On Gigabyte, it could post at 8600MT/s. The difference in AIDA64 bandwidth was about 2GB/s between 8000 and 8600... and the difference between 6400 and 8000 was also about 2GB/s. The base is over 60GB/s, so it feels like there is some kind of bandwidth wall. I had the same experience with my 7800X3D, but it couldn't run with RAM at more than 8000.

RAM is generally not scaling well on AM5. It may help on Intel, but it still does not scale well above ~7200 on the 13/14th gen CPUs. CUDIMM will appear at 9200+, but if it does not help, then who will spend all that money? I doubt it will be cheap.
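For context, the theoretical dual-channel peaks work out like this (simple arithmetic, not measured numbers):

```python
# Theoretical dual-channel DDR5 peak: MT/s x 8 bytes per 64-bit channel x 2 channels.
def ddr5_dual_channel_gbps(mts: int) -> float:
    return mts * 8 * 2 / 1000

for mts in (6000, 6400, 8000, 8600):
    print(f"DDR5-{mts}: ~{ddr5_dual_channel_gbps(mts):.0f} GB/s theoretical peak")
# The AIDA64 reads quoted above (~60-84 GB/s) sit well below these peaks,
# which points at a fabric/CPU-side limit rather than the DIMMs themselves.
```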
 
it feels like there is some kind of bandwidth wall.
I think I first noticed this in Zen 3. Due to the limited bandwidth of IF between CCD and IOD, it wasn't possible for a single CCD CPU to max out writes, although reads could get close. You'd have to use a two CCD part to get that up. Zen 4 seemed to do something weird which I never got to the bottom of. It is reported as performing higher than expected given Zen 3, and AMD said they didn't change IF bandwidth. Maybe it is something to do with different async clocks but I never looked into it. I probably should since I finally got Zen 4 earlier this year, but it is not a priority.
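For what it's worth, plugging in the commonly cited (unofficial) Zen 3 figures of 32B/cycle read and 16B/cycle write per CCD link at FCLK gives roughly:

```python
# Back-of-envelope Infinity Fabric bandwidth per CCD link, using the commonly
# cited (unofficial) Zen 3 figures of 32 B/cycle read and 16 B/cycle write at FCLK.
def if_link_gbps(fclk_mhz: int, bytes_per_cycle: int) -> float:
    return fclk_mhz * bytes_per_cycle / 1000  # MB/s -> GB/s

FCLK_MHZ = 1800  # typical Zen 3 FCLK when paired with DDR4-3600
print(f"Per-CCD read : ~{if_link_gbps(FCLK_MHZ, 32):.1f} GB/s")        # ~57.6
print(f"Per-CCD write: ~{if_link_gbps(FCLK_MHZ, 16):.1f} GB/s")        # ~28.8
print(f"DDR4-3600 dual-channel peak: ~{3600 * 16 / 1000:.1f} GB/s")    # ~57.6
# One CCD can roughly keep up on reads but tops out around half on writes,
# which is why it takes two CCDs to max out write bandwidth.
```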

I don't know if APUs work much differently there, but I'm assuming logically it works the same way even if it is on the same silicon now.

As for CUDIMMs, I'm hoping they'll make it easy to hit higher speeds without only looking at the extreme highest. For example, if it is easier to make an 8000 CUDIMM than a regular 8000 DIMM, maybe it could be cheaper even with the extra chip required. Think I saw somewhere that CUDIMM will be a JEDEC standard requirement from 6400 upwards, so it will have to be cheap to do in volume.
 
I think I first noticed this in Zen 3. Due to the limited bandwidth of IF between CCD and IOD, it wasn't possible for a single CCD CPU to max out writes, although reads could get close. You'd have to use a two CCD part to get that up. Zen 4 seemed to do something weird which I never got to the bottom of. It is reported as performing higher than expected given Zen 3, and AMD said they didn't change IF bandwidth. Maybe it is something to do with different async clocks but I never looked into it. I probably should since I finally got Zen 4 earlier this year, but it is not a priority.

I don't know if APUs work much differently there, but I'm assuming logically it works the same way even if it is on the same silicon now.

As for CUDIMMs, I'm hoping they'll make it easy to hit higher speeds without only looking at the extreme highest. For example, if it is easier to make an 8000 CUDIMM than a regular 8000 DIMM, maybe it could be cheaper even with the extra chip required. Think I saw somewhere that CUDIMM will be a JEDEC standard requirement from 6400 upwards, so it will have to be cheap to do in volume.

I don't remember how it worked in earlier series, but AM5 motherboards have high-bandwidth and low-latency modes for RAM. I noticed it sometimes works, and sometimes it doesn't. When it works, I see 4-12GB/s higher bandwidth and 5ns lower latency in AIDA64. I was talking about it with EarthDog some days ago, as I couldn't make RAM work at more than ~80/80/80GB/s and 72ns on the X870E mobo and a 7950X CPU. I loaded default BIOS settings a couple of times, changed these modes, and suddenly it started to work at ~84/94/82GB/s and 65ns latency.
Another thing is that different samples of the same CPU, like two different 7950Xs, can have different maximum memory bandwidth. I mean +/- 15GB/s at the same settings. There was a guy who could make 95/115/90GB/s on a 2 CCD 7600X without really maxing out RAM/IF. My 7950X is far from that, with everything maxed out.
Sometimes, it's hard to say why it's acting this way. Something is clearly wrong with these CPUs or motherboards. It's the same for every brand and various BIOS/AGESA versions.

As you said, single CCDs have limited memory read and copy. Memory write can still be bumped up with tweaking. Maybe not as high as on 2 CCD CPUs, but +10-15GB/s is still possible. However, the same as in the case I described above, sometimes the write locks at a lower bandwidth.
 
Going back to the first post, what boards share lanes between M.2 and actual slots? I've not seen one, regardless of chipset, ever do that. Looking at ASRock, the Taichi doesn't. Can't find deets on ASUS boards yet. But I'd imagine any maker will have similar setups. If you want to leverage bifurcation, that's a limitation of the chipset and everyone's implementation as a whole, and you're looking at the wrong product line for your use case, it seems. You only have 28 lanes to begin with, which is to say 12 if you ignore the GPU slot. It is annoying that a lane is a lane is a lane, which is to say that a gen4 lane still counts as a gen5 lane as opposed to a 2:1 ratio. Something that could have easily been addressed with a mux. The encoding is the same, so surely a mux of some description could handle that duty.
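For anyone counting along, here's the generic AM5 lane split as I understand it (a sketch of the typical case; boards reassign the non-GPU lanes differently, so this isn't any specific board's block diagram):

```python
# Rough, generic AM5 (Ryzen 7000/9000) CPU PCIe lane budget. Individual boards
# reassign the non-GPU lanes (extra M.2, USB4 controller, etc.), so treat this
# as an assumed typical split rather than a vendor block diagram.
CPU_LANES_TOTAL = 28
LANE_SPLIT = {
    "x16 graphics slot":          16,
    "CPU-attached M.2 (Gen5)":     4,
    "second CPU M.2 / USB4":       4,
    "link to the first chipset":   4,
}
assert sum(LANE_SPLIT.values()) == CPU_LANES_TOTAL

left_after_gpu = CPU_LANES_TOTAL - LANE_SPLIT["x16 graphics slot"]
print(f"Lanes left after the GPU slot: {left_after_gpu}")                        # 12
print(f"...general purpose once the chipset link is fed: {left_after_gpu - 4}")  # 8
```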

Daisy chained chipsets and bottlenecks notwithstanding, sure, two drives on M2_3 and M2_4 will cause an issue with anything on the first and second chipset, depending on workload, but that is a different issue, and has been for this and previous chipsets. It's effectively a similar criticism I had in a previous thread of mine when I was looking at some X670E boards.
 
Going back to the first post, what boards share lanes between M.2 and actual slots? I've not seen one, regardless of chipset, ever do that. Looking at ASRock, the Taichi doesn't. Can't find deets on ASUS boards yet.
The Taichi doesn't share lanes with the first PCIe slot, but it only supports one PCIe 5.0 SSD. The Nova has info: "If M2_5 is occupied, PCIE3 will be disabled", but everything else is the same as in the Taichi.
ASUS mobos share PCIe x16 and M.2 2/3 lanes. Gigabyte is the same. Both of those brands have the most PCIe 5.0 M.2 slots.
 
Going back to the first post, what boards share lanes between M.2 and actual slots? I've not seen one, regardless of chipset, ever do that. Looking at ASRock, the Taichi doesn't. Can't find deets on ASUS boards yet.
You'll see this on X870 more than X870E, as there are fewer lanes available. The ASRock PG Riptide loses the second PCIe slot for M.2, for example. But it happens on X870E, too.

This is actually pretty common, particularly on last-gen boards and the lower chipsets, where there are fewer lanes to go around.
 