
Bifurcation for AMD CPUs sucks with X870E


Dolk

I've already looked at the ASRock and ASUS top-line X870E motherboards, and their PCIe distribution is horrendous. The majority of the good M.2s share lanes with the main GPU, tragic. With two X870E chipset chips on the motherboard, most vendors choose to add a higher density of USB rather than get their M.2s allocated correctly. The worst is when they put the majority of the M.2s on the second X870E chip. Oh yes, please add another PCIe hop for my data, we all really like that. Y'all only had to hang ALL the M.2s at Gen 4 off the first X870E chip and put the rest of the USB on the second one. But nah, let's just keep making simple, cheap products that copy each other and boast about server-grade PCIe quality.

/rant
 
I agree to an extent...

However....
The majority of the good M.2s share lanes with the main GPU, tragic.
Why is this tragic? From a performance standpoint, it's a negligible loss from 5.0 x16 to 5.0 x8 or 4.0 x16.
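
For reference, rough theoretical numbers behind that claim (a quick sketch; real-world throughput is a bit lower once protocol overhead is counted):

```python
# Approximate one-direction PCIe bandwidth, ignoring packet/protocol overhead.
# PCIe 4.0 runs 16 GT/s per lane, PCIe 5.0 runs 32 GT/s, both with 128b/130b encoding.
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes * (128 / 130) / 8  # Gb/s -> GB/s

for label, gt, lanes in [("5.0 x16", 32, 16), ("5.0 x8", 32, 8), ("4.0 x16", 16, 16)]:
    print(f"PCIe {label}: ~{pcie_gb_per_s(gt, lanes):.1f} GB/s per direction")
# ~63.0, ~31.5 and ~31.5 GB/s respectively -- and current GPUs rarely come close
# to saturating even the x8 figure, hence the negligible real-world difference.
```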

The worst is when they put the majority of the M.2s on the second X870E chip. Oh yes, please add another PCIe hop for my data, we all really like that.
How does that affect us in the real world?
 
Why does this have to affect anyone but me? :) I'm complaining for my own wants.

I want all the bandwidth for my GPU, and I want the lowest latency to my stored data.
 
Why does this have to affect anyone but me? :) I'm complaining for my own wants.
That's cool... but say that/qualify the rant/statement. You're not wrong in what you're saying, it just doesn't affect/bother the overwhelming majority and they should be aware of that, too. :)

So, what does putting the M.2 on the '2nd' chipset actually do (limit performance of Gen4 drives?)? I'd guess latency increases, but what does that mean for you? The rest of us?

EDIT: Here... we're past the embargo...(for everyone's reference)


[Attached: X870E and X870 chipset block diagrams]
 
Just a latency increase. The first X870E chip has to bridge the PCIe traffic, which means a Host-to-Target <-> Target-to-Host scenario. Depending on what is being read, and whether a miss occurs, this can lead to some unwanted latency hits, especially as we move into data-streaming games more than we have.
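
If anyone wants to put a number on that hop for their own board, a minimal sketch along these lines works (paths are placeholders; the OS page cache will hide most of the difference unless you read from a file much larger than RAM, so treat the results as relative, not absolute):

```python
import os, random, time

def avg_read_latency_us(path: str, reads: int = 2000, block: int = 4096) -> float:
    """Average latency of small random reads, in microseconds.
    Caveat: the page cache absorbs repeat hits, so point this at a file
    much larger than RAM if you want honest numbers."""
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:
        t0 = time.perf_counter()
        for _ in range(reads):
            # pick a random offset, aligned down to the block size (power of two)
            f.seek(random.randrange(0, max(1, size - block)) & ~(block - 1))
            f.read(block)
    return (time.perf_counter() - t0) / reads * 1e6

# Hypothetical paths: one test file on the CPU-attached M.2, one on a chipset M.2.
for name, path in [("CPU-attached M.2", "D:/testfile.bin"),
                   ("chipset-attached M.2", "E:/testfile.bin")]:
    print(f"{name}: ~{avg_read_latency_us(path):.1f} us per 4 KiB read")
```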

My ideal setup is to have the HOST drive on the CPU's M.2 while 2-3 M.2s in RAID sit on the chipset. The HOST's calls to the bridge stay consistent and only have to fight with the USB peripherals to push/pull data.

This forum is all about getting the most out of your PC. People should consider how they set up their PCIe along with their memory timings :)
 
This forum is all about getting the most out of your PC. People should consider how they set up their PCIe along with their memory timings :)
Of course! But we can do nothing about that (hence your rant), specifically (right?). The best we can do is put the fastest drives in the CPU-connected socket(s) and any slower ones in the chipset-connected sockets. 5.0 devices on the chipset will be limited to 4.0 speeds anyway since the uplink is only 4.0 x4 'DMI'.
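
Putting numbers on that uplink (same back-of-envelope math as above, ignoring overhead):

```python
# One-direction PCIe bandwidth per lane in GB/s, ignoring protocol overhead.
lane_gb = {"4.0": 16 * (128 / 130) / 8, "5.0": 32 * (128 / 130) / 8}

gen5_x4_drive  = 4 * lane_gb["5.0"]  # what a Gen5 x4 SSD could theoretically push
chipset_uplink = 4 * lane_gb["4.0"]  # the X870E chipset's 4.0 x4 link to the CPU
print(f"Gen5 x4 drive ceiling:  ~{gen5_x4_drive:.1f} GB/s")
print(f"Chipset uplink (4.0x4): ~{chipset_uplink:.1f} GB/s, shared with USB/SATA/LAN")
# Anything hanging off the chipset tops out at roughly half of a Gen5 drive's
# potential, before the other chipset devices even take their cut.
```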

I like what you're thinking though... put the M.2 storage 'closer' to the host to minimize latency. But OTOH, what about users with fast USB storage (me!) who want the most out of it?
 
USB storage!? Why!

Use Thunderbolt if you are going to do that :p

5.0 M.2 will be overkill of course, but 5.0 GPU I think will be needed. I think UE5 will still be the pusher for these requirements. Games are moving into a realm where they have to do accelerated compute calculations within the image frame time. Data that can be streamed from storage will be, which frees up RAM for those computations. We are finally entering an era of distributed compute, which means low-latency data access is required. The more you can optimize your storage config, the more performance you can gain in realistic workloads.
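
To put rough, illustrative numbers on the "stream within the frame time" idea (the drive figures are ballpark sequential rates, not measurements):

```python
# How much data could in theory be pulled from storage inside one frame?
# Purely illustrative: real engines stream asynchronously over many frames,
# and per-request latency matters as much as raw throughput.
def mb_per_frame(drive_gb_per_s: float, fps: int) -> float:
    return drive_gb_per_s / fps * 1000  # GB per frame -> MB per frame

for drive, rate in [("SATA SSD", 0.55), ("PCIe 4.0 NVMe", 7.0), ("PCIe 5.0 NVMe", 14.0)]:
    print(f"{drive}: ~{mb_per_frame(rate, 60):.0f} MB per frame at 60 fps "
          f"({1000 / 60:.1f} ms budget)")
```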
 
USB storage!? Why!
It's a tiny external enclosure with a PCIe 4.0 x4 M.2 drive in it (20 Gbps Type-C connectivity)... I can take it with me to work anywhere and have all my data and everything needed to complete reviews.

5.0 M.2 will be overkill of course, but 5.0 GPU I think will be needed.
People said that about PCIe 3.0 and 4.0 as well. So far, no card has come out that shows more than a negligible difference between x16 and x8 on the same protocol. Maybe the 5090 changes that... but I'm not holding my breath or harboring any concerns over it.

I agree. I have a Hyper M.2 card, and it would be nice to use more than 2 drives on it.
How would that work, Freeagent? We'd have to 'regress' back to the days of the chip that adds PCIe lanes (to have 2x 4.0 x16 slots) which, IIRC, added latency anyway (perhaps you don't care about latency, lol).


I think it really comes down to not being able to please everyone, I suppose. :)



But yeah, 2x PCIe 5.0 M.2s on the CPU, the rest on the chipset for X870E... (see images above with the chipset diagrams).

X870 though, now THAT is the bastard child of X870E. Last gen it took the "B" chipsets to drop to one Prom21 chip (chipset) and lose lanes; now X870 has one chipset... THAT is more of a tragedy, IMO.
 
How would that work, Freeagent? We'd have to 'regress' back to the days of the chip that adds PCIe lanes (to have 2x 4.0 x16 slots) which, IIRC, added latency anyway (perhaps you don't care about latency, lol).
Not sure, I am not an engineer. I care about latency, but maybe not wrapped up in it like some others..
 
Not sure, I am not an engineer. I care about latency, but maybe not wrapped up in it like some others..
I mean, we could deep dive into it... but, as I understand it, your case is a bit different from what Dolk is describing(?). You seem to need/want a SECOND full electrical x16 slot so you can run more drives (4x 4.0 x4 total) off an expansion card...

... or maybe you're saying you want those 5.0 x8 lanes to break down to 4x 4.0 x4. Correct me if I'm wrong (Dolk), but that adds complexity and cost to already expensive offerings.

I see what you're saying. :)
 
The way PCIe lanes are used was the main reason why I picked the GB B650E Master. It was quite unique in the last generation, and now many other motherboards distribute PCIe lanes in a similar way. I understand that my needs are different from those of other users, but 90%+ of other users will have a single graphics card and 1, max 2 SSDs, in which case many motherboards don't have to share PCIe lanes between 1-2 M.2 sockets and the main PCIe slot. Sadly, it's typically only one PCIe 5.0 and one PCIe 4.0 socket that avoid sharing bandwidth, but that's perfect for most who use 2 M.2 SSDs, as barely anyone spends money on two highly overpriced PCIe 5.0 SSDs.

RAID on M.2 is pointless unless you want to make a RAID 1/10. RAID 10 will always use CPU+chipset lanes (unless you put a 4x SSD card in the first PCIe x16 slot), and with RAID 1 it doesn't really matter what it uses: writes are cached and reads are fast enough. Even if you measure the latency, you can't notice it in a home/office environment.
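
To illustrate the queue-depth side of that with a toy model (the 80 µs figure is an assumption, and the model ignores the drive's own internal parallelism, so take it as a shape, not a measurement):

```python
# Toy model: reads split round-robin across mirrored drives vs. one drive.
SINGLE_READ_US = 80.0  # assumed 4 KiB random-read latency of one NVMe SSD

def finish_time_us(outstanding_reads: int, drives: int) -> float:
    """If reads are spread evenly, the busiest drive sets the finish time."""
    per_drive = -(-outstanding_reads // drives)  # ceiling division
    return per_drive * SINGLE_READ_US

for qd in (1, 4, 32):
    print(f"QD{qd:>2}: single drive {finish_time_us(qd, 1):6.0f} us, "
          f"RAID 1 pair {finish_time_us(qd, 2):6.0f} us")
# At QD1 (roughly what game loading looks like) the mirror buys nothing:
# each individual read still waits on one drive, and the RAID layer adds overhead.
```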

I'm not saying that the new motherboards are perfect, as they are far from that ... but how many users will it affect? I wonder more why ITX motherboards still have M.2 PCIe 5.0+4.0 and not 5.0+5.0 when there are enough PCIe lanes for even 3x 5.0 x4. There are many stupid designs ... ehm, external audio on ASUS mobos.
I'm also a bit lost on the "native" USB4. It's so native that there's a soldered ASMedia chip on the PCB.
 
Yeah, mainstream mobos are all about pleasing a very wide range of people. I'll look into the mATX and ITX boards and see what they are doing. Could be a better fit.

Another thing I have noticed is that it seems everyone is adding a PCIe switch on the main GPU PCIe port. Crazy to think about. Those things can add significant delay during high-bandwidth activity. I've seen some of these labeled as retimers, which help performance but still add a latency hit regardless of what's on the PCIe branch. It just adds unnecessary cost from trying to fit too many customer needs.
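
For a sense of scale on the latency side (the per-hop figures below are ballpark assumptions, not datasheet numbers for any particular part):

```python
# Rough, assumed per-hop latencies vs. a typical NVMe random read.
SWITCH_HOP_NS = 150    # ballpark store-and-forward latency of a PCIe packet switch
RETIMER_NS    = 30     # retimers typically add far less than a switch per hop
NVME_READ_US  = 80.0   # assumed 4 KiB random read on a decent Gen4 SSD

for name, added_ns in [("PCIe switch hop", SWITCH_HOP_NS), ("retimer", RETIMER_NS)]:
    pct = added_ns / (NVME_READ_US * 1000) * 100
    print(f"{name}: +{added_ns} ns, about {pct:.2f}% of one {NVME_READ_US:.0f} us read")
# Small per storage request; the bigger pain is the part cost, as noted below.
```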

Adding more complexity to the PCIe layout costs more money due to switches/retimers. Those parts aren't cheap (think $5s to $10s to $20s depending on whether you buy 100k/500k/1mil units).
 
It would be nice if there was a big pool that you could dip into, where lanes that are not being used can be used any way that you would like.. but things do not work like that..
 
The way PCIe lanes are used was the main reason why I picked the GB B650E Master. It was quite unique in the last generation, and now many other motherboards distribute PCIe lanes in a similar way. I understand that my needs are different from those of other users, but 90%+ of other users will have a single graphics card and 1, max 2 SSDs, in which case many motherboards don't have to share PCIe lanes between 1-2 M.2 sockets and the main PCIe slot. Sadly, it's typically only one PCIe 5.0 and one PCIe 4.0 socket that avoid sharing bandwidth, but that's perfect for most who use 2 M.2 SSDs, as barely anyone spends money on two highly overpriced PCIe 5.0 SSDs.

RAID on M.2 is pointless unless you want to make a RAID 1/10. RAID 10 will always use CPU+chipset lanes (unless you put a 4x SSD card in the first PCIe x16 slot), and with RAID 1 it doesn't really matter what it uses: writes are cached and reads are fast enough. Even if you measure the latency, you can't notice it in a home/office environment.

I'm not saying that the new motherboards are perfect, as they are far from that ... but how many users will it affect? I wonder more why ITX motherboards still have M.2 PCIe 5.0+4.0 and not 5.0+5.0 when there are enough PCIe lanes for even 3x 5.0 x4. There are many stupid designs ... ehm, external audio on ASUS mobos.
I'm also a bit lost on the "native" USB4. It's so native that there's a soldered ASMedia chip on the PCB.
This is why I went with the X570 Aorus Master too. It put the M.2s in a position where they siphon lanes from SATA rather than from the PCIe slots. Much better trade-off.

As for RAIDing M.2s, I've found it's been super powerful for gaming. I do RAID 1 with MS Windows (which I know isn't amazing), and the solution works super well.
 
As for RAIDing M.2s, I've found it's been super powerful for gaming. I do RAID 1 with MS Windows (which I know isn't amazing), and the solution works super well.
Can you quantify what 'super powerful for gaming' means for RAID 1 M.2 storage? Last I've seen, there are plenty of tests showing storage barely makes a difference (loading and FPS); even from a SATA SSD to a 5.0 NVMe it's negligible. Is this about DirectStorage or streaming games, specifically?

(thanks for the info... genuinely asking questions here :) )
 
Yeah, mainstream mobos are all about pleasing a very wide range of people. I'll look into the mATX and ITX boards and see what they are doing. Could be a better fit.

Another thing I have noticed is that it seems everyone is adding a PCIe switch on the main GPU PCIe port. Crazy to think about. Those things can add significant delay during high-bandwidth activity. I've seen some of these labeled as retimers, which help performance but still add a latency hit regardless of what's on the PCIe branch. It just adds unnecessary cost from trying to fit too many customer needs.

Good luck finding any mATX mobo. If I'm right, there is only one ITX model, and everything else is ATX. ASRock, MSI, and Gigabyte have everything in ATX so far. ASUS has the Strix X870-I Gaming, which comes with external audio, and I simply hate that. If you want a small mobo for a small PC, then you don't want anything external taking up space.

I see no difference in performance between B650, X670E, and X870E ... +/- 1% in tests. I have one X870E mobo in tests right now (the review should be ready soon).

As for RAIDing M.2s, I've found it's been super powerful for gaming. I do RAID 1 with MS Windows (which I know isn't amazing), and the solution works super well.

Games depend mainly on access time and low-queue-depth random read. In RAID, the access time is worse than on a single SSD, and random read is either the same or worse than on a single SSD. I don't know how you noticed any improvement in games because of RAID.

On AMD, it's better to use Windows dynamic volumes as you can move the array to any other PC, and after reactivating it, it will work. However, if the AMD RAID driver crashes, then good luck with your data. I had too many problems with AMD RAID in the past. It also performs the same as the Windows solution. The only downside is that you can't boot from dynamic volumes. On the other hand, it's better to have a single SSD for booting. Single SSDs for OS booting are also more often used on servers.
 
@EarthDog
It's been more of a personal experience. I haven't recorded enough data to quantify the actual performance gain. I know that in my limited testing early on, I always saw a nice increase in speed during gaming and benching. My bet is that benching tools probably show a higher degree of gains than games do. I agree that it's probably a very small gain.

But gains are gains, and if I can load into HLL 1 second sooner than the rest of the people, then I'm already better than them (/s).

@Woomack
I thought random read AND sequential read improved under RAID 1. I chose M.2s that had the highest random read performance (ADATA XPG? something...). I'll have to do a better job of gathering data while tuning this next build.

Also, just looked at the ITX from ASUS. It's a good start, but could use another M.2.

It would be nice if there was a big pool that you could dip into, where lanes that are not being used can be used any way that you would like.. but things do not work like that..
Hopefully that's the next thing we get in the future ;) Real-time configurable parts are what I'd like to see next. It's always been something that could happen, but it's typically too advanced for the majority of customers and very expensive to implement. It could be required in some markets, though, to please a wider audience. FPGAs or dedicated ASICs would be used for this.
 
@Woomack
I thought random read AND sequential read improved under RAID 1. I chose M.2s that had the highest random read performance (ADATA XPG? something...). I'll have to do a better job of gathering data while tuning this next build.

Also, just looked at the ITX from ASUS. It's a good start, but could use another M.2.

RAID 1 = the same or worse performance than a single SSD. Some SSDs perform 2x better in sequential read, but not write; I noticed that on some Kingston SSDs like the DC500M.
The main problem with RAID is that it uses a non-optimized driver. These drivers are quite old and add latency.
The best option for gaming right now is a single drive with DirectStorage support (even though it's not official, it works on most new series).

I have a Strix B650E-I Gaming, and for me, it feels like a better mobo than the X870-I Gaming. The B650E lacks USB4, but at least everything else is integrated, and it uses the same M.2 type in both sockets. Most ITX mobos support 2x M.2 SSDs, but in all cases it's M.2 PCIe 5.0+4.0 or 4.0+4.0. Older-gen ITX mobos had up to 3x M.2 PCIe 4.0.
I have two Minisforum mobos, one with a soldered R9 7945HX and one with an i9-13900HX. The AMD mobo supports 2x M.2 5.0, the Intel one 4x M.2 4.0. The only problem with the AMD option is that the manufacturer doesn't care to add a RAID option in the BIOS, and generally their support is pretty bad; there were 2 BIOS releases and I don't think there will be more. Either way, if such a small brand can manage 2x M.2 PCIe 5.0 or 4x M.2 PCIe 4.0, then I can't see why the large brands don't even try. ASRock had the X299E-ITX with 3x M.2, but that was ages ago... it was a nice mobo in general, with support for up to an 18-core Xeon and 4 memory slots.
 