
ASUS Z690-E + 4090 help please.


Rothschild · Registered · Joined Sep 30, 2021
Hi all.

Please bear with me.
I have an ASUS ROG Z690-E with an Intel 12900K. Its main M.2 slot (labelled M.2_1) & the main x16 PCIe slot are Gen 5.
However, when that main x16 PCIe Gen 5 slot is occupied (I have a GPU installed there) & the main M.2 Gen 5 slot is occupied at the same time (I have an M.2 drive in it), the main x16 slot is forced to run at x8 Gen 5, which in raw bandwidth is equal to x16 Gen 4. My GPU maxes out at x16 PCIe Gen 4 anyway, so no problem.
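
For reference, the rough math I'm basing that equivalence on (just a sketch using the usual theoretical per-lane rates after encoding overhead, not measured numbers):

```python
# Rough PCIe bandwidth math (theoretical per-lane rates after 128b/130b
# encoding; real-world throughput is a bit lower).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # GB/s per lane, one direction

def link_bandwidth(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(f"Gen 4 x16: {link_bandwidth(4, 16):.1f} GB/s")  # ~31.5 GB/s
print(f"Gen 5 x8 : {link_bandwidth(5, 8):.1f} GB/s")   # ~31.5 GB/s, same raw bandwidth
print(f"Gen 4 x8 : {link_bandwidth(4, 8):.1f} GB/s")   # ~15.8 GB/s
```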

But I'm a bit confused. GPU-Z says I'm running at only x8 PCIe Gen 4 & lets me know I'm not running at full potential speed; I've attached a screenshot.
My BIOS screenshot shows where I set which Gen to run in (the "PCIEX16(G5)" in the pic is the main x16 Gen 5 slot I'm referring to).
I set this to Gen 4 first, but when I saw GPU-Z saying I'm only running at x8 Gen 4, I changed it to Gen 5; GPU-Z still reports the same x8 Gen 4 connection either way.

How do I get it to assign me x8 PCIe Gen 5 / the equivalent of x16 Gen 4?? Or is there a nice way to find out or run a test to see what speed/Gen my GPU is actually running at, other than just GPU-Z??
The MB manual simply says "1 x PCIe 5.0 x16 slot == When M.2_1 is occupied with SSD, PCIEX16(G5) will run x8 mode only".

Appreciate your time!
 

Attachments

  • ASUS GPU-Z.png (GPU-Z screenshot)
  • 11.jpg (BIOS screenshot)
The graphics card is PCIe 4.0, so it won't run in 5.0 mode = GPU-Z shows PCIe 4.0 correctly. There are no PCIe 5.0 cards yet.
As you noticed, Intel motherboards run the slot at PCIe x8 when the M.2 or the second PCIe slot is occupied. Everything is fine, it just looks weird.
I also highly doubt you would see the difference if it was running at x16.
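
If you want a second opinion besides GPU-Z, nvidia-smi (installed with the driver) can report the negotiated link too. A minimal sketch, assuming nvidia-smi is on the PATH and a single GPU:

```python
# Cross-check the negotiated PCIe link without GPU-Z by asking nvidia-smi,
# which ships with the NVIDIA driver.
import subprocess

fields = ("pcie.link.gen.current,pcie.link.gen.max,"
          "pcie.link.width.current,pcie.link.width.max")
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()[0]

gen_cur, gen_max, width_cur, width_max = [v.strip() for v in out.split(",")]
print(f"Link now: Gen{gen_cur} x{width_cur} (card max: Gen{gen_max} x{width_max})")
```

Note the link often drops to a lower generation when the GPU is idle to save power, so check it while something is loading the GPU (that is also why GPU-Z has the small render test next to the bus interface field).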
 
The graphics card is PCIe 4.0, so it won't run in 5.0 mode = GPU-Z shows PCIe 4.0 correctly. There are no PCIe 5.0 cards yet.
As you noticed, Intel motherboards run the slot at PCIe x8 when the M.2 or the second PCIe slot is occupied. Everything is fine, it just looks weird.
I also highly doubt you would see the difference if it was running at x16.
This!

Calling @Evilsizer ... this is for you. I know we talked in a previous thread about what would happen in this situation. I thought I tested it right and shared that it would still show x16, but I tested it wrong (the M.2 wasn't in a 5.0 slot, so the lanes didn't break down) when I gave my answer. Sorry!!
 
the manual is right, when both are populated they will run at x8 each since there are only 16 pcie 5.0 lanes on the cpu. you also didn't say what m.2 drive you are running, so this is purely speculation at this point: if the drive is pcie 5.0 then it could be limited due to the gpu. without having such a setup myself i have no way of testing it. on the flip side, if yours is pcie 4.0 then your bios readings are correct for the pcie gen type being used. if your m.2 drive is in fact 4.0, move it to another slot, or use m.2_2, which is on its own dedicated 4.0 bus, so the gpu gets its full x16 lanes.

this is what i was looking for but it's not in the manual, it's from the asus spec page. this means what you are seeing is normal, at least until the cpu/pch has more gen 5 pcie lanes. here again, if the m.2 drive is a 4.0 drive, move it to the m.2_2 or m.2_3 slots.
*** When ROG Hyper M.2 card is installed on PCIEX16(G5), only Hyper M.2_1 slot can support PCIe 4.0 x4 mode. When ROG Hyper M.2 card is installed on PCIEX16(G3), only Hyper M.2_1 slot can support PCIe 3.0 x4 mode. When ROG Hyper M.2 card is installed on PCIEX16(G4), Hyper M.2_1 and Hyper M.2_2 slots can support PCIe 4.0 x4 mode.

**** When ROG Hyper M.2 card is installed on PCIEX16(G5) or PCIEX16(G3), Hyper M.2_2 slot will be disabled. When ROG Hyper M.2 card is installed on PCIEX16(G4), Hyper M.2_1 and Hyper M.2_2 slots can support PCIe 4.0 x4 mode.


as to the gpu though, we really don't know how much the lane width will affect the new 4000s from NV. it is always worth testing since you have it set up that way, then swap the drive over to another m.2 slot and run the benchmarks on the gpu again. the small bump in cuda cores from a 3080 to a 4080 does not account for the 50-60% increase in texture rate, FP16, FP32, and FP64. the only big jump like that was from the 2080 Ti, where FP32 alone nearly doubled going to a 3080; granted that is not the best comparison, but it still stands. the 3080 has double the cuda cores and the rest of the specs besides FP32 aren't that much higher. i still stand by that we do not know how much the lane width will affect the new 4000s or the 7900 XTXs. maybe someone in the crew might be putting something together for the front page. *hint hint*
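
if you run it both ways, something simple like this is enough to compare the two sets of runs (just a sketch, the numbers in it are placeholders for your own benchmark results):

```python
# rough a/b compare for the x8-vs-x16 test (sketch; scores below are
# placeholders, drop in your own benchmark results).
from statistics import mean

x8_runs  = [100.8, 101.2, 101.5]   # hypothetical results with the drive in M.2_1 (gpu at x8)
x16_runs = [101.7, 102.0, 102.3]   # hypothetical results after moving the drive (gpu at x16)

delta = (mean(x16_runs) - mean(x8_runs)) / mean(x8_runs) * 100
print(f"x8 avg {mean(x8_runs):.1f}, x16 avg {mean(x16_runs):.1f}, delta {delta:+.1f}%")
```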
 
Wait...

...there's a "Gen 5" now??

So far it's mostly marketing, as there are no PCIe 5 devices in stores ... yet. AMD is somehow better with PCIe 5 support and PCIe lane distribution.

the manual is right, when both are populated they will run at x8 each since there are only 16 pcie 5.0 lanes on the cpu. you also didn't say what m.2 drive you are running, so this is purely speculation at this point: if the drive is pcie 5.0 then it could be limited due to the gpu. without having such a setup myself i have no way of testing it. on the flip side, if yours is pcie 4.0 then your bios readings are correct for the pcie gen type being used. if your m.2 drive is in fact 4.0, move it to another slot, or use m.2_2, which is on its own dedicated 4.0 bus, so the gpu gets its full x16 lanes.

this is what i was looking for but it's not in the manual, it's from the asus spec page. this means what you are seeing is normal, at least until the cpu/pch has more gen 5 pcie lanes. here again, if the m.2 drive is a 4.0 drive, move it to the m.2_2 or m.2_3 slots.

It doesn't really matter what device is in the M.2 socket, as the motherboard architecture means the remaining lanes simply won't be used. I mean, no matter whether a PCIe 3.0, 4.0 or 5.0 SSD is installed in the PCIe 5.0 M.2 socket (5.0 drives are not available yet), it will make the first PCIe slot run at x8, and the remaining lanes won't be used since the other slots already have their bandwidth/lanes assigned. At least this is how ASUS/ASRock/MSI motherboards with the latest Intel chipsets work.
In most cases it depends on the motherboard's design, as some Gigabyte motherboards (like the AORUS Master series) run at full speed/x16 in the first PCIe slot even if the M.2 sockets are occupied. However, the second slot is shared with M.2, and if you use the 2nd PCIe slot then some of the M.2 sockets won't work at all (or the second PCIe slot is locked at x4 and then all M.2 sockets remain available).

As you said, the best way to solve this is not to use the first M.2 socket but to move the drive to M.2_2/3/4, which are connected to the PCH.

The ROG Hyper M.2 card is an exception, but it still takes as many lanes as a graphics card; I mean, for full speed it needs an x16 slot (4x 4-lane M.2 sockets). This is why ASUS recommends using it in the first PCIe slot, or in the second slot but without a graphics card (using the IGP or an x4 card instead). It was optimal on X299 motherboards (though back then there was only the PCIe 3.0 version) with multiple x16 slots.
In short, the Hyper M.2 card on motherboards like the B550-XE is a failed idea.
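
The lane math behind that, as a simple sketch (generic bifurcation budgeting, not this specific board's wiring):

```python
# Lane budget for an M.2 carrier card like the Hyper card (a sketch,
# ignoring how a given board wires each slot): every M.2 socket on the
# card wants a PCIe x4 link, so the card scales with the lanes the slot
# really provides via bifurcation.
LANES_PER_M2 = 4

def m2_drives_at_full_speed(slot_lanes: int) -> int:
    """How many M.2 drives on the carrier card get their full x4 link."""
    return slot_lanes // LANES_PER_M2

for slot_lanes in (16, 8, 4):
    print(f"x{slot_lanes} slot -> {m2_drives_at_full_speed(slot_lanes)} drive(s) at x4")
```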

Here is an article on how PCIe bandwidth affects performance on the RTX 4090 ... well, at least in popular games.
 
Hi, thanks for your replies. Please note I'm well aware that no GPUs are Gen 5 yet & that my CPU has a max of x16 Gen 5 lanes, which I should have already mentioned.

Also, just to note, I'm not using the Hyper M.2 card in this scenario. I have no Gen 5 products; the main M.2 I have plugged into the "M.2_1" slot is Gen 4. This can be understandably confusing because the Hyper card's M.2 slots are named the same as the MB's, the only difference being the word "Hyper", for example "Hyper M.2_1" & "Hyper M.2_2".
Just to be clear, for those not familiar with this Hyper M.2 card: it only supports up to Gen 4 on both of its M.2 slots.

this is purely speculation at this point: if the drive is pcie 5.0 then it could be limited due to the gpu. without having such a setup myself i have no way of testing it. on the flip side, if yours is pcie 4.0 then your bios readings are correct for the pcie gen type being used. if your m.2 drive is in fact 4.0, move it to another slot, or use m.2_2, which is on its own dedicated 4.0 bus, so the gpu gets its full x16 lanes.

Sorry for the confusion, but that pic of my BIOS PCIe Gen settings is something for me to set/change, not just a reading. I have tried setting that BIOS "PCIEX16(G5)" Gen option to both Gen 4 & Gen 5, which gave the same result in GPU-Z either way, no difference.
If the MB's M.2_1 slot is occupied with a Gen 5 M.2, or any Gen for that matter, its speed is not limited, as it takes first priority whether there's a GPU plugged in or not. It's only the GPU that gets nerfed/limited to x8. Even if a Gen 3 M.2 were slotted into the MB's M.2_1, the GPU would still be limited to x8.
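
Put another way, my reading of the manual's rule boils down to this little sketch:

```python
# My reading of the manual's rule, as a sketch:
# "When M.2_1 is occupied with SSD, PCIEX16(G5) will run x8 mode only."
def pciex16_g5_width(m2_1_populated: bool) -> int:
    """Lane width the main slot gets out of the CPU's 16 Gen 5 lanes."""
    return 8 if m2_1_populated else 16

print(pciex16_g5_width(True))   # 8  -> my case: the GPU drops to x8
print(pciex16_g5_width(False))  # 16 -> the GPU keeps the full x16
```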

Just to add, I'm already using all three M.2 slots on my MB & can no longer use the Hyper card.

As you noticed, Intel motherboards run the slot at PCIe x8 when the M.2 or the second PCIe slot is occupied. Everything is fine, it just looks weird.
I also highly doubt you would see the difference if it was running at x16.

I just thought/assumed that when the manual says "1 x PCIe 5.0 x16 slot == When M.2_1 is occupied with SSD, PCIEX16(G5) will run x8 mode only", the x8 they're referring to would give my GPU bandwidth equivalent to x8 Gen 5, which is the same amount of bandwidth as x16 Gen 4, & we'd all be happy. But I'm wrong there? So in other words, theoretically, if I DID have a Gen 5 GPU plugged in rather than my Gen 4 GPU, with the "M.2_1" slot occupied, would that be the only way the GPU would see/use x8 Gen 5??


Thanks for your help all.
 
This is why ASUS recommends using it in the first PCIe slot, or in the second slot but without a graphics card (using the IGP or an x4 card instead). It was optimal on X299 motherboards (though back then there was only the PCIe 3.0 version) with multiple x16 slots.
In short, the Hyper M.2 card on motherboards like the B550-XE is a failed idea.

Hi Woomack.

Actually, for the Z690-E Hyper Card, ASUS recommends using the last/bottom x16 slot [PCIEX16(G4)], & that's the only way the Hyper Card works at its full potential, i.e. with both of its connected M.2 drives working at Gen 4 speeds. I had it running this way for a while, until recently I got a 4090, which forced me to lose the middle x16 slot where I had my Thunderbolt card plugged in. Now I've had to use the last/bottom x16 slot for the Thunderbolt card, & there's no more room for the Hyper Card.

Bah, I might have to sacrifice my Thunderbolt for a while until I get a MB with built-in Thunderbolt, which I really didn't want to do, as I have a high-end audio device that uses Thunderbolt as its best connection, with the least latency & max # of channels etc.
 

Damn man I really thought my GPU would just automatically be seeing x8 Gen 5 worth of bandwidth which would be equivalent to x16 Gen 4, what a stupid thing to assume come to think of it :(
Nothing's ever that simple.
 
I did the same thing... and I even 'tested' it..

... the problem with my test is that I didn't put an M.2 module in a 5.0 socket, so I didn't see the lanes break down, lol....
 
I did the same thing... and I even 'tested' it..

... the problem with my test is that I didn't put an M.2 module in a 5.0 socket, so I didn't see the lanes break down, lol....
lol well that makes me feel a bit better now, thanks.
 
But yeah, that's exactly where I went... I was wondering if they physically cut the lanes to reduce the bandwidth, or whether the bandwidth was spread out through MUX chips so it would show 4.0 x16.... lol. Seems like however it's done, you don't see x16 again no matter the PCIe version.
 
But yeah, that's exactly where I went... I was wondering if they physically cut the lanes to reduce the bandwidth, or whether the bandwidth was spread out through MUX chips so it would show 4.0 x16.... lol. Seems like however it's done, you don't see x16 again no matter the PCIe version.
Yea, & I do realize that I won't see/notice much difference, if any, between Gen 4 x8 vs x16 going by the tests/benchmarks people have done, but it's more the principle & just knowing my GPU is connected at its max advertised speed, you know? I feel dirty now.
 
Ahh, so my speculation (I don't think it's even valid to call it a theory at this point) is that a PCIe "lane" represents a physical interconnect between an add-in card (PCIe or M.2) and the system (either CPU or PCH), while the PCIe generation reflects the standard by which data travels over that physical interconnect. Metaphorically, you can run a normal train on a high-speed rail track, but you can't run a high-speed train on a low-speed track.

When you populate the M.2 slot in question, those physical traces are turned on, and 8 of the traces to the GPU are turned off (I've only heard of NVMe drives running x4, so I don't know if that M.2 slot can run x8 or if it's just that way because everything is in multiples of 4). This would be like a switch in the train tracks. The device in question only runs Gen 4; it is unable to use Gen 5 worth of bandwidth because it only has access to the x8 traces, even though it would be capable of using that much bandwidth at x16. The GPU can't communicate using the Gen 5 protocol on the x8 lanes it has available.

Also I'm still drinking my coffee so feel free to throw things at me if this doesn't make sense.
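
In code form, this is roughly how I picture the negotiation (purely a sketch of my mental model, not how the hardware literally does it):

```python
# Sketch of my mental model of link training: both ends settle on the
# highest generation and the widest width that BOTH sides support.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # approx theoretical GB/s per lane

def negotiate(card_gen, card_width, slot_gen, slot_width):
    gen = min(card_gen, slot_gen)
    width = min(card_width, slot_width)
    return gen, width, PER_LANE_GBPS[gen] * width

# Gen 4 x16 GPU in the main slot while M.2_1 takes 8 of the CPU's 16 lanes:
gen, width, gbps = negotiate(card_gen=4, card_width=16, slot_gen=5, slot_width=8)
print(f"Negotiated Gen{gen} x{width}, ~{gbps:.1f} GB/s")  # Gen4 x8, ~15.8 GB/s
```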
 
Ahh, so my speculation (I don't think it's even valid to call it a theory at this point) is that a PCIe "lane" represents a physical interconnect between an add-in card (PCIe or M.2) and the system (either CPU or PCH), while the PCIe generation reflects the standard by which data travels over that physical interconnect. Metaphorically, you can run a normal train on a high-speed rail track, but you can't run a high-speed train on a low-speed track.

When you populate the M.2 slot in question, those physical traces are turned on, and 8 of the traces to the GPU are turned off (I've only heard of NVMe drives running x4, so I don't know if that M.2 slot can run x8 or if it's just that way because everything is in multiples of 4). This would be like a switch in the train tracks. The device in question only runs Gen 4; it is unable to use Gen 5 worth of bandwidth because it only has access to the x8 traces, even though it would be capable of using that much bandwidth at x16. The GPU can't communicate using the Gen 5 protocol on the x8 lanes it has available.

Also I'm still drinking my coffee so feel free to throw things at me if this doesn't make sense.
Initially, my thought was that MUX chips did it and just moved the bandwidth allocation (instead of lanes). The way it works, yes... it seems like the other 8 lanes just aren't made available 'physically'. And because it's a 4.0 x16 card and only x8 is available (regardless of PCIe version), it will run at 4.0 x8 speeds.

I just can't believe how I derped my testing..... :chair: :rofl:
 
i wish i knew what to look for to see if MUX chips are being used to split up the lanes. though something like that doesn't make sense, it would add a slight delay, if you will, in the path. i personally think they have it set up directly in the cpu somehow, so that even if the m.2 only needs, say, 2 lanes, only 8 lanes will be allowed to the gpu.

it would be neat if, down the road, like some server boards, you could use a plug-in module that would allow more m.2 drives to be plugged in. say you didn't need the full bandwidth they have in x4 4.0 or even x4 5.0 setups, the card could provide 2 lanes to each m.2 drive for a consumer raid setup. that would be pretty cool and fast.
 
even if the m.2 only needs, say, 2 lanes, only 8 lanes will be allowed to the gpu.
Correct. That x8 is gone no matter what in order to route the lanes.

There are boards with 4x 5.0 setups. AMD ones... no AIC needed (though some with more have AICs for 5.0). You can also RAID NVMe from the board (most boards).
 
Correct. That x8 is gone no matter what in order to route the lanes.

There are boards with 4x 5.0 setups. AMD ones... no AIC needed (though some with more have AICs for 5.0). You can also RAID NVMe from the board (most boards).
yea, i saw the linus video with the servers supermicro sent him. 50 5.0 lanes, with a custom little slot and an add-on tray you can get for m.2 drives. he thinks they might be planning on something like 1-lane 5.0 for m.2 drives that need less bw but more backup storage. granted there are larger 3.5in drives, but if the aim is a smaller server for backups, m.2 is perfect.

i would like to see something like that set up for consumer pcs: one x16 slot but 2 lanes per m.2 for 8 drives total. i never went looking to see if something like that exists.
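
quick math on that idea (sketch, the 2-lanes-per-drive card is purely hypothetical):

```python
# lane math for the hypothetical 2-lanes-per-drive m.2 carrier card
# (sketch; no such consumer card that i know of, numbers are theoretical).
PER_LANE_GBPS = {4: 1.969, 5: 3.938}  # approx GB/s per lane

slot_lanes, lanes_per_drive, gen = 16, 2, 5
drives = slot_lanes // lanes_per_drive
per_drive_gbps = PER_LANE_GBPS[gen] * lanes_per_drive

print(f"{drives} drives in one x{slot_lanes} slot, ~{per_drive_gbps:.1f} GB/s each at gen {gen}")
# -> 8 drives, ~7.9 GB/s each
```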
 