
When 2.5G isn't 2.5G?


mackerel

Member
Joined
Mar 7, 2008
I finally upgraded my home network so that the two major nodes are both on 2.5GbE unmanaged switches. Both PCs have 2.5GbE NICs, and the cabling is either Cat5e or Cat6 (I haven't double-checked). Note I ran each test more than once to check, and I'm presenting a single result per case to keep things simple.

Crabaletta - unmanaged switch - unmanaged switch - Garuda

I downloaded iperf3 and ran it as a server on Garuda. Testing is carried out from Crabaletta.
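For anyone following along, the basic setup looks something like this (the hostname and address are placeholders for my machines, not my actual config):

```shell
# On Garuda (server end) - listens on TCP port 5201 by default
iperf3 -s

# On Crabaletta (client end) - 10-second TCP test toward Garuda
iperf3 -c garuda             # uses whichever address family resolves first
iperf3 -4 -c 192.168.1.20    # force ipv4 (placeholder address)
iperf3 -6 -c garuda          # force ipv6
```

The `-4`/`-6` switches are what I mean below by the "ipv4/ipv6 option".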

Before that, a quick observation in case it's related. The 2nd switch was only 1GbE, and its 2.5GbE replacement arrived today, so I took the chance to do before-and-after tests. Seeing as there was an ipv4/ipv6 option, I tried both. What I saw surprised me:

ipv6
[ 5] 0.00-10.00 sec 1.09 GBytes 939 Mbits/sec sender
[ 5] 0.00-10.02 sec 1.09 GBytes 936 Mbits/sec receiver

ipv4
[ 5] 0.00-10.01 sec 1.11 GBytes 953 Mbits/sec sender
[ 5] 0.00-10.03 sec 1.11 GBytes 949 Mbits/sec receiver

ipv4 was about 1.5% faster than ipv6. Maybe that's down to the extra overhead of the larger address space, if nothing else has changed? I'm also assuming this measures payload at the layer above IP, which is different from the raw media rate, which also has to carry packet headers and framing.
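A back-of-envelope check suggests the header sizes alone roughly account for that gap, assuming a standard 1500-byte MTU, plain TCP (20-byte header), and a 20-byte IPv4 vs 40-byte IPv6 header:

```shell
# How much more TCP payload an ipv4 packet carries than an ipv6 one
awk 'BEGIN {
  mtu = 1500
  v4_payload = mtu - 20 - 20   # 1460 bytes per packet over ipv4
  v6_payload = mtu - 40 - 20   # 1440 bytes per packet over ipv6
  printf "ipv4 carries %.1f%% more payload per packet\n", \
         (v4_payload / v6_payload - 1) * 100
}'
```

That works out to about 1.4%, which lines up with the ~1.5% measured.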

Never mind that, on to the new switch. I simply unplugged everything from the old one and plugged it all into the new one. Simples. The whole link between these two systems should now be 2.5GbE all the way through.

ipv6
[ 5] 0.00-10.01 sec 2.73 GBytes 2.34 Gbits/sec sender
[ 5] 0.00-10.02 sec 2.73 GBytes 2.34 Gbits/sec receiver

ipv4
[ 5] 0.00-10.01 sec 2.03 GBytes 1.74 Gbits/sec sender
[ 5] 0.00-10.02 sec 2.03 GBytes 1.74 Gbits/sec receiver

Erm, what now? The ipv4 rates are obviously a lot lower than expected. What other options do I have to tinker with?

ipv4 - reverse (Garuda > Crabaletta)
[ 5] 0.00-10.01 sec 2.76 GBytes 2.37 Gbits/sec sender
[ 5] 0.00-10.01 sec 2.76 GBytes 2.37 Gbits/sec receiver

So it is directional? Could Windows QoS be interfering? I turned it off on Crabaletta and...

ipv4 - QoS disabled on Crabaletta (also no change if I disable on Garuda)
[ 5] 0.00-10.01 sec 2.05 GBytes 1.76 Gbits/sec sender
[ 5] 0.00-10.02 sec 2.05 GBytes 1.76 Gbits/sec receiver

ipv4 - -P 2 (two connections) QoS still disabled on Crabaletta
[ 5] 0.00-10.01 sec 1.02 GBytes 873 Mbits/sec sender
[ 5] 0.00-10.03 sec 1.02 GBytes 870 Mbits/sec receiver
[ 7] 0.00-10.01 sec 1.01 GBytes 870 Mbits/sec sender
[ 7] 0.00-10.03 sec 1.01 GBytes 867 Mbits/sec receiver
[SUM] 0.00-10.01 sec 2.03 GBytes 1.74 Gbits/sec sender
[SUM] 0.00-10.03 sec 2.03 GBytes 1.74 Gbits/sec receiver
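For reproducibility, the variations above map onto these iperf3 flags (hostname again a placeholder):

```shell
# Reverse mode: the server (Garuda) sends, the client (Crabaletta) receives,
# without having to swap which machine runs the server
iperf3 -4 -c garuda -R

# Two parallel TCP streams in the normal direction; iperf3 reports
# each stream plus a [SUM] line
iperf3 -4 -c garuda -P 2
```

The reverse test mattering is what pointed at the problem being directional.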

No change. That's where I'm at. Any thoughts?
 
If you have an Intel I225/I226 NIC, that could be your problem: they dislike some switches/routers and behave this way on some links, or lock at 1Gbps max.
It's just a quick thought, as it could be something else.
Most connections reach about 85% of the theoretical maximum throughput, which is what you should see in tests. 5GbE and 10GbE look worse.
 
Crabaletta has an I225-V. Garuda has some Realtek chip or other; whatever was cheapest at the time on Amazon/eBay. I also tried my Unraid box, also Realtek, with the same results.

The higher rate at >90% seems reasonable, especially if it's payload data as opposed to media-rate data, since there will be overhead from the networking stack. The lower rate works out to about 70%.
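Those percentages are just the iperf3 numbers against the 2.5Gbps line rate:

```shell
# Measured throughput as a fraction of the 2.5Gbps link speed
awk 'BEGIN {
  printf "ipv6: %.1f%% of 2.5Gbps\n", 2.34 / 2.5 * 100
  printf "ipv4: %.1f%% of 2.5Gbps\n", 1.74 / 2.5 * 100
}'
```

So ipv6 sits around 93.6% of line rate (plausible once protocol overhead is subtracted), while ipv4 is stuck near 69.6%, which no amount of header overhead explains.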
 
I'm using the TamoSoft benchmark for LAN tests. It's free if you want to check it out.
I also like Realtek NICs much more than Intel. There are no problems, and drivers ship with every modern OS, whereas for some Intel cards it's hard to find drivers for Server 2022/2025.
Intel NICs won't run at more than 1Gbps with my QNAP switch (a mixed 10GbE/2.5GbE model). They also cause problems with some Netgear models.
Again, I don't know whether Intel NICs are your problem.
 
The "problem" seems to be resolved by updating the network driver. I can now get same bandwidth in both directions via ipv6 and ipv4 between my two desktops. Sending to my NAS from either desktop is still a little slower than expected, but that's Linux and a battle I'll deal with another day. The thing is, the previous driver wasn't that old since I would have used latest when I built this system earlier this year.

If it makes any difference, the Intel NIC goes to a QNAP switch, and the two Realtek systems are currently on a Zyxel switch.
 
I didn't think of updating my driver; I thought Windows Update would have handled it.

I paid for 2.5 for 3 months before I went back to the regular 1Gb package.

Literally nothing changed speed-wise for me except upload, and I have the new hardware from the CC too.

Was using my Realtek port... still am.
 