
What bandwidth should I see with gigabit?


HeXenViRii

Member
Joined: May 11, 2002
Location: Right near the Beach, FL
I'm using a small gigabit network to fire things between me and my server. My main rig uses the onboard gigabit NIC (MSI K8N Neo2), and the server uses a Zonet gigabit card I bought from Newegg (the one with the MIR). I originally had a Cat5e crossover cable I made running between them, and when I ran the two programs I use to test bandwidth (NetCPS, Sandra) I couldn't get more than 14MB/sec between the two. Sandra would give me all kinds of numbers every time I tested; most came in around the 2000k/sec range with a 3% packet loss note. After redoing the cable ends and retesting, I got the same results. Fearing I or the cable was at fault, I bought a Cat6 crossover cable and strung it up between the two; same results. I did notice that if I ran NetCPS over and over it would gain about another 1MB/sec, but that's it.

Is that the max speed of a gigabit connection? The server is an Abit KT7A-RAID with 1.5GB of SDRAM, a Tbred B at 133x12.5, and a WD 120GB drive with 8MB cache. Is it not fast enough to handle any more than that?
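For a cross-check that takes the hard drives out of the picture entirely, a raw-TCP test moves data straight from memory to memory. Below is a minimal sketch in Python, just for illustration; the port number, chunk size, and transfer size are arbitrary placeholders, not anything NetCPS or Sandra actually uses.

    # throughput_test.py -- minimal raw-TCP bandwidth check (sketch).
    # Run "python throughput_test.py server" on one machine and
    # "python throughput_test.py client <server-ip>" on the other.
    # PORT, CHUNK, and TOTAL are arbitrary choices, not magic values.
    import socket
    import sys
    import time

    PORT = 5001                 # any free port
    CHUNK = 64 * 1024           # bytes handed to the socket per call
    TOTAL = 256 * 1024 * 1024   # move 256MB per run

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        received = 0
        start = time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print("%d bytes in %.1fs = %.1f MB/sec" % (received, secs, received / secs / 2 ** 20))

    def client(host):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((host, PORT))
        payload = b"\x00" * CHUNK
        sent = 0
        while sent < TOTAL:
            sock.sendall(payload)
            sent += len(payload)
        sock.close()

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])

Because it never touches the disk, a result that is still stuck in the teens here points at the NICs, drivers, cabling, or bus rather than the hard drives.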
 
14MB/sec is low for just benchmarking.

The only thing I can think of that you haven't tried is checking to make sure that both network cards are set to use jumbo frames (9000-byte frames).
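As a rough illustration of what jumbo frames actually buy, here is the back-of-envelope per-frame overhead at a standard 1500-byte MTU versus a 9000-byte one, assuming plain Ethernet + IPv4 + TCP with no options:

    # Per-frame efficiency for Ethernet + IPv4 + TCP, no options.
    ETH = 18 + 20   # Ethernet header + FCS, plus preamble and inter-frame gap
    IP = 20         # IPv4 header
    TCP = 20        # TCP header

    for mtu in (1500, 9000):
        payload = mtu - IP - TCP
        wire = mtu + ETH
        print("MTU %5d: %4d payload bytes per %4d on the wire = %.1f%% efficient"
              % (mtu, payload, wire, 100.0 * payload / wire))

That works out to roughly 95% efficient at 1500 and 99% at 9000, so jumbo frames are worth a few percent at best; on their own they won't explain a 14MB/sec ceiling.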
 
Smokeys said:
14MB/sec is low for just benchmarking.

The only thing I can think of that you haven't tried is checking to make sure that both network cards are set to use jumbo frames (9000-byte frames).

OK, I checked and they were disabled, so I set them both to 9000, but performance didn't improve. I found the latest drivers and installed them, and speed increased to the high 18s (MB/sec), but that's it.
 
Kind of basic, but it never hurts to mention it... Have you checked whether both NICs are set to autonegotiate? If they are, try locking the speed to gigabit/full. I generally get about a gigabyte a minute (roughly 17MB/sec) on Broadcom server NICs.
 
Shazbot said:
Kind of basic, but it never hurts to mention it... Have you checked whether both NICs are set to autonegotiate? If they are, try locking the speed to gigabit/full. I generally get about a gigabyte a minute (roughly 17MB/sec) on Broadcom server NICs.

Yes, they are both on auto. I set them to gigabit full duplex and network speed dropped to 0.2MB/sec. What about these other controls: 'flow control', 'offload checksum', and 'offload TCP LargeSend'?
 
Erg. How about half duplex? I'm thinking for crossovers at gig speed that might be better. The other settings are good for tweaking, but you've got something more fundamental holding you back.

Does the connection start out fine and then gradually slow down, or does it not hit high speed even at the very start?
 
Have you thought about your wiring? I thought Cat6 was the best for gigabit LANs. From the articles I've read, it would be worth the upgrade.
 
Someone give me a clue. I thought that Ethernet had about 25% overhead. That would put the possible throughput at about 750Mb/s, or 93.75MB/s. First, is that right? If so, you really have something slowing you down. Perhaps turn off NetBIOS.

*edit* ^^^^ I think that if you re-read the thread, he did try Cat6. At any rate, to my knowledge, Cat5 and Cat5e will also do gigabit.
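The arithmetic above does check out; a quick sanity check, taking the 25% overhead figure as given:

    # 1000Mb/s line rate minus an assumed 25% protocol overhead, in MB/s.
    line_rate_mbit = 1000
    usable_mbit = line_rate_mbit * (1 - 0.25)   # 750Mb/s
    usable_mbyte = usable_mbit / 8.0            # 93.75MB/s
    print("%.0fMb/s usable = %.2fMB/s" % (usable_mbit, usable_mbyte))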

 
The overhead for IP varies with the type of traffic (packets have a fixed-length header, so ten packets with 1-byte payloads burn far more bandwidth per payload byte than one packet with a 50-byte payload), but yeah, 25%-35% is a good working number.

At fast speeds the send/receive window sizes also start to have an impact (the receiving machine tells the sending machine to slow down when its buffer fills, and a small buffer can cause TCP to choke). If either machine was ever registry-tweaked for dialup, that could be holding it back now. Assuming drivers and hardware are not the problem, you might try looking at the registry tweaks here.
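To put a number on the window-size point: TCP throughput is capped at roughly the receive window divided by the round-trip time. A sketch of checking and raising the buffer from code follows; the 17KB figure is only a ballpark for the old Windows default TcpWindowSize, the 1ms round trip is an assumption, and the OS may round or clamp whatever you request.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("default receive buffer: %d bytes"
          % sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

    # Ask for a 256KB window; the OS may round or clamp the request.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
    print("after request: %d bytes"
          % sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

    # Cap = window / round-trip time. A ~17KB window on a 1ms RTT:
    window = 17 * 1024   # bytes (ballpark of the old Windows default)
    rtt = 0.001          # seconds (assumed effective round trip)
    print("max throughput ~ %.1f MB/sec" % (window / rtt / 2 ** 20))

If the effective round trip really is around 1ms (plausible with interrupt moderation on older hardware), a ~17KB window caps out around 16-17MB/sec, which is suspiciously close to the numbers in this thread.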

I'll check a couple of servers and see what kind of throughput they get with netcps across a gig switch for comparison. I'll post the results later on.
 
FWIW -

Just tested a few different connections and got between 36MB/sec and 88MB/sec on gig links using Broadcom NICs over Nortel gigabit switches. The disparity is probably a result of a bunch of overnight scripts running between the servers.
 
Unless I am mistaken, your server is hanging a 1000Mb/s card off the KT7A's shared 32-bit/33MHz PCI bus, alongside the onboard RAID. That bus tops out at 133MB/s in theory, and sustained throughput on those old chipsets comes in a good deal lower in practice, so again, unless I am mistaken, a ceiling well below gigabit line rate sounds about right for that setup.
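The bus math, for reference; these are theoretical peaks, and sustained shared-bus throughput in practice usually comes in well under the ceiling:

    # Classic 32-bit / 33MHz PCI peak vs. gigabit Ethernet line rate.
    pci_peak = 33 * 1000 * 1000 * 4       # 33MHz x 4 bytes per transfer
    gige_peak = 1000 * 1000 * 1000 / 8.0  # 1000Mb/s line rate in bytes/s

    print("PCI peak:  %.0f MB/s" % (pci_peak / 1e6))   # ~132 MB/s
    print("GigE peak: %.0f MB/s" % (gige_peak / 1e6))  # 125 MB/s

So the shared bus can't quite carry full gigabit rate with anything else active, and between the RAID controller sharing it, interrupt overhead, and the chipset, real-world numbers far below the 133MB/s peak are common.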
 