
ESXi networking: Multipath? 802.3ad? Trunking? Help


Mpegger

Member
Joined
Nov 28, 2001
I'm trying to wrap my head around all the different network setups I could use with my ESXi box and am still confused on a lot of it, mainly because all the information I find is written for an enterprise environment, not a home LAN.

What I'm trying to do is set up my ESXi box to make the best use of the 4-port Intel NIC. I have 2 more ports on the motherboard that will be used as the WAN port for my firewall and the ESXi management port, so that traffic is covered.

I want to use the 4 ports on the Intel card together in a way that lets ESXi and my Dell 5224 managed switch use the NICs and bandwidth in a more balanced manner, so no single NIC port gets saturated and holds up the rest of the network/PCs. I also want to set up a proper virtual network for the NFS and iSCSI targets that will be used by ESXi and various VMs on the same box.

So far, all I understand for the NIC teaming part is that I need to set up a vSwitch in ESXi with the 4 NIC ports bonded to it, using the IP Hash load-balancing setting. On the Dell 5224 managed switch, I have to set the specific ports the 4-port NIC will connect to in Gigabit mode, and here is where I get confused. I have not found anything specific to the Dell 5224, but from what I've seen I need to enable trunking, or 802.3ad, or both, along with possibly some other settings that may or may not be needed... :confused:
Can anyone clear this part up for me?
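For what it's worth, here's how I currently picture the IP Hash policy working, as a toy shell sketch. This is only an illustration of the idea, not VMware's actual algorithm: each source/destination IP pair hashes to one uplink, so different conversations spread across the NICs, but any single conversation still tops out at one Gigabit link.

```shell
#!/bin/sh
# Toy illustration: pick an uplink by XOR-ing the last octets of the
# source and destination IPs, modulo the number of uplinks.
# (Not VMware's exact hash; just shows why one flow sticks to one NIC.)
uplinks=4

pick_uplink() {
    src_last=$1   # last octet of source IP
    dst_last=$2   # last octet of destination IP
    echo $(( (src_last ^ dst_last) % uplinks ))
}

# 192.168.1.10 -> 192.168.1.20 always hashes to the same uplink...
pick_uplink 10 20   # prints 2
# ...while a different destination can land on a different one.
pick_uplink 10 21   # prints 3
```

So the switch side has to agree on treating those 4 ports as one logical link, which is where 802.3ad/LACP comes in.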

NFS/iSCSI is another matter. I don't know if a single dedicated vNIC in a VM (ESXi included) is enough for NFS and iSCSI duties, or if multiple vNICs along with multiple vSwitches would be better. I've read that when traffic stays within the ESXi box, it moves at a much higher rate because it never touches the physical network. I don't know the specifics, but ESXi essentially keeps VM-to-VM traffic on the same vSwitch in host memory rather than sending it out a physical NIC. That should mean a VM hosting an NFS/iSCSI target for another VM on the same machine shouldn't need more than one vNIC for NFS/iSCSI. But so far I haven't found a concrete answer on this, since the articles I've read deal with the NFS/iSCSI target and the VM using it being on separate physical machines.
Anyone know the answer to this? :confused:

Depending on the answer above, I'll have even more questions about multipathing/MPIO with NFS/iSCSI, as it seems to be another issue that is dealt with in an enterprise environment in a couple of different manners.
 
To set up 802.3ad on the 5224, go to System -> Link Aggregation -> Trunk Settings. From there, check the LACP box on each port that will be bonded, hit Apply, and you're done.

[Attached screenshot: powerconnect1.png]
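On the ESXi side, the matching IP Hash teaming can be set from the CLI as well as the vSphere client (on reasonably recent ESXi builds that ship esxcli). A sketch, assuming the vSwitch is named vSwitch0 and the four Intel ports show up as vmnic2-vmnic5; your names will likely differ, so check first:

```shell
# Confirm the actual NIC and vSwitch names before running anything:
#   esxcli network nic list
#   esxcli network vswitch standard list

# Add the four Intel ports as uplinks on the vSwitch
# (vSwitch0 and vmnic2-vmnic5 are assumptions).
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch0

# Route based on IP hash, with all four uplinks active.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=iphash \
    --active-uplinks=vmnic2,vmnic3,vmnic4,vmnic5
```

Both ends have to match: IP hash on the vSwitch, and the corresponding ports aggregated on the 5224 as above.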

Regarding your question about network traffic within the box: most hypervisors are smart enough not to send VM-to-VM traffic out to the physical switch, which has obvious speed benefits. I'd venture to guess that if everything is running off the same box, there would be no need for more than one virtual switch. However, I don't know ESXi at all, so I can't give you a definitive answer.
 
Configured correctly, and as long as you're local (not on VNC, RDP, etc.) to the box, you SHOULD not have local traffic going outside that box. The only time I have ever seen it happen was via VNC and/or a misconfiguration.

Are you planning to have a VM running pfSense or similar? If so, you might want to look at assigning it 2 NICs (1x WAN, 1x LAN, with neither being part of the team; pfSense typically won't put LAN traffic outside the LAN), 1x NIC for management (you could even use a cheap single-port 10/100 NIC for WAN or management, as long as it's supported by your firewall OS and ESXi), and then the other 3-4 teamed together doing the heavy lifting.

I'm playing catch up, but I am assuming you are MOSTLY planning on LAN traffic being the largest user of bandwidth? If so, I will be very surprised, and REALLY curious, if you make it even close to 50% utilization of a 4-NIC team...
 
Configured correctly, and as long as you're local (not on VNC, RDP, etc.) to the box, you SHOULD not have local traffic going outside that box. The only time I have ever seen it happen was via VNC and/or a misconfiguration.
So it should stay within the ESXi host. Then I guess there's no need for multiple vNICs for the NFS/iSCSI traffic between the VMs, since it should get all the internal bandwidth ESXi is capable of (I think it was somewhere around 20Gb of link bandwidth) with only the one dedicated vNIC, correct?
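One way I could check that figure instead of guessing is to run a throughput test between two VMs on the same vSwitch, then repeat it from a physical box. A sketch using iperf (the IP below is a placeholder for whatever the VM actually gets):

```shell
# On the "server" VM (e.g. the NFS/iSCSI target VM):
iperf -s

# On a "client" VM on the same vSwitch (10.0.0.5 is a placeholder):
iperf -c 10.0.0.5 -t 30

# Then run the same client command from a physical machine on the LAN
# to compare in-memory vSwitch traffic against the Gigabit links.
```

If the VM-to-VM number comes out well above Gigabit, that would confirm one vNIC is plenty for the intra-host NFS/iSCSI traffic.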

Are you planning to have a VM running pfSense or similar? If so, you might want to look at assigning it 2 NICs (1x WAN, 1x LAN, with neither being part of the team; pfSense typically won't put LAN traffic outside the LAN), 1x NIC for management (you could even use a cheap single-port 10/100 NIC for WAN or management, as long as it's supported by your firewall OS and ESXi), and then the other 3-4 teamed together doing the heavy lifting.
I had Astaro configured in this manner, but this time around I had planned to assign it only 1 NIC for the WAN and a LAN vNIC on the same vSwitch as all the other VMs. I don't think security should be an issue here, since, as you said, LAN traffic shouldn't even go through the firewall, and the redundant connections to the network from the NIC teaming would be a plus. It also means there's no need to dedicate 2 NIC ports to a single VM. Any reason not to do it this way?

I'm playing catch up, but I am assuming you are MOSTLY planning on LAN traffic being the largest user of bandwidth? If so, I will be very surprised, and REALLY curious, if you make it even close to 50% utilization of a 4-NIC team...
I won't get close to that. It's mainly my own systems that will consume most of the bandwidth, mostly for backups and redundancy between machines, so at most I'd saturate a single Gigabit link. But without some kind of load balancing, that would affect the rest of the computers on the network, since one of the VMs acts as the file/media/HDTV server for the home.
 