I'm trying to put all of the instructions for an ESXi-based ZFS All-In-One in one place. I will try to cite my sources as I use them.
My ZFS All In One consists of the following:
Hardware:
Supermicro X8SIL-F (with IPMI)
Xeon X3440 ES
4x 8GB DDR3-1333 ECC Registered (as much as you can afford)
16GB Kingston USB flash drive (ESXi install)
Hardware RAID1 - 2x 250GB SATA drives for the main datastore (software RAID will not be seen by ESXi)
IBM M1015 SAS/SATA controller in IT mode - passed through via VT-d to the VM
(4) 4TB Hitachi 7K4000
(2) 60GB Agility 3
Software:
ESXi 5.5 Hypervisor
OmniOS (latest stable release from their website)
Server 2K8
Ubuntu
VMs:
NAS (OmniOS)
Torrent (all legal stuff, of course)
Plex server serving multiple Roku 3
vCenter
Other VMs as needed
http://www.napp-it.org/napp-it/all-in-one/index_en.html
http://www.napp-it.org/doc/downloads/all-in-one.pdf
http://napp-it.org/doc/ESXi-OmniOS_Installation_HOWTO_en.pdf
The PDFs have most of what you need to know, with a few things I'll add below.
Based on my hardware above, I built the OmniOS NAS VM with the following virtual hardware:
24GB memory
2 virtual CPUs (1 virtual socket, two cores)
30GB hard drive
M1015 passed through to OS
(1) e1000g (for install/setup, will remove later)
(3) vmxnet3s adapters, two to the main vswitch (one mgmt and one lan access), and one to the vswitch for the NFS share to ESX (screenshots to follow)
Floppy drive (the OmniOS installer refuses to run without a floppy drive present, even though it never actually uses it. Who knows why.)
CD/DVD drive which you will use to mount the iso for whatever NAS software that you want to install.
The PDF has the setup instructions for the most part.
If you have a hardware RAID1, I would not worry about mirroring the boot disks. If you don't, I'd suggest two hard drives as separate datastores, with a 30GB virtual disk on each datastore assigned to the VM (which you will then mirror in your NAS software).
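If you do go the two-datastore route, mirroring the root pool afterward from inside OmniOS looks roughly like the sketch below. The c#t#d# device names are placeholders (use "format" to see your real ones), and on older GRUB-booting OmniOS releases you also need installgrub; treat this as a rough outline, not a paste-and-go recipe.
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2   (copy the partition table to the new disk)
zpool attach rpool c1t0d0s0 c1t1d0s0   (attach the second disk as a mirror of the boot disk)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
zpool status rpool   (wait for the resilver to complete)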
ESXi Network Setup
I use two vSwitches in ESXi, one for LAN/mgmt and one for the NFS share.
You will create the second vSwitch (Add Networking, Virtual Machine, etc.), then add a VMkernel port to it, where you can set the IP of your NFS network (e.g. 192.168.7.x).
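If you'd rather do that from the ESXi shell (SSH into the host), the equivalent is roughly the following; vSwitchNFS, the NFS portgroup name, vmk1, and the address are all just example values to adjust for your setup:
esxcli network vswitch standard add -v vSwitchNFS
esxcli network vswitch standard portgroup add -v vSwitchNFS -p NFS
esxcli network ip interface add -i vmk1 -p NFS
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.7.2 -N 255.255.255.0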
OmniOS network setup
Log in as root (no password yet):
ipadm create-if e1000g0
ipadm create-addr -T dhcp e1000g0/dhcp
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
cp /etc/nsswitch.dns /etc/nsswitch.conf
ping 8.8.4.4 (Google DNS)
You should get "8.8.4.4 is alive"
Now ping google.com
You should get "google.com is alive" (which means your DNS and routing are working)
Install napp-it
Install napp-it 0.9:
wget -O - www.napp-it.org/nappit | perl
Reboot after the installation of napp-it!
reboot
Install vmtools
Use the instructions in the ESXi-OmniOS_Installation_HOWTO_en.pdf linked to above
Once that is completed, your vmxnet3 adapters should show up when you do a
dladm show-link
You can either set up the IP addresses manually (a command-line sketch follows the example addresses below), OR you can use the napp-it interface at:
http://<e1000gipaddressviadhcp>:81
of course substituting the actual IP address.
There is no password for napp-it.
Now click on System, network, and you'll have a page where you can set the IP information for your vmxnet3 adapters
Mine are set to:
10.10.10.x - management
10.10.10.x - lan access
192.168.7.x - NFS share to ESXi
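If you chose the manual route instead of napp-it's System > Network page, the ipadm equivalent looks roughly like this; the interface names come from dladm show-link and the addresses are just my examples above:
ipadm create-if vmxnet3s0
ipadm create-addr -T static -a 10.10.10.10/24 vmxnet3s0/v4
ipadm create-if vmxnet3s2
ipadm create-addr -T static -a 192.168.7.10/24 vmxnet3s2/v4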
Now let's enable jumbo frames.
Go back to the command line in your NAS console.
FOR vmxnet3s:
vi /kernel/drv/vmxnet3s.conf
We need to change the entries for LSO and MTU so the driver will allow a 9000-byte MTU.
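In short, the MTU property carries one value per driver instance, so after editing it ends up looking roughly like the line below. Match the number of values already in your copy of the file, and adjust the LSO entry next to it (EnableLSO on my driver version) per the napp-it all-in-one PDF:
MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;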
FOR e1000g:
vi /kernel/drv/e1000g.conf
We need to change the MaxFrameSize entries so the driver will allow jumbo frames.
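MaxFrameSize takes one value per instance, where 0 means standard 1500-byte frames and 3 is the largest (roughly 16K) size class; the comments in the file itself spell out the exact classes. For a 9000 MTU you want 3 on every instance, so the line ends up looking roughly like:
MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;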
Reboot the VM. Now that vmtools is installed, you can do this from the ESXi console via VM, Power, "Restart Guest".
Log back in as root. Now is a good time to change the password:
Run "passwd root" and type your new root password in twice.
do a "ndd -set /dev/vmxnet3s0 accept-jumbo 1" for each vmxnet3 adapter (in my case there are three, so I did this for vmxnet3s0, vmxnet3s1, and vmxnet3s2) THIS STEP IS CRUCIAL. They will show in napp-it as 9000 MTU but until we tell Solaris to accept jumbo frames, JF will absolutely not work.
Once that is configured, you will want to set up your pool of drives that are passed through via the M1015 card. You can do this from napp-it on the Pools tab, which is very simple.
RAIDZ for space, RAIDZ2 for space with extra redundancy.
If you need I/O, only use 2- or 3-drive mirrors in sets so ZFS can stripe your data across the mirrors (fastest). The more spindles, the more I/O.
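For reference, if you'd rather build the pool from the command line, the zpool create for my four 4TB drives would look roughly like one of these; the c#t#d# names are placeholders, and napp-it's Disks page or "format" shows your real ones:
zpool create ZFS1 raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0   (RAIDZ: space)
zpool create ZFS1 mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0   (striped mirrors: I/O)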
Once your pool is created, create a filesystem for the NFS share to ESXi. Go to the ZFS Filesystems tab.
"Create"
Make sure the pool you just created is listed in the "Pool" field (mine is ZFS1)
Enter the name of your ZFS Filesystem (mine is ESXi)
Change SMB Share to off
Change nbmand to off
Submit
Click the ZFS Filesystems tab again and you should be able to see your new filesystem for the NFS share.
Click on "off" under the NFS column next to this filesystem.
It will come up with sharenfs=on.
Click "set property"
You are now ready to connect to ESXi
Enabling jumbo frames in ESXi
Configuration tab, Networking. If you click "Properties" for the NFS vSwitch that you created, you will see that both the "vSwitch" and "VMkernel" entries (under Ports) have the MTU set to 1500. We need to change this to 9000 for both. It will bark at you about there being no physical network adapters, and that's OK. Make both entries show an MTU of 9000.
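If the GUI fights you, the same change from the ESXi shell is roughly this (vSwitch1 and vmk1 stand in for whatever your NFS vSwitch and its VMkernel port are actually called):
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000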
Mount the NFS share in ESXi. Given my pool name, IP address, and filesystem name above:
in ESXi click on "configuration" tab, and "Storage"
Click on "Add Storage"
Choose "Network File System"
Server: 192.168.7.x (this is your NAS IP on the separate private network for NFS)
Folder: /ZFS1/ESXi (this is case sensitive)
Datastore Name: whatever you want. Mine is NASESXi
Next
Your datastore should now show up in the list, and you can start building VMs and storing them on your NAS through ESXi.
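This step can also be scripted from the ESXi shell if you ever rebuild the host; same server, share, and datastore name as above, and note the share path is the ZFS mountpoint, so it starts with a slash:
esxcli storage nfs add -H 192.168.7.x -s /ZFS1/ESXi -v NASESXi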
One box, high performance, many servers!
Other tweaks:
These bump the TCP buffer sizes, which can help NFS throughput:
ipadm set-prop -p max_buf=4194304 tcp
ipadm set-prop -p send_buf=1048576 tcp
ipadm set-prop -p recv_buf=1048576 tcp
Other notes:
Do NOT use the "Upgrade Virtual Hardware" option when you right-click a VM. This will make it impossible to change settings in the vSphere Client; you would have to use VMware Workstation 10 (limited functionality) or vCenter Server (not free).
http://blog.cyberexplorer.me/2013/03/improving-vm-to-vm-network-throughput.html