
What free NAS to use with ESXi? [ZFS, FreeNAS, XPEnology]

Can you set a dhcp reservation on your router? I normally do that in case I need to blow away an OS and rebuild a VM



It will have to be hardware raid for ESX to see it as one drive. Software raid will still appear as separate drives for ESX. Been there, done that, cursed much.

I have dual L5630s; my Plex VM has 8 vCPUs. It pegs them all when watching a 12GB 1080p movie. Of course, I have Plex set to "make my CPU hurt" for transcoding.

It is horribly easy to overprovision vcpu and vmem, but you probably already know that. I always start small and add IF I can prove to myself that I need to.

Hmmmm....
So I can't use the onboard raid-1. I definitely don't want to purchase another raid controller... not that I can't afford it, but I don't want to give up another card as I'm trying to physicalize some VMs and need the slots. Now I see why you said this...
Generally a hardware raid1 for your datastore is preferred, which is where you'd store the NAS VM. One thing you can do is install two identical drives for datastores and mirror your drives. So you'd install two drives, build your NAS VM on drive1 with a 20GB drive, then add a 20GB drive on drive2 for the same VM, then mirror the drives in your NAS OS.

I will have to try that after all.
So you're suggesting I install 2x SSDs, and present them to ESX how? How would I emulate the redundancy exactly? I have ideas in my head, but I'm not sure of the exact picture you're trying to paint for me. Could you walk me through it, Professor? :)
 
In the end, that's what I did. For some reason, when I set static IPs on everything and turned off DHCP, I couldn't log into FreeNAS via the web browser. Anyway, I've reserved addresses for the PCs and NAS. Now all seems fine.
 
I leave DHCP on with my router in case of emergencies. It saved me a lot of time once with my ESX server... I'd messed with the VLAN settings and lost connectivity. Since I had no dedicated GPU, I couldn't access via console without opening up the server and moving things around (annoying to do)... so I plugged in all ethernet ports on the server, and DHCP saved me... got into management and fixed her up.
 
I think the easier option is an SSD ZFS pool. But you can present both drives as datastore1 and datastore2, create VMs on each, and add two drives to each VM, one on datastore1, one on datastore2. Yeah, if a drive fails you may lose your VMX file, but you'll still have all your data.

But put it on a ZFS pool, and you have that redundancy.
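For the mirrored-SSD route, the pool itself is a one-liner inside the NAS VM. A minimal sketch, assuming Solarish-style device names (the names below are placeholders; list your actual ones first):

```shell
# Inside the NAS VM (OmniOS/OpenIndiana): build a two-way mirror from
# the two SSD-backed virtual disks. c2t1d0/c2t2d0 are placeholder device
# names - list yours with the 'format' command before running this.
zpool create ssdmirror mirror c2t1d0 c2t2d0

# Confirm both sides of the mirror show ONLINE
zpool status ssdmirror
```

If either disk fails, the pool keeps running degraded and you replace the bad side with `zpool replace`.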
 
I guess I have to toy with Napp-It to fully understand, then.
I'm going to Home Depot tonight to buy some screws so that I can pop in the 5th and 6th 3TB drives. For some reason, I cannot find my Antec 900 HDD screws (they need to be longer 8-32 screws).
 
I would say the best method would be to attach your SSD drives to your VM, create a mirrored pool, and present that via an NFS share on a separate network to your ESX box.
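Sketching that NFS export on a Solarish-based NAS, where the share is just a ZFS property (pool and dataset names here are examples):

```shell
# In the NAS VM: carve a dataset out of the mirrored SSD pool and
# export it over NFS (Solarish sharenfs syntax; names are examples)
zfs create ssdmirror/vmstore
zfs set sharenfs=on ssdmirror/vmstore

# Optionally restrict the export to the storage network only:
# zfs set sharenfs="rw=@10.2.1.0/24,root=@10.2.1.0/24" ssdmirror/vmstore

# Verify the share property took effect
zfs get sharenfs ssdmirror/vmstore
```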
 
Mpegger said:
Any VM on an SSD as the datastore(s) directly connected/accessed by the mobo/ESXi OS will make a huge difference compared to running the VM on the ZFS file system itself (whether via NFS or iSCSI), even if you add an SSD as a ZIL drive. An SSD ZIL drive helps a little, but nowhere near as much as the VM on an SSD. It's like going from a laptop HDD to an SSD. The difference is very noticeable in nearly everything, if you plan on running your VMs off the ZFS storage.

I went ahead and tested building a zpool with just a single SSD inside a virtualized NAS. Benchmarks are on par with single disk speeds outside of a virtualized NAS. Therefore, your statement about a "huge difference compared to running the VM on the ZFS file system itself" is either out of date or completely inaccurate, when it comes to an SSD as local datastore for ESXi compared to a passed through ZFS datastore via NFS to ESXi.
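For anyone wanting to repeat a rough version of that comparison, a simple sequential-write check with `dd` works as a first pass. `POOL_MNT` is an assumption here; point it at your pool's mount point before running:

```shell
# Rough sequential-write sanity check. POOL_MNT is a placeholder;
# set it to your pool's mount point (e.g. /ssdmirror) before running.
POOL_MNT=${POOL_MNT:-/tmp}

# Write 64 x 1MiB blocks and let dd report throughput
dd if=/dev/zero of="$POOL_MNT/ddtest.bin" bs=1M count=64

# Caveat: with ZFS compression enabled, zeros compress away to almost
# nothing - use real data (or compression=off on a test dataset) for
# honest numbers.
```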
 
Guess I should have been clearer in what I meant, which was running the VM in the ZFS data pool that the person has created for their storage, which as the OP stated would be a RAID-Z2 (just like I run myself, and as you pointed out in an earlier response).
 
Guys, my goal is this:
  • Massive storage on spinning disk, likely Raid-Z2, for pics, videos, movies, docs, files, etc. Right now I have 6x 3TB 7200rpm enterprise-class drives for this
  • SSDs for VMs, but I would like redundancy and have 2x 256GB SSDs to use. If I need to, if it'll help, I could even throw in another SSD.

If it's best to just run the ZFS OS/VM on a flash drive, I'd do that... right now ESXi is on flash. I'd buy 2x 32GB flash drives if it makes things simpler. Then, I could just dump in the 2x SSDs into a zpool as Raid-1 and host it to my other VMs that way.
 
You're going to want your VM on something other than flash, I would say; this is why you see a lot of either hardware RAID1 volumes or two local disks on which you would mirror your NAS VM install drive.
 
I think I'm going to try adding both SSDs as datastores like you mentioned and see if I can then mirror them through napp-it. The problem is my PSU died and I'm waiting on my RMA. The server is down right now. It's a good thing I didn't put that sucker into "production" at home yet. This is also why I'm going to keep my Synology NAS as a backup. That thing is solid, and if I was to lose this ESX host, I'd be in for some real hurtin' with the wife if I didn't have a backup plan.
 
napp-it is for changes inside your NAS VM. So if you want to mirror them in any fashion other than hardware RAID1, it will have to be inside your NAS VM. I would create two NFS networks then: one for traffic to the mirrored SSDs and one to your raidz2 array.
 
Make fun of me if you will, but I'm totally lost. I see it as:
  1. ESX runs off of a flash drive (will have to later figure out a mirror for this too)
  2. Install 6x 3TB drives and 2x SSDs
  3. Add one SSD as a datastore to install napp-it on
  4. Create Raid-Z2 with 6x 3TB drives and set up volumes as I like (not for VMs, for straight data/multimedia use)
  5. Now I want VMs on the SSDs, mirrored, and also want the napp-it mirrored somehow... here's where I'm still lost (you're welcome to make fun)
  6. How would I create these two NFS networks: an E1000 NIC for internet/physical LAN access for the napp-it VM, and then two VMXNet3 NICs for two separate NFS networks?

I may be stupid here b/c I haven't had the chance to get my feet wet. I just ordered a Seasonic modular PSU from NewEgg that should arrive tomorrow, as I'm sick of waiting for the beast to get into action. Physically, everything else is already installed.
 

A month from now it won't seem so complicated. Your path as I see it:
Install ESX to flash drive.
Configure networks as discussed (I will answer this separately)
Clone flash drive to keep one as spare (I have never, ever had a flash drive fail in an all-in-one, fwiw *knock on wood)
Add local SSD drive as datastore
Install OmniOS, FreeNAS, OI, etc. to a new VM stored on the datastore SSD (you could add another SSD and mirror the boot drive for your NAS VM if you want)
Have ESXi use remainder of SSD for host swap.
Install napp-it for web-based management
Set up RAIDz2 (spinning disks) and a disk mirror (SSD)
Attach ESX to your RAIDz2 via NFS (separate network like 10.1.1.x)
Attach ESX to your mirror via NFS (separate network like 10.2.1.x)
Create VMs on your mirrored drives
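The two "Attach ESX via NFS" steps above map to `esxcli` on the host. A sketch with example IPs and share paths matching the 10.1.1.x / 10.2.1.x scheme (your NAS VM's addresses, export paths, and datastore names will differ):

```shell
# On the ESXi host (SSH/local shell): mount both NFS exports as
# datastores. IPs, share paths, and datastore names are examples only.
esxcli storage nfs add --host 10.1.1.10 --share /tank/nfs --volume-name nfs_raidz2
esxcli storage nfs add --host 10.2.1.10 --share /ssdmirror/vmstore --volume-name nfs_ssdmirror

# Verify both datastores are mounted and accessible
esxcli storage nfs list
```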


Ideally you would have:
2x USB flash drives (cloned, one as a backup maybe taped to the server somewhere)
2x hard drives in hardware RAID1 as main datastore
2x SSD in mirror served via NFS to your ESX box for fast VMs

Now you may not really need more than your SSDs for VM storage, at which point you wouldn't attach ESX to your large spinning-disk array. You'd just access that array from networked clients over the network. So maybe you just need the one array with mirrored SSDs for your VMs.

One reason I usually attach to a larger ZFS pool is for a WHS, with its drive limitations and storage requirements. I always have a WHS VM backing up my other Windows-based VMs. Since this would be low writes after the initial backup, it could live on the raidz2 if you want.
 
c dub, from the sound of the setup you recommended, I'll have to use 4x SSDs. 2x SSDs for the NAS VM, and 2x SSDs for my other VMs via NFS.

I'm still confused about how to mirror the NAS VM. If I add the first SSD as a datastore and run a VM off of it, what am I using to mirror it? Someone previously mentioned that if I set up RAID1 via the onboard SATA/RAID, ESX won't see it as one mirrored device.

I was trying to be cheap and run 2x SSDs that'll house the NAS VM and all other VMs. If having 4x SSDs is the only possibility (or much better option), then I'll find some small SSDs, but I'm still confused as to how I'd mirror the NAS VM if I'm installing it to a single SSD first.
 
You'll create your VM on your primary datastore, we'll call it SSD1. You'll add another hard drive to that VM, but instead of storing it in the same place you'll store it on SSD2. Then once your NAS VM is up, napp-it has a mirror boot device feature that will mirror the boot drive to SSD2. If SSD1 were to fail, you'd have to recreate the VM itself, then just add that drive that you mirrored and all of your settings/nas info will be there already.

Think of it as a dynamic mirror, like you would do through Windows.
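Under the hood, napp-it's mirror-boot-drive feature is essentially a `zpool attach` on the root pool. A hedged sketch with placeholder device names:

```shell
# Inside the NAS VM: attach the virtual disk stored on SSD2 to the root
# pool, turning the single boot disk into a mirror. Device names are
# placeholders - check 'zpool status rpool' for the existing disk first.
zpool attach rpool c2t0d0s0 c2t1d0s0

# Watch the resilver; the mirror is done when both disks show ONLINE
zpool status rpool

# Note: on Solarish systems you may also need to install the bootloader
# (e.g. installgrub) on the second disk so it is actually bootable.
```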
 
Perfect!! Makes total sense now. OK, so as I was hoping I understood, napp-it will do mirroring.

Good, so the rest of my VMs can sit on the SSDs and I'll figure out another method to back them up, maybe to the spinning disks. It'll be a manual process to bring things back up, but at least the VMDKs would be good to go.
 
If you have a spare HDD and create your VMs with minimal space requirements, you can just copy the VMs over to the HDD. All my VMs easily fit on a 500GB HDD with plenty of space to spare. Just set up the drive to be used by ESXi as another datastore. You can use Veeam Backup & FastSCP for copying VMs on your ESXi box between drives or even over your network. It's faster than doing it from within ESXi's vSphere client, though it does require you to shut down the VM being copied. There is also ghettoVCB that you can use to run backups, and a frontend for it that will allow you to automate it.
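As a sketch of the ghettoVCB route (the variable names come from the ghettoVCB project; the paths and counts here are examples, not recommendations):

```shell
# ghettoVCB global config sketch (e.g. ghettoVCB.conf)
VM_BACKUP_VOLUME=/vmfs/volumes/nfs_raidz2/vmbackup   # where backups land
DISK_BACKUP_FORMAT=thin            # back up VMDKs as thin-provisioned
VM_BACKUP_ROTATION_COUNT=3         # keep the last 3 backups per VM
POWER_VM_DOWN_BEFORE_BACKUP=0      # snapshot live VMs instead of powering off
ENABLE_COMPRESSION=0

# Then, on the ESXi host:
# ./ghettoVCB.sh -a -g ghettoVCB.conf   # -a = back up all VMs
```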
 
Veeam I've heard of. Instead of having another HDD powered up in there, I may just create a small volume from the raid-z2 just for VM backups.
 
If the SSD with your ZFS OS goes down, how will you access the ZFS volume without a backup drive to run the VM that will allow you to retrieve the backup? ;)

You could go the hot-swap route and just disconnect the backup drive when not needed. Or (though I do not know if you can do this), I think ESXi can use USB devices as storage (not to run off of, but at least for backups). An HDD in a USB enclosure would let you disconnect/power it down when you're not backing up.
 