
What free NAS to use with ESXi? [ZFS, FreeNAS, XPEnology]


g0dM@n

Inactive Moderator
Joined
Sep 27, 2003
What do you guys use?

I've got an ESXi 5.5 server built and successfully doing GPU and USB passthrough. I'd like to pass through the onboard SATA controller (though I haven't tested it, I'm pretty sure it will work) to a VM and dedicate that VM as a NAS.

Here are the specs so far:
Antec 900 with plenty of airflow and plenty of 3.5" HDD space
AMD FX-8350 w/aftermarket cooler
32GB DDR3-1600 ECC UDIMMs
256GB SSD running VMs
800W OCZ EliteXStream PSU
GPUs: 2x 6450 + 1x 5450 Radeons; using HDMI over ethernet
USB Cards: 2x USB 3.0 + 1x USB 2.0 (although over ethernet is 1.1)
Additional NIC: 2x Gigabit (3 total with onboard NIC)

I have a Synology DS212j with a 3TB Raid-1 setup using Seagate Constellation drives. I also have two more of those 3TB drives in my current server/HTPC. I have a total of 4 of them I could theoretically dump into this ESXi box, but honestly I do love the Synology and am worried about touching it.
 
I have used both Openfiler and FreeNAS inside VMware, with physical drives mapped to the virtual machines running those offerings.

I followed this guide in order to get that going.
 

OmniOS would be my suggestion. You can download a prebuilt VM from the napp-it webpage (which you should be using if you run any Solaris-based OS). Or you can install it yourself, knowing that it HAS to have a floppy device in the virtual machine. I usually nuked that when I built VMs and then wondered why OmniOS wouldn't install.

You'll want two NICs for the VM, I'd say e1000 for LAN and then vmxnet3 for 10G NFS to your ESXi box. This would allow you to move VMs to your SAN VM.

http://www.napp-it.org/downloads/index_en.html
 
Thanks, guys. I will reference this thread for when I find the time to toy with this.

The one thing I didn't think about when beginning this setup is that I'm going to be stuck with USB 1.1 on the ethernet lines. That's fine for KB and Mouse, but to plug in a flash drive I have to come up with another plan... The USB 2.0 over ethernet is way too expensive.
 
I've used both FreeNAS (back in 2008-ish) and Openfiler until last year. Currently I'm using a Linux container running Debian, inspired by the Turnkey File Server. Openfiler is very robust and powerful, but updates are few and far between these days. I ran Openfiler on the physical machine for almost a year. When I got my virtualization set up, I set up Openfiler in a VM, passing the physical disks to that guest. Now the host maintains the mdadm array, and I just pass the LVM partition to the container instead. I've been running XFS since the jump and haven't gotten into ZFS yet.
 
I'm going to try napp-it. I've already started the download of it.

What do you think of my using the onboard SATA and passing it through to the OmniOS VM? I'd buy an HBA, but then I'd have to give up another card in the box... if it's definitely worth having an HBA, I'll do it. Someone had recommended the IBM M1015 to me, which can be had for around $100 on eBay.

Another concern of mine is what do I do if the OmniOS VM bites the dust or if the board bites the dust, or if the HBA bites the dust? I'm planning on going with a Raid-5 setup. I have 4x 3TB ES2 drives, so I'm not going for Raid-6. I can back up the very important data from the Raid-5 to a different medium and take my chances on the rest of the data is what I'm thinking...
 

If you need a controller, the M1015 with IT-mode firmware is the best for the money. As for the onboard SATA, you CAN do that if the motherboard allows it, as long as you understand that every drive attached to that onboard SATA controller will then be available for VM use only.

g0dM@n said:
Another concern of mine is what do I do if the OmniOS VM bites the dust or if the board bites the dust, or if the HBA bites the dust? I'm planning on going with a Raid-5 setup. I have 4x 3TB ES2 drives, so I'm not going for Raid-6. I can back up the very important data from the Raid-5 to a different medium and take my chances on the rest of the data is what I'm thinking...

It's kind of like mdadm raid in that instance, you just attach the disks to another system and you're good. If you have a VM with a M1015 passed through, you could remove that M1015, attach to local motherboard, install omnios on that setup, and import your array. Very easy.

If I/O is really important, you'd want two vdev mirrors of two disks apiece. raidz (Raid-5) should be fine for what you're doing, though.
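For a rough sense of that trade-off, here's a back-of-the-envelope sketch in plain Python. It only counts raw capacity for the 4x 3TB drives discussed in this thread; ZFS metadata overhead and TB-vs-TiB conversion are deliberately ignored.

```python
# Rough comparison of 4x 3TB drives laid out as a single raidz1 vdev
# versus two striped 2-disk mirror vdevs. Raw capacity only; ZFS
# metadata overhead and TB-vs-TiB conversion are ignored.

DRIVES = 4
SIZE_TB = 3

# raidz1 (Raid-5-like): one drive's worth of space goes to parity;
# the pool survives any single drive failure.
raidz1_usable = (DRIVES - 1) * SIZE_TB

# Striped mirrors (Raid-10-like): half the raw space, but random I/O
# is spread across two vdevs, so it scales roughly 2x better.
# Survives one failure per mirror pair.
mirrors_usable = (DRIVES // 2) * SIZE_TB

print(f"raidz1:  {raidz1_usable} TB usable")
print(f"mirrors: {mirrors_usable} TB usable")
```

So with these four drives it's 9TB usable vs. 6TB usable, which is why raidz is the sensible default here unless I/O really is the priority.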
 
Awesome post.
So I see only one problem with onboard SATA then. If I use onboard SATA for the Raid-5 volume on OmniOS, I won't be able to also have an SSD on an onboard SATA for ESXi to offer as a datastore to my VMs.

I'm currently running the VMs on SSD, and they are FAST!!

I have a dual NIC taking up a slot. I installed it so that I can have the onboard NIC and two more NICs for added bandwidth. I assume it's overkill and just the onboard NIC should suffice? What are your thoughts on that? If I don't need the dual NIC, well there you have it... a free slot for an HBA.
 
Didn't see the motherboard in the original post, but unless your onboard NIC is an Intel I wouldn't bother. Also I'm not a user of VMware at home, but I heard they are picky about NICs and most onboard ones (re: Realtek) don't work 100%.
 
g0dM@n said:
If I use onboard SATA for the Raid-5 volume on OmniOS, I won't be able to also have an SSD on an onboard SATA for ESXi to offer as a datastore to my VMs.

Correct. I think there was one particular AMD board that allowed one onboard port to keep working for ESXi while the rest were passed through, but it was specific to that one motherboard.

g0dM@n said:
I have a dual NIC taking up a slot. I installed it so that I can have the onboard NIC and two more NICs for added bandwidth. I assume it's overkill and just the onboard NIC should suffice? What are your thoughts on that? If I don't need the dual NIC, well there you have it... a free slot for an HBA.

Didn't see the motherboard in the original post, but unless your onboard NIC is an Intel I wouldn't bother. Also I'm not a user of VMware at home, but I heard they are picky about NICs and most onboard ones (re: Realtek) don't work 100%.

The thing about onboard NICs is kind of an old wives' tale; it hasn't been my experience. A few NICs went away in the latest version of ESXi 5.5, but it's fairly easy to add them back in.

You probably won't need more than a single NIC, and if you had to add another card and you have a PCI slot, Intel's dual-port PRO/1000 MT isn't a bad option.
 
Didn't see the motherboard in the original post, but unless your onboard NIC is an Intel I wouldn't bother. Also I'm not a user of VMware at home, but I heard they are picky about NICs and most onboard ones (re: Realtek) don't work 100%.
My onboard NIC works. I later added the dual NIC, thinking it might be useful to do NIC teaming and use a virtual switch for added bandwidth.
Correct. I think there was one particular AMD board that allowed one onboard port to work for esxi and the rest to be passed through, but it was specific to that one motherboard.

The thing about onboard NICs is kind of an old wives' tale; it hasn't been my experience. A few NICs went away in the latest version of ESXi 5.5, but it's fairly easy to add them back in.

You probably won't need more than a single NIC, and if you had to add another card and you have a PCI slot, Intel's dual-port PRO/1000 MT isn't a bad option.

Well, hopefully I can do that with the onboard SATA... that would rock! I don't care about needing to buy additional cards... it's just that I have a limited number of slots, and would like to keep everything on a single server. I don't want two powerhouse servers running 24/7, chewing up my expensive electricity! My energy bill is outrageous already in NY.

You're right... the more I think about it, a single NIC should suffice. It's just that I'd like to present this Raid-5 as a volume to my physical desktops as well. I may just settle with only 2 physicalized VMs from this box, rather than 3 (I use two slots per physicalized VM, one for GPU and one for USB).

This thread has been awesome. Thanks guys for everything so far!
 
Downloaded, extracted, imported, added to inventory, powered up, and accounts configured. Next is to migrate all of my data on my 4x Seagate ES2 3TB drives so that I can pop them into this server.

I'm replacing the current applications for these drives and rebuilding the RAIDs they're in with some Toshiba 5900rpm 3TB drives. I'm doing a swap per day, so hopefully this weekend I can get moving on this new project. SO EXCITED!!! Let's hope I can salvage onboard SATA and keep one SATA for the SSD. Let's hope that passing through the SATA doesn't kill my entire datastore, though...
 

It won't, you'll just find that your datastore disappears after configuring onboard for passthrough. Disable passthrough, reboot, your datastore will be fine.

I'm actually using an SSD mirror through my NAS, and I'm considering purchasing a RAID controller to use the SSD array as a raw datastore rather than through an NFS share from my NAS.
 

Oh right... I'm an idiot. I forgot I had ESXi on a flash drive haha.

If you use a raid controller, are you expecting much better performance? Have you benchmarked your current setup?
 

I see about 220MB/s writes to my SSD drives, which I'd like to see higher; I think the fact that I'm running them through an NFS share, as opposed to natively in ESXi, is my problem. But knowing me, I can't just get a cheap RAID controller. I want a good one with lots of cache, a BBU, etc... so I'm trying to find a good deal on a controller that'll run Raid-1 (soon to be Raid-10) for my 256GB SSD drives.
 

How high do you expect write speed to be in Raid-1, though? I thought Raid-1 doesn't benefit much in writes, but it does in reads, since there are two drives to read from.
 

If my SSDs are rated at 500, I don't like when they run at 225. Maybe it wouldn't change anything, maybe it would. It would be interesting to see their raided performance outside of passthrough to see if it really improved.
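Since the question above is rated vs. observed speed, here's a naive throughput model as a sketch. The hypothetical `raid1_throughput` helper assumes the 500MB/s per-SSD rating quoted in this exchange and ignores controller, filesystem, and NFS overhead entirely, so real numbers will land well below these ceilings.

```python
# Naive Raid-1 / Raid-10 throughput model. Assumes each SSD sustains
# its full 500 MB/s rating; controller and protocol overhead are
# ignored, so these are best-case ceilings, not predictions.

SSD_MBPS = 500

def raid1_throughput(n_pairs=1):
    # Writes hit both disks in a mirror pair, so a pair writes no
    # faster than one disk; reads can be served from either disk.
    # Striping across pairs (Raid-10) scales both numbers linearly.
    return {"write": n_pairs * SSD_MBPS, "read": n_pairs * 2 * SSD_MBPS}

print("Raid-1  (2 SSDs):", raid1_throughput(1))
print("Raid-10 (4 SSDs):", raid1_throughput(2))
```

If this model holds, a 2-disk mirror tops out near one drive's rating on writes anyway, so much of the gap between the observed 220MB/s and the ~500MB/s ceiling would be the NFS path rather than the mirror itself.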
 

I see. Report back, please, if you don't mind. You have my attention ;)
 
I've got my 4x 3TB ES2 drives installed, napp-it VMX added to inventory, passed through my onboard AHCI/SATA controller, but OmniOS doesn't see them. It's kicking back errors ever since I added the AHCI and SATA passthrough. I'm wondering if there's a need to inject drivers, so I'm trying to figure that out:
mDNSResponder ERROR: getOptRdata - unknown opt 4
 