
What free NAS to use with ESXi? [ZFS, FreeNAS, XPEnology]

Finally it's up.
Napp-it is running alongside some of my VMs on an SSD attached to the onboard SATA on my board. I then have an IBM M1015 passed through to the Napp-it VM, and I created a RAID-Z1 pool with 4x 3TB ES2 drives.

Now I'm trying to figure out what block sizes are best for VMs, and what's best for multimedia, etc.

I love that I have thin provisioned options and especially DEDUP!!! Dedup is awesome... I use it at work with NetApp.
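For reference, this is roughly how the pool and datasets look from the command line (pool/dataset names and device ids are just examples from my notes, and the recordsize values are only what's commonly suggested, so don't take them as gospel):

  # RAID-Z1 pool from the four passed-through disks (use the ids your system actually shows)
  zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0

  # separate filesystems so each can get a block size that fits the workload
  zfs create -o recordsize=128K tank/media     # big sequential media files: the 128K default is fine
  zfs create -o recordsize=16K  tank/vmstore   # VM images: a smaller recordsize is usually suggested
  zfs set compression=lz4 tank                 # cheap win if the pool version supports it; dedup is far more RAM-hungry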
 
Sorry for the slight off-topic, but I have a question and don't want to make a new thread... Is ZFS on 2 drives safe as it is, or is it better to make a ZFS mirror in this case? I mean, if one drive fails I want to have a chance to recover that data, but I guess I need more drives for that functionality.
I'm planning to build a 2x2TB NAS on a board without any RAID controller and only 2x SATA ports. I will probably run FreeNAS from a USB stick.
 

It's basically a RAID 1.

ZFS mirror - RAID1
ZFS raidz - RAID5
ZFS raidz2 - RAID6

There are others but those are the most common.
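If it helps, setting up the two-disk mirror is a one-liner at the command line (pool and device names below are just placeholders; FreeNAS does the same thing from the GUI):

  # two-disk mirror, the ZFS equivalent of RAID 1: either disk can die and the data survives
  zpool create tank mirror ada0 ada1   # FreeBSD/FreeNAS-style device names, adjust to yours
  zpool status tank                    # should show the mirror vdev with both disks ONLINE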
 

Thanks for the info. What I meant is that ZFS on 2 drives without a mirror won't protect any data, as I understand it.
2TB is enough but I may need some more in the future.

I was also thinking of putting the M1015 into the NAS, but I still need that controller in another PC, so I'd rather set up a ZFS mirror and leave it at that.
 

The issue I'm wrestling with now is being stuck with RAID-Z and not being able to expand by adding more disks. The only way to expand in the future is for me to get larger drives and swap them in one by one, but I'm terrified of drives bigger than 3TB. It's just a lot of data to have on one disk. If I ever want to go with a 5+ disk setup in the future, I'd need to create an entirely new volume, which means I'd have to have ALL the disks hooked up at the same time at some point. That would be a horrible migration... I hate migrating data like that!!!

Now that I just typed it out, I think it's inevitable at some point anyway...
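From what I've read, the swap-one-at-a-time route would look roughly like this (device names are made up and I haven't actually done it yet, so treat it as a sketch):

  # grow an existing raidz vdev by replacing disks with larger ones, one at a time
  zpool set autoexpand=on tank
  zpool replace tank c2t0d0 c2t4d0   # old disk -> new larger disk, then wait for the resilver
  zpool status tank                  # check the resilver finished, then repeat for the next disk
  # the extra capacity only shows up after the LAST disk has been replaced and resilvered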

Woomack, don't let a mirror setup keep you from doing it. It's actually the RAID type I've favored for the last decade. I've only done RAID 5 twice for myself, and that was with volatile data that I was willing to chance.

Right now, I use a Synology 2-bay enclosure as a Raid-1 setup with 2x consumer-grade 3TB drives. I'm planning on backing up my ZFS setup (4x3TB Raid-Z1, total of 8TB-9TB) to the Synology enclosure. I know it supports less capacity, but I'd only back up data that's irreplaceable (videos, pictures, documents). Movies and such, I'll chance...
 
I just got 2x 2TB WDC Purple drives (rated for 24/7 operation) and I will probably make a mirror. Right now I'm trying to get it all running on an AMD AM1 platform (undervolted, it should draw a max of ~10W under load). I have one slightly damaged ITX case with space for 2x 3.5" HDDs. Small and good enough for my needs. It was also free, and free is always good :) The board has only 2x SATA ports, but I will run the system from a small USB drive.

I hate RAID 5. At work I try to stick to RAID 1 or 10 for higher speed and easier recovery, but most of the servers I'm selling run smaller databases. I see a point to RAID 5 only for backups/rarely-accessed data.
 

Don't jinx me. :) I plan on using Raid-Z1 (Raid5) with ZFS...
 
I'm not saying that RAID 5 is bad. It's just slow and makes the drives work more than they need to (the reason it shouldn't be used on SSDs at all). The good side is the larger usable space, so a lower cost per GB.
At least in servers I don't like RAID 5, as it's harder to rebuild the array if the controller dies and you can't get an identical replacement. ZFS fixes some of the standard RAID 5 issues, as I've learned today, so at least for a NAS it should be fine.

Btw. after a long fight I finally got FreeNAS running. I just couldn't get it to run using the USB-drive ISO, and the latest RC version didn't work with my drive/board config (some weird errors, or no boot at all). The last official stable version boots from the USB stick without issues.
I've configured iSCSI so far and everything seems to work fine.
 
OK, so in reading more and more about ZFS, I'm now even more confused and cannot decide what to do. Perhaps I need to find a better article...

My use case will be to have a HUGE storage for movies, home videos, home pictures, documents, music, my ISO/installation archive, and anything else I may throw at it. I don't expect to see high IO as it's mostly me and my wife that will be writing to it. I have 6x 3TB enterprise-class 7200rpm Seagate Constellation ES2 drives for this, and I'm okay with Raid-z2 (ultimately, I'll get just over 10TB of use).
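My math for the "just over 10TB", in case anyone wants to check it:

  6x 3TB in RAID-Z2  ->  2 drives' worth of parity + 4 drives' worth of data
  4 x 3 TB = 12 TB raw  ~=  10.9 TiB usable, before metadata overhead and free-space headroom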

I then will use SSDs to run my VMs, and I've no idea if I should go with a ZIL or L2ARC on SSDs. I'm trying to get the most performance for the $$ I spend.

The build is an 8-core FX-8350 with 32GB of ECC RAM. 2-3 VMs will run passthrough to become desktops, and the rest of the VMs will be accessed remotely only. The 2-3 VMs that will become desktops will be HTPCs/TVs.

I think 256GB of SSD should be enough to run say 6-8 VMs. I can do Raid-1 if recommended, but thought maybe I could run a single SSD and run a backup solution as well.

  1. The storage solution will be a VM on ESXi of course. Napp-it, FreeNAS... how do I decide which is most appropriate for me?
  2. Should I put the storage VM on a dedicated USB or dedicated SSD and how should I plan on redundancy?
  3. If I should use ZIL or L2arc, what recommendations as far as SSD sizing and configuration?
  4. I'm concerned about having too many layers that may fail. Any single failure would render my entire storage solution useless: the storage VM going down, the ESXi host's hardware going down, or the device the storage VM lives on (SSD or USB) going down.

The more I read, the more questions I have... the less likely this darn project will ever finish.

I think I need a book to read...
 
http://www.napp-it.org/manuals/index.html

Generally a hardware RAID 1 for your datastore is preferred, which is where you'd store the NAS VM. One thing you can do is install two identical drives as datastores and mirror them. So you'd install two drives, build your NAS VM on drive 1 with a 20GB virtual disk, then add a 20GB virtual disk on drive 2 for the same VM, then mirror the two disks in your NAS OS.

I use two small consumer SSDs for ZIL and one 256GB SSD for L2ARC. 32GB of RAM is allocated to the NAS VM (48GB total in the box).

Right now I have 7 VMs on that box; 6 of them run off of a datastore published via NFS to ESX.

I decided against an SSD-only pool because once the VMs are booted there is really so little I/O that I didn't feel it was necessary to add more SSDs for a negligible improvement.

If I had an SSD-only pool, it would be two SSDs attached to my NAS VM, a mirrored pool built in the VM, attached via NFS (and a separate vnetwork) to my ESX box.
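Roughly what that setup looks like from the command line on an OmniOS/napp-it NAS VM (pool, dataset, device names, and the IP are just examples; napp-it's menus and the vSphere client do the same things):

  # log (ZIL) and cache (L2ARC) devices added to an existing pool
  zpool add tank log mirror c3t0d0 c3t1d0   # mirrored SLOG from the two small SSDs
  zpool add tank cache c3t2d0               # single L2ARC device

  # share a dataset over NFS and mount it on the ESXi host as a datastore
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore          # ESXi may also need root access in the share options
  esxcli storage nfs add -H 192.168.1.50 -s /tank/vmstore -v zfs-vmstore   # run this on the ESXi host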
 

Thanks, see dub.

As far as the redundant SSDs, are you saying...
-Install 2x SSDs, say 256GB each, no onboard RAID, and present them to ESXi as two separate datastores.
-Add a 20GB virtual hard drive from each SSD datastore, so that the napp-it VM has two separate virtual hard drives.
-After installing napp-it on one virtual drive, within napp-it I could then mirror it to the other virtual 20GB drive?

This way, I'd then have about 220GB left from each SSD, so 440GB left for use for non-redundant VMs.

Or... I could go with Raid-1 onboard raid 2x256GB SSDs, and have everything on SSDs redundant. Did I understand you correctly?

And my board is maxed at 32GB of RAM, so I'll have to be careful with how much I dedicate to the NAS... right now I think I've only dedicated 4GB of RAM to napp-it.
 

Yes, or as a third option, just attach the SSDs to your NAS VM and do a ZFS mirror in the VM itself. So you'd pass an SSD pool to ESX and a spinning-drive pool to ESX if you wanted.

g0dM@n said:
And my board is maxed at 32GB of RAM, so I'll have to be careful with how much I dedicate to the NAS... right now I think I've only dedicated 4GB of RAM to napp-it.

ZFS will use as much RAM as you can give it. I normally try to give my NAS VM 2/3 of the total RAM in the ESX box. ZFS will cache data from your pool to RAM first, L2ARC second. Obviously RAM is the fastest option, so the more you can give it the better.

http://forums.freenas.org/index.php?threads/formula-for-size-of-l2arc-needed.17947/#post-97362

A few thoughts there on L2ARC sizing
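One back-of-the-envelope thing to keep in mind when sizing it (these numbers are a rough rule of thumb, not from that link, and the per-record overhead varies quite a bit by ZFS version):

  256 GB of L2ARC / 8 KiB average block size  ~=  32 million cached records
  32 million records x ~200 bytes of ARC header each  ~=  6+ GB of RAM spent just indexing the L2ARC

Which is why an oversized L2ARC on a RAM-starved NAS VM can end up hurting more than it helps.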
 
Some of the things I've learned after running an ESXi box for a couple of years:

  • Any VM on an SSD as the datastore(s) directly connected/accessed by the mobo/ESXi OS will make a huge difference compared to running the VM on the ZFS file system itself (whether via NFS or iSCSI), even if you add an SSD as a ZIL drive. An SSD ZIL drive helps a little, but nowhere near as much as the VM on an SSD. It's like going from a laptop HDD to an SSD. The difference is very noticeable in nearly everything, if you plan on running your VMs off the ZFS storage.
  • More RAM for the ZFS VM is always nice, but if all you are serving up is media files (large videos, DVD/BD rips, even MP3/FLAC), then it's not of much benefit (if any at all). I went from 16GB to 3GB on my ZFS file server, and haven't noticed a difference in how fast media files are served, even when multiple VMs/network devices are accessing it. I allocated the extra RAM instead to the other VMs.
  • Hardware RAID 1 for the datastore is nice, but if you want to keep the cost down, you can just copy the VMs over to another datastore or networked drive for backup purposes. Check out ghettoVCB for backing up your VMs (something that is normally not possible in free ESXi without shutting down the VM before copying it); there's a rough usage sketch after this post.
  • The big gotcha with ZFS is when it comes to expanding storage space. Like hardware RAID, you cannot simply add another drive and gain an extra (in your case) 3TB in your RAID-Z2 pool. You either have to destroy the current pool and recreate it with the additional drive, or add the extra 3TB drive as a separate pool, which won't be part of the RAID-Z2 pool, so it won't have the redundancy (think of it as JBOD with pools). So if you really want to go with ZFS, plan ahead, and buy as many drives as you can now, not when you think you'll need them. You have 6x 3TB drives and are thinking of a RAID-Z2 pool; will the 12TB of storage space you will have be enough for your foreseeable needs in the next year or two? You could also replace the 3TB drives with 4TB drives in the future to expand, but you also have to think about rebuild times if a drive goes bad. Larger drives = longer rebuilds = lots of drive thrashing = a chance of another drive or two also dying in the process, which is not good even with a RAID-Z2 setup.
  • Napp-it and ZFS shares: Works great, just like sharing and setting permissions on a Windows share, but it doesn't work with Android.:sly:
    I've looked into it many, many times, and the best reason I've been able to find for why it doesn't work is that the method the Android OS/libraries use to log in/authenticate to a Samba share does not work correctly with OpenIndiana/OmniOS/other similar OSes. Android logging into a Windows share works fine, but try to log in to an OI/OmniOS share, no go. So if you have some Android devices that you plan on directly sharing files with, you'll have to do it via Windows shares, not ZFS shares. Your other option is a DLNA server like Plex, but that's a whole other can of worms to deal with.
  • You want to run a lot of VMs on your box, but keep in mind that ESXi sets aside a fair amount of RAM per VM depending on the CPU/RAM assigned to it, so you may end up not being able to allocate the recommended amount of RAM for 7 VMs. I only run 5 VMs with this setup because of the RAM: 3GB, 1GB, 2GB, 4GB, 2GB. That's only around 12GB of the 16GB of RAM in my ESXi box, yet, according to ESXi, I only have 772MB of RAM left that I can still allocate to a VM.:eh?:
    It should still be doable though, as I was also running 7 VMs with 32GB of RAM. You'll just have to decide which VMs are more important and need more RAM as you set them up. You can always change the values easily afterwards.
  • Best practice with ESXi: Install ESXi on a bootable USB stick. 4-8GB is all you need. This way, if anything happens with the internal drives, it won't affect your ESXi OS. And you can easily mirror the USB stick for backup purposes. ESXi, when installed on a USB stick, will default to an embedded install, which minimizes writes to the device. Basically it'll only write to the USB stick when you make changes to a VM or the hardware, so the USB stick (as long as it's a good one) should last longer than everything else.
  • If you are worried about having "too many layers that may fail", don't be. I run my ESXi with plenty of layers just to minimize downtime when it comes to storage.
    ESXi is on the USB stick, which, if it dies, will have no effect on anything else. ESXi will happily plod along and just give me a warning to replace the USB stick. I don't even need to shut down the box for that.
    The 5 VMs are spread out over 2 SSDs. Each VM is copied (backed up) to a datastore on the ZFS VM, as well as to a 2nd computer. If any SSD dies, I can run the affected VMs directly off the ZFS datastore till I get a replacement SSD.
    And with the ZFS server just serving up files and not VMs, if that should go down it won't affect anything else either.
    Aside from that, physical hardware (besides the SSDs/HDDs) would be the only real concern.

I know this didn't directly answer all your questions, but it's just food for thought from my own time with ESXi at home.
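If it helps, ghettoVCB usage is pretty simple. This is roughly it from memory, so double-check the flag names and config variables against the ghettoVCB docs; the VM name and datastore path below are just examples:

  # run from the ESXi shell: -m backs up one VM by name, -f takes a file listing several, -a does all of them
  ./ghettoVCB.sh -m mynasvm -g ghettoVCB.conf

  # the interesting bits of ghettoVCB.conf
  VM_BACKUP_VOLUME=/vmfs/volumes/backup-datastore/ghettoVCB   # where the backup copies land
  DISK_BACKUP_FORMAT=thin                                     # keep the backup disks thin-provisioned
  VM_BACKUP_ROTATION_COUNT=3                                  # how many old copies to keep around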
 
Mpegger said:
Any VM on an SSD as the datastore(s) directly connected/accessed by the mobo/ESXi OS will make a huge difference compared to running the VM on the ZFS file system itself (whether via NFS or iSCSI), even if you add an SSD as a ZIL drive. An SSD ZIL drive helps a little, but nowhere near as much as the VM on an SSD. It's like going from a laptop HDD to an SSD. The difference is very noticeable in nearly everything, if you plan on running your VMs off the ZFS storage.

This can be true, but it's not always the case. If you're trying to run VMs off of a poorly designed zpool, like a raidz2, then yes, it holds. It's all about design.

mpegger said:
More RAM for the ZFS VM is always nice, but if all you are serving up is media files (large videos, DVD/BD rips, even MP3/FLAC), then it's not of much benefit (if any at all). I went from 16GB to 3GB on my ZFS file server, and haven't noticed a difference in how fast media files are served, even when multiple VMs/network devices are accessing it. I allocated the extra RAM instead to the other VMs.

This is not accurate, but to each his own. Anyone who knows ZFS will tell you the more RAM the better. But if you want things limited to spinning-disk I/O rather than RAM I/O, so be it.

mpegger said:
Hardware RAID 1 for the datastore is nice, but if you want to keep the cost down, you can just copy the VMs over to another datastore or networked drive for backup purposes. Check out ghettoVCB for backing up your VMs (something that is normally not possible in free ESXi without shutting down the VM before copying it).

I've never done this as I only have VMs with redundant disks.

mpegger said:
The big gotcha with ZFS is when it comes to expanding storage space. Like hardware RAID, you cannot simply add another drive and gain an extra (in your case) 3TB in your RAID-Z2 pool. You either have to destroy the current pool and recreate it with the additional drive, or add the extra 3TB drive as a separate pool, which won't be part of the RAID-Z2 pool, so it won't have the redundancy (think of it as JBOD with pools). So if you really want to go with ZFS, plan ahead, and buy as many drives as you can now, not when you think you'll need them. You have 6x 3TB drives and are thinking of a RAID-Z2 pool; will the 12TB of storage space you will have be enough for your foreseeable needs in the next year or two? You could also replace the 3TB drives with 4TB drives in the future to expand, but you also have to think about rebuild times if a drive goes bad. Larger drives = longer rebuilds = lots of drive thrashing = a chance of another drive or two also dying in the process, which is not good even with a RAID-Z2 setup.

Agree completely, but honestly if you're looking to build a sturdy ZFS all-in-one for VMs, you should only be using mirrors anyway, so you can easily add 2 more drives at a time.


mpegger said:
Napp-it and ZFS shares: Works great, just like sharing and setting permissions on a Windows share, but it doesn't work with Android.:sly:
I've looked into it many, many times, and the best reason I've been able to find for why it doesn't work is that the method the Android OS/libraries use to log in/authenticate to a Samba share does not work correctly with OpenIndiana/OmniOS/other similar OSes. Android logging into a Windows share works fine, but try to log in to an OI/OmniOS share, no go. So if you have some Android devices that you plan on directly sharing files with, you'll have to do it via Windows shares, not ZFS shares. Your other option is a DLNA server like Plex, but that's a whole other can of worms to deal with.

No issues here with a Plex VM running next to my NAS VM, it serves everything with ease.

mpegger said:
You want to run a lot of VMs on your box, but keep in mind that ESXi sets aside a fair amount of RAM per VM depending on the CPU/RAM assigned to it, so you may end up not being able to allocate the recommended amount of RAM for 7 VMs. I only run 5 VMs with this setup because of the RAM: 3GB, 1GB, 2GB, 4GB, 2GB. That's only around 12GB of the 16GB of RAM in my ESXi box, yet, according to ESXi, I only have 772MB of RAM left that I can still allocate to a VM.:eh?:
It should still be doable though, as I was also running 7 VMs with 32GB of RAM. You'll just have to decide which VMs are more important and need more RAM as you set them up. You can always change the values easily afterwards.

Any ESX box should have the RAM maxed; it just makes sense. I've not experienced what he is describing, as I have the ESX RAM overhead limited via the GUI.

DO NOT UPGRADE THE HARDWARE VERSION if you are running ESX 5.5. If you do, you can only manage the VM with VMware Workstation (limited access) or vCenter. Yes, you can get vCenter and virtualize it as well, if you want.

mpegger said:
Best practice with ESXi: Install ESXi on a bootable USB stick. 4-8GB is all you need. This way, if anything happens with the internal drives, it won't affect your ESXi OS. And you can easily mirror the USB stick for backup purposes. ESXi, when installed on a USB stick, will default to an embedded install, which minimizes writes to the device. Basically it'll only write to the USB stick when you make changes to a VM or the hardware, so the USB stick (as long as it's a good one) should last longer than everything else.

Yes this is the standard recommendation for a home user.
 
Great posts, guys. I do have my ESXi installed on a USB and just have to figure out how to back it up while it's running.

I feel comfortable with Raid-Z2 since my VMs are going to live on SSDs. I guess I don't see any harm in running onboard RAID 1 with 2x 256GB SSDs and dumping my VMs onto that. With 32GB of RAM, say 12GB dedicated to ZFS, I should be left with plenty for 7 VMs. I think I'm going to build 2-3GB VMs, and then one 6GB VM to be my workhorse for encoding and whatnot. I'll have to toy with it to see how it'll do.

So for starters I'll stick with:
-2x 256GB SSDs onboard Raid-1 as an ESXi datastore (no ZFS layer)
-6x 3TB spinning disks in Raid-Z2
-USB Flash Drive for ESXi 5.5 (which it is right now)

I'll have to figure out how to back up the USB flash drive with ESX and the VMs that will be on the SSDs. I'm not sure how I'll back up from a datastore into ZFS, but I'm sure there's a way... I assume I'll just have to create a separate zpool and present it to the ESX host via NFS.
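For the ESXi config itself, these commands apparently dump and restore the running host config (I haven't tried them yet, so consider this a sketch to verify):

  # on the ESXi host: bundles the running config and prints a URL to download configBundle.tgz
  vim-cmd hostsvc/firmware/backup_config

  # restoring onto a freshly reinstalled stick later (host has to be in maintenance mode first)
  vim-cmd hostsvc/maintenance_mode_enter
  vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz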
 
I'm running FreeNAS booting from a 32GB Patriot Spark flash drive, with 2x 2TB WDC Purple drives, on a 4-core APU with 8GB RAM, without any performance or stability issues. In the last few days I've had some electricity issues at home and I noticed something weird. Each time I restart the NAS, it uses a different IP even though I set a static IP in the configuration. I'm using iSCSI and it works without any issues until I restart the NAS; then I have to manually connect it again after clearing the old connection data. It's of course annoying when I have to restart it more often, like in the last few days. I'm just not sure if this is normal or if I missed something in the configuration.
 
Where is the configuration information stored, and when you go back to check it after a restart is the static configuration still in there... or did it fall back to DHCP? You could try static DHCP from your router...

Also, the main difference with your setup is that you're running FreeNAS bare metal. I'm going to be running the ESXi hypervisor and then building ZFS as a VM, along with several other VMs.


Can you set a DHCP reservation on your router? I normally do that in case I need to blow away an OS and rebuild a VM.


It will have to be hardware RAID for ESX to see it as one drive. Software RAID will still appear as separate drives to ESX. Been there, done that, cursed much.

I have dual L5630s; my Plex VM has 8 vCPUs. It pegs them all when watching a 12GB 1080p movie. Of course, I have Plex set to "make my cpu hurt" for transcoding.

It is horribly easy to overprovision vCPUs and vMem, but you probably already know that. I always start small and add only IF I can prove to myself that I need to.
 
I have 3 PCs and 2 external drives connected, all on static IPs, so I just disabled DHCP in the router. We'll see if it helps.
The weird thing was that in the FreeNAS system config there was a static address, but after each restart it changed to 0.0.0.0 and FreeNAS used a DHCP address instead, always changing one number up or down.
Sometimes obvious things are hard to find ;)
 

IP addressing issues were just one of many reasons I didn't stay with FreeNAS back in the v7 days.
 