
media and file server build: FreeNAS, RAID, ZFS?


gsrcrxsi

Member · Joined Feb 21, 2008 · Baltimore, MD
OK. So I'm going to put together a new Plex server that will double as a NAS. I have some questions for those more familiar with FreeNAS and other storage solutions.

First here is my proposed plan.

I picked up a server off ebay:
Supermicro 3U chassis: CSE-835TQ-R920B
-8x 3.5" hot swap bays
-2x 5.25" bays
Supermicro X9DRi-LN4F+
-onboard 4x Intel Gigabit LAN
48GB (12x4GB) DDR3-REG
2x Intel Xeon E5-2650L (will be swapped for 2x E5-2680v2)
1x500GB WD 7200RPM drive (meh, will be swapped for an SSD for OS)
2x 1280W (really 1000w @110v) redundant/hot-swap PSU

After a few days of research, I think I've decided on the storage configuration.

8x2TB drives to start
FreeNAS (ZFS)
pool = 4x mirror vdevs (which, if I'm understanding correctly, will essentially be RAID 10, yes?)
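
For reference, here is roughly what that layout looks like at the ZFS command line (the pool name "tank" and the da0..da7 device names are just placeholders; FreeNAS normally builds this through its volume manager GUI rather than the shell):

  # four two-way mirror vdevs, striped together by the pool (roughly RAID 10)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
  zpool status tank    # shows each mirror as its own vdev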

Advantages I see:
-Speed. This will essentially be 4x mirror vdevs (RAID 1), striped together (RAID 0).
-Redundancy. I can have up to 4 disk failures IF AND ONLY IF they fail in separate vdevs.
-Expandability. I could theoretically add more space by adding more mirror vdevs to the pool with more pairs of disks, OR (more likely for me, since I'll be populating all 8 of my drive bays) replace disks one by one to expand the individual vdevs.
-Fast rebuilds. Recovering from a drive failure only requires a straight copy from the surviving disk in that single vdev, reducing the workload on all other drives during the rebuild.
-ZFS bonus: rebuilds (resilvers) only copy actual written data, not the whole drive bit for bit like other RAID solutions.

Disadvantages:
-Not great space efficiency (50% usable space)
-2 disk failures in a single vdev (taking out the whole vdev) kills the whole pool. (I'm using RE4 drives, however, so I'm doing my best to mitigate that.)

Thoughts? Am I correct on all this?

Anyone think I'm doing something stupid here and want to recommend a different approach?
 
What is the goal for this storage? Do you want to prioritise read or write performance? Redundancy? Space efficiency? Gigabit ethernet will limit transfer rates anyway unless you're considering going 10G. I'm not familiar with ZFS raid performance tradeoffs. Would RAIDZ be of interest?

I personally would look at higher capacity drives from the start. For example, I'm using unRAID myself, currently with 6x 3TB active drives in a 4 data + 2 parity arrangement for 12TB of capacity, surviving 2 simultaneous drive failures without data loss. It isn't the high-performance choice though, and writes particularly suck in this arrangement due to the way it calculates parity; that specific issue doesn't apply to ZFS, which may have different considerations.
 
The goal is fast storage that can serve multiple clients (mainly as a Plex server, too). I want at least one drive of fault tolerance. I think my plan gives me all that.

After doing some reading, I think the RAIDZ options have the same performance penalty as regular RAID 5/6, with rebuilds not being quite as slow simply because ZFS isn't re-writing ALL bits, only the data that's actually there.

I'd love for someone familiar with ZFS to pop in.
 
Quite the beast of a server! Very nice!

Not sure how fixed you are on OS, etc., but have you thought about WHS 2011 and StableBit DrivePool? Atm I have 8x 2TB drives connected with full redundancy. Tested with 5 end clients (on the internal 10Gbit LAN) all streaming 1080p movies with no issues whatsoever. Just mentioning that to give you an idea of server performance even without RAID. With StableBit, I can lose up to 4 drives without data loss. Adding/removing drives from the pool is as simple as shutting down, plugging (or unplugging, as the case may be), powering up, then just adding the new drive to the pool (in the case of removing a drive, the server will automatically start the full re-duplication process).

Just a different angle for potential consideration :)
 
Trying to stick with free options.

I have a copy of Windows 10.

But Windows doesn't support ZFS, and I like what I'm reading so far about the benefits of ZFS.

Is WHS free? Free+ZFS is why I’m leaning towards FreeNAS right now.

Streaming Blu-rays isn't a major issue. They use at most 50Mbps at full quality; you can stream something like 20 Blu-rays on a single 1GbE link. Most of my collection is encoded down to 10-20Mbps anyway. I'm really thinking about multiple file transfers from multiple clients. I'd like to have a 4x 1Gbps "pipe" between the server and the network to handle multiple simultaneous file transfers.
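
For what it's worth, on FreeBSD/FreeNAS that kind of multi-gigabit "pipe" is usually built as a lagg interface running LACP. A minimal sketch, assuming the four onboard ports show up as igb0-igb3 and an example address (the switch has to support LACP, and any single client connection still tops out at 1Gbps because traffic is hashed per flow):

  ifconfig lagg0 create
  ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3
  ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0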

I could use this server as a surveillance server as well recording multiple streams from IP cameras. So I want the bandwidth and speed.
 
Yeah... that's the downside. If you look around, WHS can be had for $0 - $35ish, and DrivePool is $30. That said... I see that DrivePool will actually work with Win10, so if you have a spare key, that could be another option.
 
I personally would look at higher capacity drives from the start.
I'm in agreement with this. I would check the Backblaze drive failure statistics to see which drives look particularly good. (NB: NO 2 TB SEAGATE DRIVES!!!) Some of the larger drives have performed better in the long run IIRC and the latest high capacity (like 8TB) drives are starting out really good, though they have less run time on them. The 2TB drives were fairly early in the development of the really large drives and I think the manufacturers have learned a lot that they are applying to newer drives. I would also look up the sustained transfer rates for the various drives. I think the larger drives provide higher rates and once the cache is exhausted, that's going to limit data transfer speeds.

Personally, I'm a fan of Linux and have been fooling around a bit with ZFS lately. I don't run FreeNAS, but were I to put together purely a storage box, it would be high on my list of options. I have decades of experience with Linux so I stuck with that. I haven't put anything mission critical on ZFS, but what I have done looks good. Just last week I had a 2TB drive fail in a 6-drive RAIDZ2 pool. This was intentional: I had a 2TB Seagate drive with over 3K remapped sectors (:shock:) and pushed it by loading some backups and then performing a scrub. ZFS handled this with no drama and I was easily able to shut down the machine, replace the failed drive, and add the replacement back to the pool.
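
For anyone curious, the failure handling described there boils down to a couple of commands (pool and device names below are examples; FreeNAS exposes the same steps in its GUI):

  zpool scrub tank              # read and verify every block, flagging bad devices
  zpool status -v tank          # shows the faulted disk and any unrecoverable files
  zpool replace tank da3 da8    # resilver the data onto the new disk (da8)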

One thing I like about ZFS is that it pulls together a lot of things that are part of a bunch of other tools, like file system, RAID, logical volume management and snapshot/backups. If the experimentation goes well I may migrate my primary file server from typical Linux RAID (md) and file system (EXT4) to ZFS.
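
As a small example of the snapshot side of that (the dataset names are made up):

  zfs snapshot tank/documents@2018-05-01      # near-instant point-in-time copy
  zfs list -t snapshot                        # list existing snapshots
  zfs rollback tank/documents@2018-05-01      # revert the dataset to that snapshot
  zfs send tank/documents@2018-05-01 | zfs receive backup/documents   # replicate to another pool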

Of course if you're using FreeNAS, a lot of the details are probably hidden behind some management tools. I think that's the benefit that FreeNAS provides over ordinary BSD.
 
The 2TB drives are a cost thing.

8x 2TB = $260 for WD 5400RPM RE drives (used, from cw here on the forums). I have 4 drives from him in my Synology NAS now and they work great.

Even 4x 8TB drives would run me about $1000 or more. 4x the cost for 2x the space, and 1/2 the performance (a 2-wide stripe of mirrors instead of 4-wide). That's a tough pill to swallow when my storage needs don't currently come anywhere near even 8TB.
 
OK, if you've already got money invested in 2TB drives then that changes the economics. If you're planning on getting more of the same drives then it's hard to beat that on cost. (The other 6 drives in my RAIDZ2 are from the same source. ;) )
 
My father runs a very similar server but for seti crunching.

He always runs a 3-drive raid 5. Currently with 3x1TB drives. His system has been down for a little while and I took a look at it today and sure enough one drive of the raid 5 failed. Swapped it out and set the raid to rebuild.

It was a WD enterprise drive too, with a couple years of 24/7 use. Tested the bad drive with WD Data Lifeguard and it won't complete any of the disk tests. Fails everywhere, bad sectors all over the place.
 
I forgot to mention another feature of ZFS that I like: compression. If the files you store are compressible it will increase capacity and throughput (since compression results in less data to read/write.) This won't help with previously compressed data like pictures or video files but for things like documents it will be a win.
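
Turning it on is a one-liner if anyone wants to try it ("tank" is a placeholder pool name; lz4 is the usual low-overhead choice, and it only affects data written after it's enabled):

  zfs set compression=lz4 tank
  zfs get compressratio tank    # see how much space it's actually saving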
 
I use FreeNAS (and OpenMediaVault, but planning to get the OMV setup migrated entirely into my FreeNAS box in a week).

I currently have 4x 2TB drives in a RAIDZ config, but will be rebuilding everything into 6x 4TB in a RAIDZ2 for the extra space (drives taken from my OMV box).
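
For comparison with the mirror layout discussed above, that 6x 4TB RAIDZ2 pool is a single command (names are placeholders); usable space works out to roughly four drives' worth, and any two disks in the vdev can fail:

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5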

The system works quite well as long as you follow the necessary instructions and guides when upgrading the software between major releases (I had to do some back-end CLI work to fix some issues this last time).

I agree with the others to move to more space if you're able, but if you don't have the cash or the need for more space it isn't a big deal.

In other news, regarding expandability, it looks like it will soon be possible to add additional drives to a vdev pool and it will just expand. I saw it posted on /r/freenas a couple weeks ago, apparently a new feature being released in FreeBSD for ZFS.
 
The 2TB drives are a cost thing.

8x 2TB = $260 for WD 5400RPM RE drives (used, from cw here on the forums). I have 4 drives from him in my Synology NAS now and they work great.

Even 4x 8TB drives would run me about $1000 or more. 4x the cost for 2x the space, and 1/2 the performance (a 2-wide stripe of mirrors instead of 4-wide). That's a tough pill to swallow when my storage needs don't currently come anywhere near even 8TB.

+1. I just buy a 2TB drive for $40 and do an online RAID expansion every time I need more space; cheap, economical, and scalable until I get to 12 drives, and then I can just buy a second $20 controller and start adding another 12 drives if need be, lol. With the controller I use you can span a RAID across multiple of them. Currently I'm at 4x 2TB and I have another drive sitting and waiting to be added when I have time. The good thing about having a hardware RAID controller is that you can expand an array online and never have any downtime, though for me to expand from 3x 2TB to 4x 2TB it took around 72 hours, lol.

Doing hardware RAID is super simple if you use the utilities in the operating system. I never have to reboot, mess with OS configs, or make a fuss about anything: just pop the drive in, open the software, "add disk to array", wham bam thank you ma'am. IIRC thideras actually went from ZFS to Windows Server using the built-in Windows software RAID and he seems to enjoy it, well, last I heard.

I'm running Server 2012 Datacenter that I got from a buddy who has DreamSpark. Kind of wanting to try out Server 2016 since it has hardware passthrough.
 
In other news, regarding expandability, it looks like it will soon be possible to add additional drives to a vdev pool and it will just expand. I saw it posted on /r/freenas a couple weeks ago, apparently a new feature being released in FreeBSD for ZFS.
I was under the impression that you CAN add drives to the pool, but not to the vdev.

So to get more space, you just add another vdev, and it all gets striped together (RAID 0) when added to the pool.
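
That is exactly what zpool add does; a minimal sketch with placeholder device names (note that existing data stays where it is, only new writes spread across all vdevs):

  zpool add tank mirror da8 da9    # attach a new mirror vdev to the pool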

That's why I was going to go with the "RAID 10" pool-of-mirrors setup.

So to start I will have 8x 2TB drives,

organized as 4x [2x 2TB mirrors];
the pool stripes together those 4 mirror vdevs.

Down the road I pick up a pair of 4TB or 8TB drives:

swap out one disk and rebuild that individual mirror,
swap out the other disk in the same mirror, rebuild again,
then expand that vdev to its full size. This should then increase the whole pool size.
If I wanted to replace all disks, I'd just do this for all the vdevs (roughly the steps sketched below).
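
In command form, those steps look roughly like this (device names are placeholders; autoexpand just saves the manual expand step at the end):

  zpool set autoexpand=on tank
  zpool replace tank da0 da8    # swap the first disk of the mirror, wait for the resilver
  zpool replace tank da1 da9    # then the second disk of the same mirror
  zpool online -e tank da8 da9  # only needed if autoexpand was left off
  zpool list tank               # the pool size grows once both disks are larger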

This is possible, right? This is what I've gathered from my research.
 