
ZFS stripes, mirrors and RAIDz setups


Mpegger

I finally got a 16-bay drive enclosure for the rebuild of my ZFS file server/ESXi box, and am still unsure and confused on what the drive setup should be.

Nearly all of the NAS usage comes from me in the household. Hardly anyone else uses it, other than the SageTV PVR that stores recorded TV shows on the pool (and even that is hardly used nowadays). So I've decided not to try to maximize space in the pool, but to maximize speed, as I really want to set up an iSCSI drive with MPIO for my main PC to house mostly my games, and to let the pool run VMs directly off it without the horrible speeds I had before.

I currently have ten 2TB drives in my ZFS pool, plus two spare 2TB drives kept out of the pool as backups, for a total of twelve 2TB drives. If I go with the simplest paired-mirror setup for the best read speeds, I would end up with 12TB of storage: 6 pairs of 2TB drives, each pair mirrored, with the 6 mirrors striped, in essence a RAID10 setup. So far I understand all that.
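Something like this is roughly the layout I'm picturing (pool and device names are just placeholders for whatever I actually end up using):

# Six mirrored pairs; ZFS stripes across the mirror vdevs automatically
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11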

What I am confused about is: if (when) I add 4 more 2TB drives to create 2 more mirrored pairs, how exactly does that get added into the already-created pool? :confused:
 
If you add more vdevs, it simply extends the space of the pool. Data is not automatically rebalanced across the new drives.
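For example, adding two more mirrored pairs would look roughly like this (pool and device names are just examples):

# Each "zpool add ... mirror" attaches a new mirror vdev to the existing pool
zpool add tank mirror da12 da13
zpool add tank mirror da14 da15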
 

Is there a way to rebalance? If there isn't, it would make adding any more drives later moot, and I should probably just add 1 or 2 extras as hot spares instead.
 
New writes are spread across all the vdevs, so rewriting the data rebalances it. If you were to move the data between pools, a rebalance would effectively happen.

There is no explicit feature you can use to do it for you.
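The usual workaround is to rewrite the data yourself, roughly like this (dataset names are just examples, and you need enough free space for a second copy):

# Copy the dataset within the pool so its blocks get rewritten across all
# vdevs (old and new), then swap the names
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data.new
zfs destroy -r tank/data
zfs rename tank/data.new tank/data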
 
Maybe I'll just stick with a 10- or 12-disk setup for the main pool, then, and use a second pool strictly for backups of the main pool's important data (I know, backup should be a separate PC, but I really want to cut down on the number of physical PCs running). There is no point in adding more vdevs to the main pool later if I have VMs on it and the data won't get rebalanced.

One other question I have is about LACP and SMB from the ZFS share. I have read that serving files via SMB is limited to one connection per user, but other than that, the specifics of the limitation weren't very clear. I will have 5 ports on my main PC, and I plan on dedicating at least 3 to iSCSI duty with the ZFS box. That would leave 2 ports to access the SMB shares/VMs on the server. I'd like to speed up file access over SMB, but if SMB behind LACP only allows 1 file to be read at a time, regardless of how many links are available, then I'm probably better off with 4 ports for iSCSI, though 2 ports might still be good for SMB and VM access at the same time.
 
I'm not sure what they mean by "one connection per user". I'm connected to multiple shares from the same system and I've never had an issue. I've also had way more than one file open at a time.
 
From the very brief description, I take it to mean that SMB will never use more than 1 connection between host and client, even in a LACP setup. So yes, you can read multiple files, but it wouldn't use more than 1 link. Wish I had the quad NIC here already to test it out, but I'm not even sure how to test such a case.
 
I have a 4 connection LACP setup on both my servers. A single connection won't use more than one link, which is what I think they mean. But that is a base "rule" of LACP.
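If you want to test it once the quad NIC is in, iperf is the easy way to see it (the hostname is a placeholder):

# On the ZFS box:
iperf3 -s
# From the client, a single stream tops out around 1Gb no matter how many
# links are in the LAG:
iperf3 -c nas.local -t 30
# Parallel streams between the same two IPs usually hash onto the same link
# as well, so the aggregate only climbs when a second client with a
# different IP runs its own test at the same time:
iperf3 -c nas.local -P 4 -t 30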
 

Samba, on a single NIC, can handle hundreds of user connections easily. It wouldn't be very useful otherwise. For example, here at home I have a Samba share on a computer with a single NIC, and it serves 3 computers concurrently plus whatever random client you throw at it (e.g. the wife connecting to play with her pictures).

The limitation of a single NIC is how fast access is when a big file transfer is occurring.
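If you're curious what Samba counts as a connection, running smbstatus on the server lists the current sessions, the shares each client has open, and any locked files:

smbstatus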
 
Rereading the official VMware posting about LACP, it does indeed seem like a limitation of LACP itself, even if "Route Based on IP Hash" is used. So with that in mind, I will dedicate the quad-port NIC to the iSCSI setup I plan to implement between my PC and the server. 4 Gb ports and 6 striped mirrors almost make me want to move everything over to the server and just have a single SSD in my main PC. It would keep everything nicely organized.

However, that's still a week or two away, as the NIC I received yesterday is anything but "fully tested working 100%" as the seller stated. Numerous SMD caps are broken or missing on the rear of the quad NIC (I could see all of it without even opening the static bag), so now starts the process of getting it replaced by them, or getting my money back. :mad:
 
Well, that was easier than I thought. The seller immediately refunded the full amount. I'm still out a quad NIC, though, and will need to scour eBay for another.

However, the damage may be repairable. I'd have to check the pads (most look fine to the naked eye), but more importantly, I'd need to find out what kind of SMD component was on the rear corner of the card if it's to be repaired. Guess it can be a backup if I can fix it.
 
Just thought of another question after I got in the new quad-port NIC. Can the PC connect directly to the server, without traversing the switch, when setting up a multi-link iSCSI connection? Would it require crossover cables, or will the gigabit ports auto-configure? Would there need to be any special software-side configuration to get this to work (IPs in the VMs, the ESXi vSwitch configured in a specific way)?
 

In general, most NICs nowadays auto-sense, so crossover cables aren't needed. You would want to have the IPs statically assigned.
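The usual approach for MPIO is to give each direct cable run its own small subnet, something like this (addresses and interface names are only examples, and the commands assume a FreeBSD-style storage VM):

# Link 1: PC 10.0.1.1/24 <-> server 10.0.1.2/24
# Link 2: PC 10.0.2.1/24 <-> server 10.0.2.2/24
# Link 3: PC 10.0.3.1/24 <-> server 10.0.3.2/24
ifconfig igb1 inet 10.0.1.2 netmask 255.255.255.0
ifconfig igb2 inet 10.0.2.2 netmask 255.255.255.0
ifconfig igb3 inet 10.0.3.2 netmask 255.255.255.0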
 
If you are looking for a lot of speed, consider adding SSDs to the mix. If you want better write speed, add two mirrored SSDs as "log" devices (for the ZFS Intent Log). For better read speed, fill up the box with as much memory as possible, and if that is not enough, add a single SSD as L2ARC. In most cases L2ARC is unnecessary, though. Your best bet is to test different configurations and see what works best: one big raidz2 array with a hot spare, a RAID10-like setup (stripe of mirrors), or several smaller raidz vdevs to spread out the IO.
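Roughly, adding those to an existing pool looks like this (pool and device names are only examples):

# Mirrored SLOG devices for the ZFS Intent Log (helps sync writes)
zpool add tank log mirror ssd0 ssd1
# A single cache device for L2ARC (helps reads that spill out of RAM)
zpool add tank cache ssd2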
 

I shall +1 this and add:
I have 3 raidz1 sub-units (vdevs) in my pool.
I connect to the pool from my V-Servers and my PC over Fibre Channel on optical cables.
If you want a lot of bandwidth: I see >200MB/s writes, and iSCSI won't do that without 10Gb Ethernet.
When I expand, I have to add three disks at a time (referenced by WWN, as it is unique):

zpool add ZStore raidz WWN1 WWN2 WWN3

EDIT:
Mirrors would give faster writes than raidz1 for the same number of disks, as you get one write operation per sub-unit (vdev).
 
Asynchronous writes (sync=disabled) accomplish the same thing without the need for a separate log device (SLOG). You just need plenty of RAM (which I have now) and a UPS as a safety net. I know async is not recommended, but the nature of ZFS itself means the chance of corrupted data being written because of a sudden power loss or crash is next to nil, since all write transactions are complete transactions: a transaction either completes 100% or it doesn't happen at all. So setting everything to async will save me the cost of SSD drives, and be faster than any SSD I could afford to use as a SLOG.
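In other words, just something like this on the dataset backing the iSCSI target and VMs (the dataset name is whatever I end up calling it):

# Treat all writes as async; fast, but in-flight writes are lost on a crash
zfs set sync=disabled tank/vmstore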

And 4 × 1Gb connections with MPIO = a 4Gb link, which can theoretically transfer up to about 500MB/s, which should be almost achievable with 14 drives in a RAID10 setup ;) (I picked up some more drives over the last couple of days, as I'm almost ready to finally start setting up the system).

I'm also going with the Enterprise edition of the Lackrack till I can put something together. :D
 