I'm going to do RAID 10 across 8 disks, i.e. 4x RAID 1 (mirror) vdevs.
poolsize = vdev0+vdev1+vdev2+vdev3
What's good with ZFS is that I can run different-size vdevs, so I don't have to upgrade ALL the drives; I can do 2 at a time.
So I'll start with 8TB usable across the 4x 2TB RAID 1s.
Say in the future I can pick up 2x 8TB drives at a good price:
pop one drive out of vdev0, put in an 8TB, then rebuild (a straight copy, fast, no parity calc);
pop in the second 8TB drive, rebuild, expand.
Then I'll be left with 2+2+2+8 = 14TB, and so on.
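The layout and the two-drive swap described above could be sketched with zpool commands roughly like this (the pool name `tank` and the device names are placeholders, and I'm assuming the pool's `autoexpand` property is used to grow the vdev once both disks are replaced):

```shell
# Create a pool striped across 4 two-way mirror vdevs (device names are examples)
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh

# Allow a vdev to grow once every disk in it has been replaced with a larger one
zpool set autoexpand=on tank

# Swap the first mirror's disks for 8TB drives, one resilver at a time
zpool replace tank sda /dev/disk/by-id/new-8tb-a   # mirror resilver: straight copy
zpool replace tank sdb /dev/disk/by-id/new-8tb-b   # vdev expands after this finishes
```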
Say instead I do 2x RAID 5 with the 8 disks. That's 2x 4-disk RAID 5 (raidz) vdevs; with both in the pool, it essentially becomes RAID 50.
poolsize = 6TB + 6TB = 12TB. I have 2-disk redundancy, but only if the failures land in separate vdevs.
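The back-of-envelope capacity math for the two layouts, assuming a mirror contributes one disk per pair and raidz1 contributes (n-1) disks per vdev:

```shell
# Usable TB: 4x 2TB mirrors vs 2x 4-disk raidz1 vdevs of 2TB drives
mirrors=$(( 4 * 2 ))            # one 2TB disk of usable space per mirror pair
raidz=$(( 2 * (4 - 1) * 2 ))    # two vdevs, (4-1) data disks each, 2TB drives
echo "striped mirrors: ${mirrors}TB  2x raidz1: ${raidz}TB"
# prints: striped mirrors: 8TB  2x raidz1: 12TB
```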
Rebuilds are significantly more time-consuming because of the parity calculation, and upgrading drives has to be done 4 at a time:
replace drive 1, rebuild whole array
replace drive 2, rebuild whole array
replace drive 3, rebuild whole array
replace drive 4, rebuild whole array
expand.
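Those four replace-and-rebuild steps could be scripted as below; each `zpool replace` kicks off a full resilver of the raidz vdev, and the loop has to wait it out before touching the next drive (pool and device names are hypothetical):

```shell
# Upgrade one 4-disk raidz1 vdev: four sequential replaces, four full resilvers
for old in sda sdb sdc sdd; do
  zpool replace tank "$old" "/dev/disk/by-id/new-8tb-$old"
  # block until the parity resilver finishes before swapping the next disk
  while zpool status tank | grep -q "resilver in progress"; do
    sleep 300
  done
done
```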
That's a lot of stress on the drives, and that stress increases the probability of a failure during the rebuild; then I lose everything.
And the striped 4x mirrors should be faster than the striped 2x RAID 5.
I dunno, I'm still not set in stone, but I think I like 10 better.
That's why they say RAID is not a backup; you should always back up to an external storage device. It does indeed take a long time to rebuild a RAID 5, though: going from 3x 2TB to 4x 2TB took 3 days, but thanks to my controller's online RAID expansion nothing went offline, and I was still able to cap my gigabit connection reading from it while it was expanding.
Why do you feel the need to do two separate arrays, then?
Not trying to talk you out of doing what you are doing, just trying to understand it.