Howdy folks.
I'm looking for those who may have more first-hand experience in this arena than I do. I'm a network guy by trade, not explicitly a server guy, so I defer to those who work that side of the house. I say pseudo-advanced because this isn't really advanced in the strictest sense, but it perhaps is for most home builds.
I currently have a server that I built, as seen in my sig. I've acquired an old storage chassis with 180TB of storage. It's got a Supermicro X9DRX with 2x E5-2687W CPUs; the memory I've already cannibalized for the server in my sig. It has 8x LSI 9207-8i cards handling the storage, and I have some 10Gb NICs that I'll be popping in.
I'm undecided whether I'm going to use this as a dedicated NAS box, or migrate my other host, which is my VM host, over to it. I'm also looking to potentially upgrade to an X10DRX board, with CPUs yet to be decided.
That said, the question revolves more around the storage aspect of it. I'm not intending to use all 180TB, as that's just ridiculous. I'm more curious to know if there are any obvious performance advantages to spreading drives across the 9207 cards versus hanging 24 drives off a single card. While I appreciate that the obvious answer is "it depends on the use case," for the purposes of this discussion let's just say there's a single pool in use with mirrored vdevs.
I'm not concerned about SPOF and the like; this is purely a performance discussion.
That said, let's say I have 24 disks, split into 12 two-disk mirrored vdevs. I can either load these onto a single 9305-24i, or split the vdevs across three of the 9207s. So, the main question is: any benefit to one over the other here?
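For illustration, here's a minimal sketch of that split from the shell, assuming the three 9207s enumerate their disks as da0-da7, da8-da15, and da16-da23 (device names are hypothetical; FreeNAS would normally build this through the GUI with gptid labels, so this is purely to show the topology):

  # each mirror pairs disks from two different HBAs,
  # so no vdev depends on a single controller
  zpool create tank \
    mirror da0 da8   mirror da1 da9   mirror da2 da10  mirror da3 da11 \
    mirror da4 da16  mirror da5 da17  mirror da6 da18  mirror da7 da19 \
    mirror da12 da20 mirror da13 da21 mirror da14 da22 mirror da15 da23

On the single 9305-24i it'd be the same command with whatever 24 names that card presents; the resulting pool layout is identical either way.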
Obviously, if I have more disks, I either skip the 9305 and use more of the 9207s, or I get another 9305 and split the vdevs across those.
I'd gather there are diminishing returns, plus there's the NUMA aspect of these cards being split across two CPUs' PCIe lanes. Whether that has any practical effect...
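To see which disks actually landed behind which controller, and where each HBA sits on the PCI bus, something like this from the FreeBSD shell should do it (exact output varies by version):

  # list each CAM controller (mps0..mps7 for the 9207-8is) and its attached disks
  camcontrol devlist -v

  # show the PCI address of each LSI HBA
  pciconf -lv | grep -B 3 -i lsi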
Now, to add to this, I've also acquired 6x Intel 520 series 120GB SSDs that I may use for SLOG. That whole setup is TBD. Memory is also undecided, as it will depend on the overall plan for the server. Currently, I have 64GB allocated to FreeNAS and no SLOG.
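One caveat I'm aware of: a SLOG only comes into play for sync writes (NFS, iSCSI, databases), and the 520 series has no power-loss protection, which is normally wanted in a SLOG device. If I do go that route, adding a mirrored log device is a one-liner, assuming the SSDs show up as da24 and da25 (hypothetical names):

  # mirrored SLOG; async writes won't touch it
  zpool add tank log mirror da24 da25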
For the record, I'm currently getting ~350MB/s write performance across a 10Gb network, hence this discussion. That's across both pools (3x and 4x two-disk mirrored vdevs) of the WD Reds listed in the sig, hence my intent to go wider in the hopes of getting more performance. The disks I've acquired are 3TB HGST DK7SAD300s.
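As a sanity check on that number, the network and the pool can be tested independently to see which one is the ceiling; a rough sketch, with the iperf version and paths depending on the setup:

  # raw network throughput, no disks involved
  iperf3 -s                  # on the FreeNAS box
  iperf3 -c 192.168.1.10     # on the client (address is hypothetical)

  # local pool write speed, no network involved; disable compression on the
  # test dataset first, or the zeros will compress to nothing
  dd if=/dev/zero of=/mnt/tank/test.bin bs=1M count=16384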
Also for the record, the whys are irrelevant. The hardware is already here, so this is part technical exercise, part hobby, part "I want to."
If I'm missing any relevant info, please let me know.