
Pseudo-advanced storage server build


Railgun (Member, joined May 7, 2011, Cook->Kent)
Howdy folks.

I'm looking for those that may have more first-hand experience in this arena than I do. I'm a network guy by trade, not explicitly a server guy, so I defer to those that work that side of the house. I say pseudo-advanced as this isn't really advanced in the strictest sense, but perhaps it is for most home builds.

I currently have a server that I built as seen in my sig. I've acquired an old storage chassis with 180TB of storage. It's got a SuperMicro X9DRX with 2x E5-2687W CPUs; the memory I've already cannibalized for the server in my sig. It has 8x LSI 9207-8i cards handling the storage, and I have some 10Gb NICs that I'll be popping in.

I'm undecided whether I'm going to use this as a dedicated NAS box, or migrate my VM workload from my other host to this one. I'm also looking to potentially upgrade to an X10DRX board, with CPUs yet to be decided.

That said, the question revolves more around the storage aspect of it. I'm not intending to use all 180TB as that's just ridiculous. I'm more curious to know if there's any obvious performance advantage to spreading the drives across the 9207 cards versus running 24 drives off a single card. While I appreciate that the obvious answer is "it depends on the use case," for the purposes of this discussion let's just say there's a single pool in use with mirrored vdevs.

I'm not concerned about SPOF and the like, this is purely a performance discussion.

That said, let's say I have 24 disks, split into 12 two-disk mirror vdevs. I can either load these onto a single 9305-24i, or spread the vdevs across three of the 9207s. So, the main question is: any benefit to one over the other here?

Obviously if I have more disks, I either don't use the 9305 and use more of the 9207s, or I get another 9305, and split the vdevs across those.

I'd gather there are diminishing returns, plus there's the NUMA aspect of splitting these cards across multiple CPUs. Whether that has any practical effect...
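
As a side note, if I end up caring about the NUMA side of it, something like the sketch below would at least show which socket each HBA hangs off. It reads Linux sysfs, so it would have to be run from a Linux live USB on the box rather than from FreeNAS itself (FreeBSD exposes this differently); it's only a rough sketch under that assumption.

```python
#!/usr/bin/env python3
"""Rough sketch: list which NUMA node each LSI HBA sits on.
Reads Linux sysfs, so this assumes booting a Linux live environment,
not FreeNAS/FreeBSD."""
import glob
import os

LSI_VENDOR_ID = "0x1000"  # PCI vendor ID used by LSI/Broadcom SAS HBAs

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "vendor")) as f:
            if f.read().strip() != LSI_VENDOR_ID:
                continue
        with open(os.path.join(dev, "device")) as f:
            device_id = f.read().strip()
        with open(os.path.join(dev, "numa_node")) as f:
            node = f.read().strip()  # -1 means no NUMA affinity reported
        print(f"{os.path.basename(dev)}  device={device_id}  numa_node={node}")
    except OSError:
        continue
```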

Now, to add to this, I've also acquired 6x Intel 520 series 120GB SSDs that I may use for SLOG. That whole setup is TBD. Memory is also undecided, as it will depend on the overall plan for the server. Currently, I have 64GB allocated to FreeNAS and no SLOG.
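
On the SLOG question, my rough plan is to first check how much of my traffic is actually synchronous, since a SLOG only helps sync writes. A quick-and-dirty sketch along these lines (the /mnt/tank/test path is just a placeholder for a dataset on the pool) would compare plain buffered writes against fsync-per-write; if the two numbers aren't far apart, the 520s won't buy much as SLOG.

```python
#!/usr/bin/env python3
"""Minimal sketch: buffered vs fsync-per-write throughput on a dataset.
TARGET is a placeholder path; point it at the pool before running."""
import os
import time

TARGET = "/mnt/tank/test/slogtest.bin"   # hypothetical path on the pool
BLOCK = os.urandom(128 * 1024)           # 128 KiB of random data per write
COUNT = 2048                             # ~256 MiB total

def run(sync_each_write: bool) -> float:
    start = time.time()
    with open(TARGET, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if sync_each_write:
                f.flush()
                os.fsync(f.fileno())     # force the write out, exercising the ZIL/SLOG path
    os.remove(TARGET)
    mib = len(BLOCK) * COUNT / (1024 * 1024)
    return mib / (time.time() - start)

print(f"buffered       : {run(False):.0f} MiB/s")
print(f"fsync per write: {run(True):.0f} MiB/s")
```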

For the record, I'm currently getting ~350MBps write performance across a 10Gb network, hence this discussion. This is across both pools of 3x and 4x two-disk mirror vdevs with the WD Reds listed in the sig, which is why I intend to go wider in the hope of getting more performance. The disks I've acquired are 3TB HGST DK7SAD300s.
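
Rough napkin math on the scaling, purely as a sketch: assuming each two-disk mirror writes at roughly the speed of a single spinner (the ~170 MB/s figure below is an assumption, not a measurement), the pool's write ceiling should scale with vdev count until the 10GbE link caps it.

```python
# Back-of-the-envelope sketch: mirror-vdev write scaling vs a 10GbE link.
# The per-disk figure is an assumption, not a measured number.
PER_DISK_WRITE_MBPS = 170          # rough sequential write speed of one spinner
TEN_GBE_MBPS = 10_000 / 8 * 0.95   # ~1190 MB/s usable on 10GbE after overhead

for vdevs in (3, 4, 7, 12):
    # A two-disk mirror writes at roughly one disk's speed; vdevs stripe.
    pool_ceiling = vdevs * PER_DISK_WRITE_MBPS
    effective = min(pool_ceiling, TEN_GBE_MBPS)
    print(f"{vdevs:2d} mirror vdevs: pool ~{pool_ceiling:4.0f} MB/s, "
          f"effective ceiling ~{effective:4.0f} MB/s")
```

By that math the 350MBps I'm seeing is already below what even 3-4 vdevs should manage sequentially, so something other than raw disk count may be in play, but that's exactly the kind of thing I want to test.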

Also for the record, the whys are irrelevant. The HW is already here so this is part technical exercise, part hobby, part I want to.

If I'm missing any relevant info, please let me know.
 
I'd just use the 9305 and put all the disks on that. Three 6Gb/s controllers vs one 12Gb/s controller, I'm guessing the 12 wins. Not sure it really matters though with 24 disks, as that's about how many it takes to saturate 6Gb/s with spinners.
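
Rough numbers on that, treating the per-disk rate as an assumption and using the cards' nominal specs (the 9207-8i is an 8-lane 6Gb/s SAS2 card and the 9305-24i a 24-lane 12Gb/s SAS3 card, both in PCIe 3.0 x8 slots):

```python
# Napkin math: 24 spinners on one 9305-24i vs spread over three 9207-8i cards.
# Per-disk speed is an assumption; SAS/PCIe figures are nominal, not measured.
DISK_MBPS = 200                 # assumed sequential rate of one spinner
DISKS = 24

PCIE3_X8_MBPS = 7_880           # theoretical PCIe 3.0 x8 slot bandwidth
SAS2_LANE_MBPS = 600            # 6 Gb/s lane; the 9207-8i has 8 of them
SAS3_LANE_MBPS = 1_200          # 12 Gb/s lane; the 9305-24i has 24 of them

disks_total = DISKS * DISK_MBPS
one_9305 = min(24 * SAS3_LANE_MBPS, PCIE3_X8_MBPS)
three_9207 = 3 * min(8 * SAS2_LANE_MBPS, PCIE3_X8_MBPS)

print(f"24 spinners aggregate : ~{disks_total} MB/s")
print(f"one 9305-24i ceiling  : ~{one_9305} MB/s (PCIe slot limited)")
print(f"three 9207-8i ceiling : ~{three_9207} MB/s")
```

Either way the controllers have more headroom than 24 spinners can deliver, so for sequential work I'd expect the drives, not the HBA topology, to be the limit.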
 
Hi,

In my opinion you need to look at it as if you were the data travelling through your system. It will be workflow-dependent. When I was using spinning and SSD disks, I would use multiple controllers (a couple of LSI 8-channel cards and an Areca 24-channel) and split my data into different RAID-0 arrays.

Run a few tests; it will be interesting to see the results.
 
In an initial test, I used 15 two-disk mirror vdevs and it reported 30GB/s reads across all the 9207s. I'm not convinced this is overly representative of reality; that was a default setup. I've since stripped this down, as this server is a literal jet liner when running (it's an old Scalable Informatics Jackrabbit).

I'll need to set it up again and do some additional tests. I also need to bring my main server down to borrow the bigger LSI card to test with. And I have several SSDs to play with for testing different caching setups.
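
When I do, I'll probably start with something dumb and sequential along these lines, with a test file comfortably bigger than RAM so the readback can't be served from ARC. The path and size are placeholders, and the test dataset would need compression off (or random data, as here) so the numbers aren't flattered.

```python
#!/usr/bin/env python3
"""Crude sequential write/read test; PATH and SIZE_GIB are placeholders.
Use a file well over the 64GB of RAM so reads actually hit the disks."""
import os
import time

PATH = "/mnt/tank/test/seqtest.bin"   # hypothetical dataset on the pool
SIZE_GIB = 128
BLOCK = os.urandom(1024 * 1024)       # random 1 MiB block, resists lz4 compression
BLOCKS = SIZE_GIB * 1024

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())
print(f"write: {SIZE_GIB * 1024 / (time.time() - start):.0f} MiB/s")

start = time.time()
with open(PATH, "rb") as f:
    while f.read(len(BLOCK)):
        pass
print(f"read : {SIZE_GIB * 1024 / (time.time() - start):.0f} MiB/s")
os.remove(PATH)
```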

I'll initially have 10Gb to the box, and depending on whether I get an additional card for my PC, it may or may not grow.

Workflow is somewhat irrelevant at the moment as this is only a technical exercise.
 
Wow, 30GB/s is impressive, but as you said, is it realistic?

How did you test it? With what?
 