
Project: Rackmount Overkill

It is in the basement, so I don't hear it at all. I could certainly be in the same room, but it might be grating after a while. At my previous residence, my computer was close to the rack, and I got used to it.
 
Ah, the prettiness... I was just wondering how many watts the whole rack draws. I've always wanted a proper home server to fiddle around with, but the initial costs + the power costs keep me away, hehe.

That's why I only play around with a dual-NIC Shuttle SB52G2 :)
 
I'll probably have to test each server individually, but I can find out. I'm going to guess the entire rack draws 650-800W at idle. I'll need to know this number for when I buy a UPS, anyway.
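For the UPS math later: UPSes are sold by VA, but what actually limits them is their watt rating, which is usually quoted alongside. A back-of-the-envelope sketch using my guessed numbers above (not measurements):

Code:
load (idle guess):    ~800 W
1500 VA class unit:   typically rated ~900-1000 W
headroom:             1000 W / 800 W = ~25%  ->  probably the minimum class I'd buy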
 
Tbh, it's not a whole lot to get something going to play around with. Personally, I'm going to be starting a work log for my server build this weekend, and you'll be stunned: I probably have a grand total (not a grand :p) of about 300 bucks, if that, into my server rack build so far.
 
Yeah, if you're lucky enough to live in civilization, I suppose. Some of us are less lucky :D
Either way, I'm quite happy with my Shuttle, which I got for the amazing price of $FREE; I only had to dish out some cash for 2GB of DDR memory, and boy is that hard to find on the cheap... The only thing keeping this Shuttle from becoming a kick-rear firewall/proxy is the fact that one of the NICs is 100Mbit only, and I happen to be on a 100Mbit optical connection with three PCs behind the current router. Not enough bandwidth...

Anyway, terribly sorry for the off-topic... taking my seat and enjoying the show :popcorn:
 
Just got the NICs bonded in the server. I'm not seeing the performance I want, but it is still pretty good. I'm sitting at a live transfer rate of 224MB/sec.

Code:
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 4
    Actor Key: 17
    Partner Key: 4
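
For anyone wanting to duplicate the setup, here's a minimal sketch of building the same bond with iproute2 (assuming a reasonably modern kernel and iproute2), matching the settings above: 802.3ad, slow LACP rate, layer2 hash, 100ms MII polling. The eth0-eth3 names are placeholders for your own NICs, and the switch ports have to be configured as an LACP trunk too:

Code:
# create the bond with the options shown in /proc/net/bonding/bond0
ip link add bond0 type bond mode 802.3ad lacp_rate slow xmit_hash_policy layer2 miimon 100

# enslave the NICs (they must be down first); eth0-eth3 are placeholders
for nic in eth0 eth1 eth2 eth3; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done

# bring it up and address it (example subnet)
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

Worth noting on the speed: with the layer2 hash policy, all traffic between one pair of MACs lands on a single slave, so any single stream tops out at one link. xmit_hash_policy layer3+4 spreads separate TCP flows across slaves better, which may account for some of the missing performance.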
 
Thinking of future expansion, I'm considering getting an HBA, selling my RAID card, and letting mdadm or ZFS take over. I'm really liking the latter, but the memory recommendations I'm finding are worrying and conflicting: some websites say 1GB of RAM per TB of storage, others say 2GB, and I even saw one that said 5GB! An alternative would be to assign an SSD as a cache disk, from what I'm reading.

If anyone has input on ZFS or suggestions for an HBA, let me know.
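
For context, what I mean by letting ZFS handle parity is just handing it the raw disks off the HBA, something like this (pool name and device names are made up):

Code:
# raidz2 = double parity, roughly the ZFS analogue of RAID 6
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zpool status tank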
 
The IBM M1015 is popular for ZFS, with IT firmware, iirc. I've used the Intel SASUC8I as well. When it comes to RAM, more is better; I ran a 16TB array with 16GB of RAM and maxed out gigabit. I did a few tweaks, setting the ZFS cache at 12GB iirc.

That said, when I build another I will definitely add an SSD for cache, given the relatively low cost of SSDs at the current time.
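
If you go that route, capping the ARC is just a module option on Linux, and the SSD cache is a single command. A rough sketch from memory (the 12GB figure is what I used; /dev/sdg stands in for the SSD):

Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 12GB (value is in bytes)
options zfs zfs_arc_max=12884901888

# add an SSD as an L2ARC read cache to an existing pool
zpool add tank cache /dev/sdg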
 
Aren't those just RAID controllers? If I switch to ZFS, I want it to handle parity.

I'm not interested in switching right this second, so keep me updated on how that drive helps.
 
You flash them with IT firmware so they're just dummy SATA controllers, no onboard calculations.
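
If it helps, the commonly posted M1015 crossflash goes roughly like this, run from a DOS or EFI shell boot disk, with a reboot after the megarec steps. The megarec and sas2flsh tools and the 9211-8i IT firmware come from the usual guides; write down the SAS address from the sticker on the card first (the Xs below are a placeholder for it), and double-check the filenames against whichever guide you follow:

Code:
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
sas2flsh -o -f 2118it.bin
sas2flsh -o -sasadd 500605bXXXXXXXXX

After that the card shows up as a plain LSI SAS2008 HBA and passes the raw disks straight through.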
 
Oh, that seems like an expensive use for a RAID controller. I'm doing this mainly for compatibility with newer drives (4KB sectors and whatever comes after that). Having the raw SATA/SAS ports seems to bypass this completely.
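
On the 4KB sector point specifically, ZFS covers that at pool creation with the ashift property (ashift=12 means 2^12 = 4096-byte sectors), so something like this, names made up again:

Code:
zpool create -o ashift=12 tank raidz2 /dev/sd[a-f]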
 
The IBM M1015 is popular for ZFS, with IT firmware, iirc. I've used the Intel SASUC8I as well. When it comes to RAM, more is better; I ran a 16TB array with 16GB of RAM and maxed out gigabit. I did a few tweaks, setting the ZFS cache at 12GB iirc.

That said, when I build another I will definitely add an SSD for cache, given the relatively low cost of SSDs at the current time.

I replaced both of my Supermicro controllers with the IBM version of the SAS8.

ZERO issues, no firmware update needed, and the cards see ALL my drives with no problems. Prior to switching, the Supermicro controllers were "dropping" drives randomly.
 
I was reading every page until my eyes started to hurt. So what is your purpose for the servers? :]
 
File server: Storing files, sharing files, backups/archives, and virtual machine images are the top uses.
IBM x3650s: Virtual machines
Dell 2650: Icinga monitoring server, database server
 