How loud is that rack? Can you tolerate being in the same room?
I'll probably have to test each server individually, but I can find out. I'm going to guess the entire rack idles at 650-800 W. I'll need to know that number for when I buy a UPS, anyway.
Ah, the prettiness... Was just wondering how much W the whole rack draws. I've always wanted a proper home server to fiddle around with, but the initial costs + the power costs keep me away, hehe.
That's why I play around only with a dual NIC Shuttle SB52G2
Tbh, it's not a whole lot to get something going to play around with. Personally, I'm going to be starting a work log for my server build this weekend, and you'll be stunned: I probably have a grand total (not a grand!) of about 300 bucks, if that, into my server rack build so far.
Yeah, if you're lucky enough to live in civilization.
I happen to be on a 100Mbit optical connection
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 4
Actor Key: 17
Partner Key: 4
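As a quick sanity check, the aggregator details above can be pulled straight out of the bond's status file. A minimal sketch, parsing a sample snippet like the one pasted here (on the live box you'd read /proc/net/bonding/bond0 instead):

```shell
# Sample of the 802.3ad status output shown above; on a real system,
# substitute: cat /proc/net/bonding/bond0
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Number of ports: 4'

# Pull the port count out of the active aggregator info.
ports=$(printf '%s\n' "$sample" | awk -F': ' '/Number of ports/ {print $2}')
echo "active aggregator has $ports ports"
```

If "Number of ports" is lower than the number of slaves, one of the links didn't negotiate into the aggregator, which is the usual first thing to check when LACP throughput comes up short.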
Is it a firewall that you bonded? Are you doing NLB?
The other four gigabit NICs are going to be aggregated into one link (mode=4, 802.3ad), which should give me around 465 MB/sec of total throughput.
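The ~465 MB/sec figure works out as raw line rate minus framing overhead. A back-of-the-envelope sketch (the 93% efficiency factor is my own rough assumption for Ethernet/IP overhead):

```shell
# Four 1 Gbit/s links aggregated; estimate usable throughput in MB/s.
links=4
gbit=1000000000      # bits per second per link
efficiency=93        # % of raw line rate left after framing overhead (assumption)

# 4 Gbit/s -> bytes/s -> apply efficiency -> MB/s
bytes=$(( links * gbit / 8 * efficiency / 100 / 1000000 ))
echo "theoretical aggregate: ~${bytes} MB/s"
```

Worth remembering that 802.3ad hashes traffic per flow, so a single TCP stream still tops out at one link's worth (~116 MB/s); the aggregate only shows up across multiple clients or streams.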
Thinking of future expansion, I'm considering getting an HBA, selling my RAID card, and letting mdadm or ZFS take over. I'm really liking the latter, but the memory recommendations are worrying and conflicting: some websites say 1 GB of RAM per TB of storage, others say 2 GB, and I even saw one that said 5 GB! An alternative, from what I'm reading, would be to assign an SSD as a cache device.
If anyone has input on ZFS or suggestions for an HBA, let me know.
Aren't those just RAID controllers? If I switch to ZFS, I want ZFS itself to handle parity.
I'm not interested in switching right this second, so keep me updated on how that drive helps.
The IBM M1015 is popular for ZFS, flashed with IT-mode firmware IIRC. I've used the Intel SASUC8I as well. When it comes to RAM, more is better: I ran a 16 TB array with 16 GB of RAM and maxed out gigabit. I did a few tweaks, setting the ZFS ARC cache at 12 GB IIRC.
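For anyone wanting to replicate that 12 GB cache tweak on Linux, the ARC ceiling is a single module option. A sketch that just prints the line you'd put in /etc/modprobe.d/zfs.conf (the 12 GiB value is the figure from above, not a universal recommendation):

```shell
# Cap the ZFS ARC at 12 GiB via the zfs_arc_max module parameter (bytes).
# Printed here rather than written, since the right size is box-specific.
arc_max=$((12 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${arc_max}"
```

After adding the line, the limit takes effect when the zfs module is next loaded (or at boot once the initramfs is rebuilt on distros that load zfs early).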
That said, when I build another one I will definitely add an SSD for cache, given how relatively cheap SSDs are these days.