
[Build Log] Building out a home lab


Stratus_ss

I thought I would post this in case anyone was going through the same decision process. A forewarning: this is not meant to be a cheap build. It will be cheaper than buying pre-built systems, but it is by no means a "budget" build.

PREAMBLE

So I work with OpenShift a lot; in fact, for the last year and a half, that and Ansible have pretty much been my world.

My current/previous setup has kind of evolved into its current incarnation. It's a server with the following:

Code:
System:    Host: stratus-server Kernel: 3.10.0-514.2.2.el7.x86_64 x86_64 (64 bit)
                 Distro: CentOS Linux release 7.3.1611 (Core)
Machine:   Device: desktop Mobo: ASUSTeK model: KGP(M)E-D16 v: Rev 1.xxG serial: AT1G22021900174
CPU1:       AMD Opteron 6276 (-HT-MCP-) speed: 2300 MHz (max)
CPU2:       AMD Opteron 6276 (-HT-MCP-) speed: 2300 MHz (max)
Network:   Card-1: Intel 82574L Gigabit Network Connection driver: e1000e
              Card-2: Intel 82574L Gigabit Network Connection driver: e1000e
Memory:    Used/Total: 55390.1/96502.1MB
Drives:      HDD Total Size: 14241.4GB (5.1% used)
            ID-1: /dev/sda model: ST3000VN000 size: 3000.6GB
            ID-2: /dev/sdb model: ST3000VN000 size: 3000.6GB
            ID-3: /dev/sdc model: ST3000VN000 size: 3000.6GB
            ID-4: /dev/sdd model: ST3000VN000 size: 3000.6GB
            ID-5: /dev/sde model: Logical_Volume size: 1999.0GB (Mirrored in the raid card)
            ID-6: /dev/sdg model: ADATA_SP580 size: 120.0GB
            ID-7: /dev/sdf model: ADATA_SP580 size: 120.0GB

The drives are arranged in a ZFS pool of mirrors. Because it grew organically, these 4 drives host 20+ KVM guests, act as an NFS server (hosting package caches for clients, Download folders, etc.), and handle important backups. Given the age of the hardware, I decided not to invest further in the current system, as its hardware was released 6+ years ago.
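
For anyone who hasn't used a pool of mirrors before, the rough shape of creating and inspecting one is below. This is a generic sketch, not my actual commands: the pool name and sdX paths are placeholders (in practice you would use /dev/disk/by-id names so the pool survives drive letters shuffling).

Code:
# Hypothetical example -- "tank" and the sdX names are placeholders
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd
zpool status tank   # shows each mirror vdev and its member disks
zpool list tank     # usable capacity is one disk per mirror pair, striped across vdevs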

Software/OS considerations

While I fully endorse Red Hat and truly appreciate their products, I don't want to deal with ongoing support costs. I know they have the Developer access subscription as well, but I am disinclined to use it. So I need something that runs either KVM or VirtualBox (although KVM is the preference; a quick sanity check for KVM support is sketched after the list below). I looked at the following:

FreeNAS
Open Media Vault
Ubuntu
Arch Linux
CentOS
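
The KVM sanity check mentioned above is nothing fancy; it just confirms the CPU exposes hardware virtualization and that the kvm modules load. A generic sketch, not tied to any particular distro:

Code:
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is available
lsmod | grep kvm                     # kvm plus kvm_intel or kvm_amd should be loaded
virsh list --all                     # if libvirt is installed, lists defined guests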

After installing the most recent version of FreeNAS (10 BETA 2), I decided to rule it out because I am less familiar with BSD, and I have read a bit on the forums about people dropping to the CLI for this or that. That's fine, but if I am going to install something with a web UI and then drop to the CLI for routine tasks, I would probably just opt for a headless server.

The jury is still out for me ultimately. I tried out Open Media Vault 3, but it crashed and started throwing errors after I installed the ZFS plugin. I will update this as I consider it further.



Considering the new builds

I started considering my situation, which is quite a bit different now compared to half a decade ago. I run services like Plex (2 instances), Confluence, NextCloud, some usenet-related processes, apt-mirror, an ARK server, a NWN server, a Yum repo and so on. Given this, it's obvious that a single computer would not do.

I considered an actual NAS product like a QNAP, Synology or the like. I ruled these out right away because if I am going to go with a dedicated NAS-style computer, I want to consolidate another 4 drives that I have in a RAID-Z1, which brings the total number of drives to 10 (4 in the RAID-Z1, 4 in the mirrored set and 2 mirrored SSDs for the SLOG/L2ARC). Therefore, the off-the-shelf options were out.
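
For reference, folding those SSDs into an existing pool would look roughly like the sketch below. The pool name and device paths are made-up placeholders; the usual approach is to partition each SSD so a small slice becomes the mirrored SLOG and the rest becomes L2ARC.

Code:
# Hypothetical -- "tank", sdf and sdg are placeholders
zpool add tank log mirror /dev/sdf1 /dev/sdg1   # mirrored SLOG (separate intent log)
zpool add tank cache /dev/sdf2 /dev/sdg2        # L2ARC; cache vdevs are striped, never mirrored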

I also wanted to start moving to 10GbE, because it's always a chicken-and-egg problem: I don't have any 10G NICs, so why get a switch that supports 10G? I also wanted to provide decent network access to the NAS from the new servers under consideration.
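
Once the 10GbE gear is in, I will probably sanity check throughput with iperf3 between the NAS and a VM host. A minimal sketch, with the hostname as a placeholder:

Code:
# On the NAS:
iperf3 -s
# On a client (replace "nas" with the real hostname or IP):
iperf3 -c nas -P 4 -t 30   # 4 parallel streams for 30 seconds; a clean 10GbE link should show ~9.4 Gbit/s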

Additionally, I wanted to reduce power consumption. The current AMD chips are around 115W TDP each. While I want some decent performance, I don't need leading edge; it doesn't take much to improve upon 6+ year old CPUs serving spinning disks.

I considered Atom processors, but what I could find (in a similar price range) was coming to EOL soon. I needed enough power to saturate 10GbE, run a Plex server and maybe one other VM. I also considered Celeron, i3 and i5 chips, but that limited me to boards where I would have to add in the 10GbE, which adds a significant expense if you want more than 1 port on the card. Additionally, while not required, I wanted to have ECC RAM for the ZFS build, which further pushed me towards "server" motherboards.

EDIT: Just caught this article over at The Register talking about the Atom C2000 series chips bricking devices... glad I opted out of the Atom build.

So to sum up the TL;DR, I wanted:

1) Lower power consumption
2) Faster network with 10GBE
3) Existing ZPOOLS to be used
4) Additional data protection with ECC
5) VMs and IOPS are the primary concern for the new setup


Planned Layout


The switches in the diagram below are the Asus XG-U2008. I don't have a rack, as I live in an apartment, so I can't just toss everything in the basement either.


Blank Diagram - Page 1.png



Choosing the parts for the NAS


I initially called up iXsystems and got some info from them. I have bought from them in the past, but this time I felt that while they offered extremely competitive prices, I wasn't looking for a full-on Xeon system, which is what they recommended. After much searching and discussion with workmates, I settled on the following parts (note: the SOC is passively cooled, so I plan on rigging up a 50mm or bigger fan to the heat sink):


Case: Fractal Node 804 ($140)
SOC: Super Micro X10SDV-4C-TLN2F-O ($560)
SATA/SAS Card: SUPERMICRO AOC-SAS2LP-MV8 (8 Port/ 2 mini-sas ports that break out into 4 ports each) ($210)
2 X RAM: Kingston Value 16GB KVR24R17D4/16 ($144 per stick)
PSU: EVGA SuperNOVA 650 G1 120-G1-0650-XR ($110)
Case Fans: COOLER MASTER R4-S2S-124K-GP ($17)
Fan for CPU: StarTech FAN5X1TX3 50mm ($17)
2 X Break out cables: CableDeconn Mini SAS 36P SFF-8087 to 4 SATA 7Pin 90 Degrees ($17 each)
m.2 SATA Boot drive: Intel SSD 540 Series 120GB M.2 ($45)

Note: Prices are in CAD

Total(ish): $1425

Things have been ordered and I am just waiting for them to arrive.

I thought I would do an inventory of the parts that had arrived. Here is what the case looks like:

factal_node_side_panel_zps1f8p2hxs.jpg

factal_node_rear_panel_zpsox2oat8w.jpg

fractal_node_backside_zps2cew4dep.jpg

fractal_node_inside_mobo_zpsutl5qtok.jpg


And the RAM, breakout cables and the M.2 drive:

ram_breakout_cables_m_2_zpsj8dw24qj.jpg


The fans and SAS card:

SAS_card_zpss6oqeh0t.jpg

fans_zpsljwkabtv.jpg



Here is a shot after I took off the heat sink and cleared away the default gunk. (I have read in several places that replacing the default gunk with high-quality stuff can reduce temps by up to 10C.)

IMG_20170225_112421_zpsym9i96c4.jpg


Here is the "fully assembled" front. You can see I have ghetto'd in both the SSDs and the fan on the CPU. The case technically has a spot for 2 SSDs and I would have used them except that after I installed the SSDs I couldn't plug them in. A combination of not having any straight SATA power (they were all 90 degrees ) and the fact that the SSDs were only an inch or two from the edge of the case meant cabling them was actually impossible for the powersupply/cables I had available. I had to flip them upside down and tie them to the case. This was problematic for the dust filter on the bottom but I eventually got that back on as well

IMG_20170225_141741_zpscfk3otez.jpg


Finally, some shots of the back side of the case. I had to bend the power cables because of the small amount of clearance between the PSU and the drives themselves. Not a big deal, but worth noting.

IMG_20170225_135113_zpswxt7lizb.jpg

IMG_20170225_135032_zpsw0ewdqvd.jpg
 
Here is the Bonnie++ comparison. These results are from a compressed dataset which uses LZ4. Since I literally lifted the pool and put it in a different location, the amount of free space on the pool remained constant.

Screenshot from 2017-02-26 19-58-05.png

What is unclear is how much of the gain came from moving to a newer kernel (4.8.0-39-generic vs 3.10.0-514.6.1.el7.x86_64) and how much was related to the actual hardware change. I will consider installing CentOS 7 on a different SSD if there is enough interest.
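
For anyone who wants to reproduce the numbers, the general shape of the run is below. These are not necessarily my exact flags, and the dataset/user names are placeholders; the main thing is sizing the test file well past RAM so the ARC doesn't cache everything.

Code:
zfs get compression tank/bench   # confirm lz4 is enabled on the dataset being tested
bonnie++ -d /tank/bench -u someuser -s 64g -n 0
# -d test directory, -u user to run as, -s file size (roughly 2x installed RAM), -n 0 skips the small-file tests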
 
New SOC arrives for the NAS...

So I finally got the new SOC in. I plugged it in and.... nothing! Damnit, did I get another DOA? On the suggestion of my wife, I pulled a spare PSU out of the closet. After I plugged that in, the board fired right up and posted.

To be clear, the defective PSU was powering the board but the system would not post. Fans spun, the LED on the board lit up, but no signal out of the video card.

I called NewEgg and have an "Advanced Replacement" PSU on the way. I decided to go ahead with the spare PSU even though it doesn't have enough SATA power connectors for the full build; I wanted to get something up and running.


Picking the OS...

With a working PSU in place, I finished the build and installed the OS. I had been debating which OS to use and I landed on Ubuntu Server 16.04. This was a tough one for me. I took a look at OpenMediaVault, but when I installed the ZFS module to import my existing pools, it errored out and crashed, so that was a no go. I looked at FreeNAS 9, but after reading about their jail implementation I ruled that out. From my understanding, they had to do some janky work to get things working properly and the developers weren't happy with it. I discovered this while researching FreeNAS 10 and jails.

I looked at FreeNAS 10; they are implementing some sort of Docker support, which is ultimately not what I want. I have a Docker infrastructure and it didn't make sense to me to split it across platform types, especially since, to my knowledge, Docker is either not native to BSD or is in a fledgling state (I couldn't tell which).

That left me with CentOS or Ubuntu. Up until 2 weeks ago I thought I had settled on CentOS. I mirror the repos locally, I have most of my VMs and VM hosts running CentOS, and I have a vested interest in the ecosystem. However, I had multiple problems with one of my VM hosts this past week that dissuaded me from going that route.

The first problem is a recurring one: a kernel or ZFS update comes down and the module doesn't get rebuilt properly. Ultimately, not the biggest problem in the world; it's annoying but a known thing and easily(ish) dealt with. More importantly, for whatever reason an update made the ext2 drivers unavailable... bizarre! This meant that my system would only boot into rescue mode because it didn't have a /boot. I had to rebuild the initramfs, create a temporary boot partition, reconfigure grub, reboot, mount my actual boot partition and (I thought) copy the files into the new partition. However, doing this resulted in an unbootable system again.

I ended up creating an XFS /boot partition and copying the files over, and everything is happy. What the hell happened, I don't know. I couldn't figure it out after poking at it for a while. I have never had this happen before on any system, let alone CentOS. This by itself would not have stopped me from using CentOS. However, those two issues combined, coupled with the fact that Ubuntu Server 16.04 comes with ZFS support from Canonical, clinched it for me.
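
For the curious, the fix that finally worked looked roughly like the following. This is from memory and the device names are placeholders, so treat it as a sketch rather than a transcript.

Code:
# From the rescue environment -- /dev/sdX2 and the paths are placeholders, not my real layout
mkfs.xfs -f /dev/sdX2                                           # new XFS /boot (the ext2 driver had gone missing)
mkdir -p /mnt/newboot && mount /dev/sdX2 /mnt/newboot
cp -a /oldboot/. /mnt/newboot/                                  # copy the kernels and initramfs images over
dracut -f /mnt/newboot/initramfs-$(uname -r).img $(uname -r)    # rebuild the initramfs
grub2-mkconfig -o /mnt/newboot/grub2/grub.cfg                   # regenerate the grub config
grub2-install --boot-directory=/mnt/newboot /dev/sdX            # reinstall the bootloader against the new /boot
# ...then point the /boot entry in /etc/fstab at the new partition (by UUID) before rebooting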

I will post the rest of the build pictures today as well as some of the benchmarks I ran before and after. Keep in mind the disks are the same, but now they are on a SAS card with current(ish) gen CPUs and DDR4, instead of 7-year-old hardware with DDR3. It will be interesting to see if there are any differences.


Reflecting upon the case

Actually installing the parts into the Fractal Node 804 was a bit of an interesting challenge. I had heard other reviewers say similar things. One guy said "I would like to say it was an act of love... it was just an act". That's pretty close. I like the form factor and overall I like the case, but it was hard to work in. It took me far longer to install components than I would have thought. Part of that is because I don't have small hands, and part of it was probably being a novice with this case.

I did have a pleasant surprise with the case: its fan controller. Initially I did not ghetto a fan onto the heat sink, but while I was testing the PSUs I noticed the heat sink was too hot to touch. That made me uncomfortable. I wasn't totally surprised, as I had read other reviews of this SOC that mentioned the CPU getting "toasty". I did not have the OS installed, and the BIOS does not have a hardware monitor that I could find, so I am not sure exactly how hot it was, but I would guess in the 75-85C range.

I ghetto'd a 50mm "silent" fan I bought off NewEgg onto the board. The fan controller has a switch on the outside (rear) of the case with Low, Med and High settings. I have 2 X 120mm fans pulling from the front and exhausting over the CPU and out the back, as well as the 50mm fan on the fan controller. The temps are below:

Low: 51C IDLE
Med: 39C IDLE
High: 34C IDLE
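
Once the OS is up, the temps can be read directly rather than guessed at. The usual ways, assuming lm-sensors and ipmitool are installed (this Supermicro board should expose its sensors through the BMC):

Code:
sensors                                            # lm-sensors; run sensors-detect once beforehand
ipmitool sdr type Temperature                      # read the BMC's temperature sensors locally
ipmitool -I lanplus -H <bmc-ip> -U ADMIN sensor    # or over the network; BMC IP and user are placeholders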

Since I have 3 computers in the immediate area, it's hard to get an actual sense of the noise, but I can tell you that at Low I couldn't hear the fans above the other ambient noise. At Medium they were audible, but not more so than the other computers. I definitely noticed the 50mm fan on the High setting. I opted for Medium, since SuperMicro is obviously comfortable shipping this unit passively cooled, and I was only looking to make sure I didn't burn myself or any cables that might inadvertently touch the heatsink over time.

So far so good. When I get more time to play with it, I will post more info.

If anyone has any specific tests they want me to run, I am more than happy to oblige.
 