I thought I would post this in case anyone was going through the same decision process. A forewarning: this is not meant to be a cheap build. It will be cheaper than buying pre-built systems, but it is by no means a "budget" build.
PREAMBLE
So I work with OpenShift a lot; in fact, for the last year and a half, that and Ansible have pretty much been my world.
My current/previous setup has kind of evolved into its current incarnation. It's a server with the following:
Code:
System: Host: stratus-server Kernel: 3.10.0-514.2.2.el7.x86_64 x86_64 (64 bit)
Distro: CentOS Linux release 7.3.1611 (Core)
Machine: Device: desktop Mobo: ASUSTeK model: KGP(M)E-D16 v: Rev 1.xxG serial: AT1G22021900174
CPU1: AMD Opteron 6276 (-HT-MCP-) speed: 2300 MHz (max)
CPU2: AMD Opteron 6276 (-HT-MCP-) speed: 2300 MHz (max)
Network: Card-1: Intel 82574L Gigabit Network Connection driver: e1000e
Card-2: Intel 82574L Gigabit Network Connection driver: e1000e
Memory: Used/Total: 55390.1/96502.1MB
Drives: HDD Total Size: 14241.4GB (5.1% used)
ID-1: /dev/sda model: ST3000VN000 size: 3000.6GB
ID-2: /dev/sdb model: ST3000VN000 size: 3000.6GB
ID-3: /dev/sdc model: ST3000VN000 size: 3000.6GB
ID-4: /dev/sdd model: ST3000VN000 size: 3000.6GB
ID-5: /dev/sde model: Logical_Volume size: 1999.0GB (Mirrored in the raid card)
ID-6: /dev/sdg model: ADATA_SP580 size: 120.0GB
ID-7: /dev/sdf model: ADATA_SP580 size: 120.0GB
The drives are arranged in a ZFS pool of mirrors. Because it grew organically, these 4 drives host 20+ KVM guests, back an NFS server (hosting package caches for clients, download folders, etc.), and hold important backups. Given the age of the hardware, I decided not to invest further in the current system, as its hardware was released 6+ years ago.
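For anyone unfamiliar with the layout, a "pool of mirrors" is just a pool built from mirror vdevs (roughly RAID10). Here is a rough sketch of how a 4-disk pool like mine could be laid out; the pool name and device names are placeholders, not my actual config:
Code:
# Build a pool from two mirror vdevs (device names are placeholders)
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Separate datasets for VM images and the NFS share
zfs create tank/vms
zfs create tank/nfs
zfs set sharenfs=on tank/nfs

# Verify the layout
zpool status tank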
Software/OS considerations
While I fully endorse Red Hat and truly appreciate their products, I don't want to deal with ongoing support costs. I know they have the developer subscription as well, but I am disinclined to use it. So I need something that runs either KVM or VirtualBox (although KVM is the preference; see the quick check sketched after this list). I looked at the following:
FreeNAS
Open Media Vault
Ubuntu
Arch Linux
CentOS
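Whichever distro wins, the KVM checklist is the same everywhere. Here is a minimal sketch of verifying hardware support and getting libvirt running; the package names shown are the CentOS/RHEL ones, so adjust for other distros:
Code:
# Confirm the CPU exposes virtualization extensions (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Install KVM and libvirt (CentOS package names)
sudo yum install -y qemu-kvm libvirt virt-install
sudo systemctl enable --now libvirtd

# Sanity check: list defined guests
virsh list --all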
After installing the most recent version of FreeNAS (10 BETA 2), I decided to rule it out because I am less familiar with BSD and I have read a fair bit on the forums about people dropping to the CLI for this or that. That's fine, but if I am going to install something with a web UI and then drop to the CLI for routine tasks, I would probably just opt for a headless server.
The jury is still out for me on Open Media Vault. I tried out Open Media Vault 3, but it crashed and started throwing errors after I installed the ZFS plugin. I will update this as I continue to evaluate it.
Considering the new builds
I started considering my situation, which is quite a bit different now compared to half a decade ago. I run services like Plex (2 instances), Confluence, NextCloud, some Usenet-related processes, apt-mirror, an ARK server, an NWN server, a Yum repo, and so on. Given this, it's obvious that a single computer would not do.
I considered an actual NAS product like a QNAP, Synology, or the like. I ruled these out right away because if I am going to go with a dedicated NAS-style computer, I want to consolidate another 4 drives that I have in a RAID-Z1, which brings the total number of drives to 10 (4 in the RAID-Z1, 4 in the mirrored set, and 2 mirrored SSDs for the SLOG/L2ARC). Therefore, the off-the-shelf options were out.
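For reference, attaching those SSDs to an existing pool is only a couple of commands. A sketch, assuming the SSDs are split into one partition for the SLOG and one for the L2ARC (pool and device names are placeholders; note that ZFS mirrors the log vdev but only stripes cache devices):
Code:
# Mirrored SLOG (log vdev) from one partition on each SSD
zpool add tank log mirror /dev/disk/by-id/ssd0-part1 /dev/disk/by-id/ssd1-part1

# L2ARC (cache vdev) from the remaining partitions; cache devices cannot be mirrored
zpool add tank cache /dev/disk/by-id/ssd0-part2 /dev/disk/by-id/ssd1-part2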
I also wanted to start moving to 10GbE, because it's always a chicken-and-egg problem: I don't have any 10G NICs, so why get a switch that supports 10G? I also wanted to provide decent network access to the NAS from the new servers being considered.
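Once the new NICs and switch are in place, the quick way to see whether the links actually run at 10G is an iperf3 test between two boxes. A sketch (the server IP is a placeholder):
Code:
# On the NAS: run the server side
iperf3 -s

# On a client: push traffic for 30 seconds with 4 parallel streams
iperf3 -c 192.168.1.10 -t 30 -P 4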
Additionally, I wanted to reduce power consumption. The current AMD chips are around 115W TDP each. While I want some decent performance, I don't need leading edge; it doesn't take much to improve upon 6+ year old CPUs paired with spinning disks.
I considered Atom processors, but what I could find (in a similar price range) was coming to EOL soon. I needed enough power to saturate 10GbE, run a Plex server, and maybe one other VM. I also considered Celeron, i3, and i5 chips, but that limited me to boards where I would have to add in the 10GbE myself, which adds a significant expense if you want more than 1 port on the card. Additionally, while not required, I wanted ECC RAM for the ZFS build, which further pushed me towards "server" motherboards.
EDIT: Just caught this article over at The Register talking about the Atom C2000 series chips bricking devices... Glad I opted out of the Atom build.
So to sum up, the TL;DR of what I wanted:
1) Lower power consumption
2) Faster networking with 10GbE
3) Existing zpools to be reused (see the import sketch after this list)
4) Additional data protection with ECC
5) VMs and IOPS as the primary concern for the new setups
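On point 3, moving the existing zpools over should just be an export on the old server and an import on the new one. A rough sketch, assuming a pool named "tank":
Code:
# On the old server: unmount datasets and release the pool
zpool export tank

# On the new server, after moving the drives over:
zpool import          # lists pools found on attached disks
zpool import tank
zpool status tank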
Planned Layout
The switches in the layout below are the Asus XG-U2008. I don't have a rack, and since I live in an apartment I can't just toss everything in the basement either.
Choosing the parts for the NAS
I initially called up iXsystems and got some info from them. I have bought from them in the past, but this time I felt that, while they offered extremely competitive prices, I wasn't looking for the full-on Xeon system they recommended. After much searching and discussion with workmates, I ended up settling on the following parts (note the SoC board is passively cooled, so I plan on rigging up a 50mm or bigger fan onto the heatsink):
Case: Fractal Node 804 ($140)
SoC board: Supermicro X10SDV-4C-TLN2F-O ($560)
SATA/SAS card: Supermicro AOC-SAS2LP-MV8 (8 ports / 2 mini-SAS connectors that break out into 4 ports each) ($210)
2 x RAM: Kingston Value 16GB KVR24R17D4/16 ($144 per stick)
PSU: EVGA SuperNOVA 650 G1 120-G1-0650-XR ($110)
Case fans: Cooler Master R4-S2S-124K-GP ($17)
CPU fan: StarTech FAN5X1TX3 50mm ($17)
2 x breakout cables: CableDeconn Mini SAS 36P SFF-8087 to 4x SATA 7-pin, 90 degrees ($17 each)
M.2 SATA boot drive: Intel SSD 540 Series 120GB M.2 ($45)
Note: Prices are in CAD
Total(ish): $1425
Things have been ordered and I am just waiting for them to arrive.
I thought I would do an inventory of the parts that had arrived. Here is what the case looks like:
And the ram, breakout cables and the M.2 drive:
The fans, and SAS card
Here is a shot after I took off the heatsink and cleaned away the stock thermal paste. (I have read in several places that replacing the stock paste with higher quality stuff can reduce temps by up to 10°C.)
Here is the "fully assembled" front. You can see I have ghetto'd in both the SSDs and the fan on the CPU. The case technically has a spot for 2 SSDs, and I would have used it, except that after I installed the SSDs I couldn't plug them in. A combination of not having any straight SATA power connectors (they were all 90 degrees) and the fact that the SSDs sat only an inch or two from the edge of the case meant cabling them was actually impossible with the power supply and cables I had available. I had to flip them upside down and tie them to the case. This was problematic for the dust filter on the bottom, but I eventually got that back on as well.
Finally, some shots of the back side of the case. I had to bend the power cables because of the small amount of clearance between the PSU and the drives themselves. Not a big deal, but worth noting.