
Project: Rackmount Overkill

Worked on getting the cameras up today. I had to order special CAT6a ends because the Hitachi cable they use is GIGANTIC. I've never seen ethernet this thick: the outer diameter is 1/3" and it uses 23 AWG wire inside. The ends have a special "sled" to line up the wires, since they don't sit in the same plane within the connector (each one is above or below its neighbors). The first end I put on tested OK, but the switch said there was no device connected. Checked again and one of the wires was flaky. Put a new end on it, and it seems to be working OK.


However, it wasn't smooth sailing the whole way. I started with the garage camera first, which worked great. Once I hooked up the front door camera and enabled PoE, the wireless AP and the PoE switch dropped instantly. Went down to the rack and saw the switch was restarting. It would show a pulsing white light (booting), followed by a very short blue pulse (blue is running OK), before the light went out and back to flashing white, over and over. The switch was restarting constantly. I unplugged the front door camera's wire and everything came up. Plugged the front door back in, and before the cable could even click into the ethernet port, BAM, RESTART.


At this point, I figured there was a short somewhere in the wires, and was really hoping it wasn't in the walls. However, it was a little confusing, because if a cable were shorted out (someone put a nail or screw through it), my testing tool should either show multiple wires lighting up at the same time or show a wire as dim or completely out. Both the front door and garage cables were testing perfectly.


Swapped the front door cable with the garage cable, and it worked. This told me each cable individually was fine, but when both were connected, the switch would restart. Got a hunch that something was touching at the patch panel, so I disconnected everything and ripped out the patch panel. After pulling the keystone for the front door, I noticed the wires on one side were sticking out slightly.


With how the keystones sit in the panel, the untrimmed wire ends of one keystone can touch those of the next one over if they are left too long. I trimmed the ends, put everything back together, and it all works like magic. I did this with all the keystones, and my network seems to be working a lot better. The wireless AP wasn't connecting at gigabit speeds before, but it is now that I've fixed them.


Ugh.


LOOK AT THIS THING. That's CAT6 on the left, which is already pretty big! The stuff on the right is Hitachi 6A Supra. The upside is that it's rated for 10 gig at 100 m.
IMG_20180810_162033.jpg


Drilled some holes for the camera.
IMG_20180810_135155.jpg


Terminated the cable and put the waterproof seal on it, which somehow stretches over the connector without breaking.
IMG_20180810_135156.jpg


(Poorly) put some sealer on the mount to keep it fairly waterproof.
IMG_20180810_135905.jpg




Bolted the mount up and pushed the extra cable back in the wall.
IMG_20180810_140454.jpg




Before mounting the whole camera up, I did a system test to make sure the thing works.
IMG_20180810_144334.jpg




All good, so put the whole thing up and aimed it.
IMG_20180810_154108.jpg
 
love the new place and what you're doing with it.. love the subaru too, although i'm an outback guy myself. what does the pfsense build you put together consume for power? i put together one myself using a dell optiplex 990 sff; even with 2 4-port nics it still basically sips power. just swapped over my drives to the new file server, which is a lenovo thinkserver ts430. one of these years i will get it all in the same room.

love the POE cameras too. i picked up a couple of arlo q plus POE cameras. only finicky when the power glitches.
 
I think I measured it around 15w or so. It's pretty low.
 
Been a while since an update here. I've got a ton done on the house, but only a little bit related to servers or the network. I installed pfBlockerNG on pfSense for network-wide ad blocking, which seems to work well.

Also, I resolved an issue with the network coming up after a restart or after connecting a cable. There was a 20-30 second delay where Windows would sit "identifying" the network, on multiple systems, but not on wireless. The network topology hadn't changed since the apartment, so I was confused why there was suddenly an issue. Then I realized there was one difference: the Ubiquiti PoE switch. I had it hooked up at the apartment, but I never restarted my computer and didn't use the HTPC, so I never noticed the problem. The delay is caused by the switch's STP (Spanning Tree Protocol) feature, which delays passing traffic on a new connection for 20-30 seconds to make sure there is no loop. On the PowerConnect switch, I had to enable the "Edge Port" option ("FastPort" on newer models) to skip this check on a new connection.
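For anyone hitting the same delay, it's a per-port setting from the switch CLI. This is a rough sketch rather than the exact commands I ran; the syntax varies by model and firmware, and the port number here is just an example:

console# configure
console(config)# interface ethernet 1/g24
console(config-if)# spanning-tree portfast
console(config-if)# exit

Marking a port as an edge port tells STP that nothing behind it can form a loop, so the switch starts forwarding immediately instead of sitting in the listening/learning states.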

Finally, I got a UPS unit for the modem and put it on a new shelf.

IMG_20181124_142049.jpg
 

about time we got an update! lookin good
 
Been a long time since I've updated this thread, and there is movement now because of a lack of movement (fast disk I/O) on the server. When I migrated away from ESX a couple years ago, I chose Hyper-V since I have Server 2016 licenses. Instead of using hardware RAID, I decided to go with Storage Spaces, since it was getting favorable reviews. After an initial test, it seemed to work well, and I migrated the eight 3TB WD Reds over. Since switching, I've only had one instance where a couple disks dropped off for no reason (nothing in the logs), and re-enabling them brought the array back online. Performance was OK, but it wasn't great: around 60 MB/sec writing, but full network speed reading. Since I don't write that often, this wasn't a problem.
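I didn't save the exact commands from back then, but setting up a pool like this in PowerShell looks roughly like the following. The pool and virtual disk names are placeholders, and I'm showing a parity layout as an example (it would also explain the write speeds):

# Grab every disk that is eligible for pooling and create the pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "RedPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Carve a parity virtual disk out of the whole pool
New-VirtualDisk -StoragePoolFriendlyName "RedPool" -FriendlyName "Data" -ResiliencySettingName Parity -UseMaximumSize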

However, the speed of the array has declined significantly over the years. Where it would write all day at 60 MB/sec, it now averages ~22-25 MB/sec, which is terrible. If anything else is using the disks (cameras recording, etc.), speed takes an even bigger hit, to the point where Unifi complains about not being able to save camera video segments fast enough. Read speeds are still OK, but I can tell it is starting to have problems when multiple things are accessing the share.

2019-10-10.png

People online don't seem to understand what makes the array performant, and give conflicting answers. I have spent enough time troubleshooting this issue that I'm going to just give up and go back to ZFS on CentOS.

Researching hypervisors again: Hyper-V does allow PCIe passthrough now (finally), but it is only supported for Windows guest operating systems. Citrix bought XenServer, then killed it earlier this year and re-branded it as Citrix Hypervisor, which costs money. VMware still has the free vSphere Hypervisor (previously ESX), which I'm certain will work for what I need. I'm going to start with VMware, since it worked well for me in the past.

The only problem is getting the data into a temporary location before switching to ZFS. I'm up to 10 TB used, after cleaning up files. Luckily, Microsoft has added PowerShell commands to retire disks and migrate data off them. Data is migrating off a disk now, and I'm hoping I don't run into trouble kicking it out of the pool. Once the first disk is empty, I'll remove it from the server, verify the disk is healthy, then fill it as much as I can. Repeat until the share is empty.
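For anyone doing the same, the flow looks something like this; the friendly names are placeholders for whatever Get-PhysicalDisk reports on your system:

# Mark the disk as retired so Storage Spaces stops allocating new data to it
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired

# Repair the virtual disk, which moves the retired disk's data onto the others
Repair-VirtualDisk -FriendlyName "Data"

# Watch the job; once it finishes, the empty disk can leave the pool
Get-StorageJob
Remove-PhysicalDisk -StoragePoolFriendlyName "RedPool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk3")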

I'll also need to migrate the camera video storage to another location, so I don't cause an interruption in recordings.

2019-10-10 23_58_12-lucid - Remote Desktop Connection.png

The idea is to leave Lucid running Server 2016, and switch Awk to VMware.

Wish me luck.



TLDR: Storage Spaces is slow, don't use it.
 
Luckily, Microsoft had the foresight to allow disk removal, and it works. The first WD Red was emptied overnight, and the virtual disk was able to repair itself successfully. This morning I was able to remove the physical disk from the virtual disk, format it, mount it, and start moving data. Only 9 TB left to go.

Camera and TFS storage has been moved to a new partition on the local SSD array, which I created by shrinking the main volume. This should calm down the Unifi software, which was complaining constantly about the array being slow.
 
All data is migrated off the array. I had CentOS 8 installed a few days ago and ready to go. Shut down both servers, swapped the RAID card over to awk, and fired everything up, and the disks weren't detected in the virtual machine. After a bit of research, it turns out the mpt2sas driver is deprecated in CentOS/RHEL 8, and mpt3sas no longer supports LSI SAS2008 cards. I tried the mptsas driver, but it refused to identify the disks. Looks like I'm using CentOS 7.
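If anyone wants to check a card before committing to an install, a couple of quick commands tell the story. A rough sketch, assuming the card shows up as SAS2008-based like mine:

lspci -nn | grep -i lsi          # confirm the controller is SAS2008-based
modinfo mpt2sas                  # errors out on CentOS 8; the module is gone
modprobe mpt3sas; dmesg | tail   # the module loads but never claims the card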

I had a supervisor when I switched the card to awk.
IMG_20191013_120755.jpg
 
Most of the data is copied back over and all WD Reds are in the ZFS array. I have two local hard drives copying to the array, I'm reading files from the same array, and ZFS is handling it gracefully compared to Windows. Besides a virtual machine absolutely refusing to mount a samba share, the migration has been fairly uneventful.

2019-10-14 20_50_30-root@vm-fileserver_~.png

I added a log device (SLOG, 4 GB) to hold the ZIL, plus a cache device (L2ARC) on a 120 GB Samsung 840 Evo, which should help performance. My second Samsung Evo is dead. Need to see if I can RMA it.
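Adding those two devices to the pool looks like this; the pool name and device paths are placeholders, since I didn't note the exact ones:

# 4 GB partition as the log device (SLOG), which holds the ZIL
zpool add tank log /dev/disk/by-id/ata-Samsung_840_EVO-part1

# Second device as an L2ARC read cache
zpool add tank cache /dev/disk/by-id/ata-Samsung_840_EVO-part2

# Verify both show up under "logs" and "cache"
zpool status tank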
 
Long time without an update. Everything has been working great. However, the disks are 10 years old at this point, and the R710 servers are even older.

I've always been nervous about updating pfSense because of the trouble I had installing the OS on the system. It really hasn't been updated since I installed it, which isn't a great idea. I've always wanted a Unifi Dream Machine, but couldn't justify the cost.

Recently saw someone selling theirs, and decided to buy it.

PXL_20230629_172912684.jpg
It is the bottom white box in the picture.

Installing the UDM was quick and easy, with minimal setup. While it doesn't have anywhere near the options pfSense does, I didn't lose anything but the network-wide ad blocking. It does have an ad blocker built in, but it isn't quite as good and you don't have control over it.

I can manage the router from anywhere, even when I'm not at home. It also manages the cameras for me, saving footage to an HDD I installed, and lets me view them from anywhere. That's a feature which was removed from the server version of the Unifi software years ago.

2023-06-29 12_34_55-UniFi Network - Euphoria.png

The only things left running on Lucid are a print server and the SQL server for Kodi. I can easily migrate those to virtual machines on AWK, then shut down Lucid and probably part it out.

I'll also start to consider replacing AWK. Each R710 server idles at about 150 W, and with how old the hardware is, I can't justify keeping it around for much longer. I have no idea if I want to go with a newer rackmount server or take an alternate route.
 