
Project: Rackmount Overkill

[Images: downstairs_done_1.JPG, downstairs_done_2.JPG]

Then I got everything wired up and running :D

[Images: hp_rack_running_1.JPG through hp_rack_running_7.JPG]
 
So what is everything going to be used for?
Going from top to bottom:

Switch (in use)
Compaq DL360 (Firewall/router)
Compaq DL360 (Spare)
Cisco router (Not in use)
Norco 470 (Empty, future HTPC)
Norco 4020 (File server)
External drive/UPS
Dell PowerEdge 2550 (Test server)
Unbranded P3 server (Untested)
Compaq R3000 (Future UPS)

That is only 3 servers offline, not too bad :p
 
Not sure at the moment. I may build/buy more computers for it. I've always wanted a cluster; run a bunch of 2U/3U servers or something.
 
Looks good! I want to get a rack now; jealous of your house :)

If you don't mind me asking, how much did it cost? I'm paying about 1600 a month for a house that could fit in your basement :)
 
We didn't actually buy the house. It's a townhome, which is a row of "houses" built side-by-side. The only difference is I don't have a lawn, and they take care of it. We're paying ~680/month for rent. That's only 40 more than our old apartment, so I'm definitely not complaining.
 
You might want to pick up a spare PSU for those 360s. It got to the point when I was a DC admin that I would remove the good fans and use them to replace failed ones on those PSUs. If a fan fails, the server won't boot, period. Other than that, those things are bulletproof.
 
I've got 6 spares at the moment :p. The firewall has been running 24/7 since I picked it up and has never had an issue. I'm running the drives in RAID 1 and have a total of 3 spares (second server + actual spare). Along with that, the firewall emails me its configuration file weekly, so even if the drives exploded into a million pieces and the server caught fire, I could just install Astaro and point it to that file.
 

That's the nice thing about older hardware: it's cheap and plentiful. Also, seeing your setup makes me wish I had a basement.
 
Got some more work done on the file server last night. I had attempted to install KVM/Qemu previously, and while it worked, it caused huge headaches. After installing the VM software, I botched the bridged connection (which you have to set up on the server itself), and that made the server slow to respond. When I went to SSH into the box, it would ask for the username and then sit there for 30 seconds before asking for the password. Normally that wouldn't be an issue, except the login timeout was set to 30 seconds, so you had ~2 seconds to type your password before the server disconnected you. Raising or lowering the sshd time limit (the easy fix) just moved the delay along with it; it was a bit silly. So I had to write a script to fix the network every time the server started, or run it myself. Again, not a big deal, except it added an extra 3 minutes to the boot time. I ignored it for a bit, but a bunch of free time got dropped in my lap, so I started over.
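For anyone who hits the same SSH delay, the knobs for it live in /etc/ssh/sshd_config. This is just a sketch of what I would try, assuming the hang was sshd doing reverse-DNS lookups against the broken bridge (I never confirmed that):

Code:
    # /etc/ssh/sshd_config -- only the relevant lines
    UseDNS no            # skip reverse-DNS lookups that can stall the password prompt when the network is broken
    LoginGraceTime 120   # give yourself more than ~2 usable seconds to finish logging in

    # then restart sshd (CentOS):
    # service sshd restart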

With CentOS 5.5 released on May 15th, I planned to reformat the server this weekend. So I backed up the /etc and /home folders to the RAID array and shut down the server. I hooked one of my massive 25ft VGA cables to my monitor, since I'm close enough to do that, and found out the video card I have is DVI-D only rather than DVI-I. The problems? The VGA out on the video card doesn't work, and the DVI->VGA converter has 4 extra analog pins, so it won't fit in a DVI-D port. I continued to bash my face on the cable for an hour or so before abandoning this. I grabbed one of my expensive/nice DVI cables and used that instead, sacrificing the use of my monitor in the process. I hate single screens, but I can see my server!
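The backup itself was nothing fancy; roughly this, with the destination path being a stand-in for wherever the array is actually mounted:

Code:
    # copy configs and user data onto the RAID array before wiping the OS drive
    mkdir -p /mnt/array/pre-reinstall-backup        # /mnt/array is a made-up mount point
    rsync -a /etc /home /mnt/array/pre-reinstall-backup/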

So, in a slightly better mood, I popped in my CentOS NetInstall disc and noticed that my RAID card was telling me Write Back is disabled because the battery is missing/charging/on fire. I jumped into the configuration utility and started checking it out. No matter what cable or battery I use, it always reports the battery as missing/charging/in space/on fire. I didn't want to tinker with it too much since my mood was tanking quickly. At least I know what was causing my "slower" write speeds; I have a project to work on in the future. To my great satisfaction, the install went smoothly and quickly. I added the RPMForge repository to get some more packages that I wanted (htop, etc.).
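If anyone wants to poke at the same battery problem from inside the OS instead of the card's BIOS, LSI's MegaCli tool talks to the PERC 5/i. Commands are from memory and assume MegaCli is installed, so treat this as a rough sketch:

Code:
    # ask the controller what it thinks of the battery
    MegaCli -AdpBbuCmd -GetBbuStatus -aALL
    # show the current cache policy (Write Back vs Write Through) on the logical drives
    MegaCli -LDGetProp -Cache -LAll -aALL
    # (there is also a ForcedWB setting that enables Write Back with no battery,
    #  but that risks losing the cache contents on a power cut)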

I updated the server and got a bunch of my programs installed again. Since I had my /etc backed up on the RAID array, it was easy to grab configurations and apply them to the newer versions of the programs. Got users created, Samba up and running, rsync functioning, and a few others. The main reason I went to CentOS was the VMs (and because it was shiny/new). I downloaded VMware, installed it, and it works; great! Created a virtual machine and grrrrrraaaahasssssshhhh*crash*. VMware crashes, seriously? Looking around, it seems a glibc update released many months ago causes VMware to crash, and it still hasn't been fixed by the VMware staff. Good job guys, I'm glad you are able to keep your product up to date and functioning flawlessly. Luckily, there is a workaround: grab the glibc packages from CentOS 5.3 (yes, five point friggin THREE) and modify the VMware service to use them. After a reboot and a configuration run, it works. I like VMware, and the setup was a lot easier to get running (including the workaround) compared to KVM/Qemu; but come on guys, it would have been so much easier if you had fixed your program the first time and kept it updated. </rant>
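The Samba side is only a few lines once the users exist. Here's the shape of it; the share name, path, and user are placeholders rather than what I actually used:

Code:
    # /etc/samba/smb.conf -- minimal share (names/paths are examples)
    [global]
        workgroup = WORKGROUP
        security = user

    [storage]
        path = /mnt/array/storage
        valid users = myuser
        read only = no

    # smbpasswd -a myuser    (set the Samba password for the user)
    # service smb restart    (CentOS 5 service name)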

Beyond that, the server is running flawlessly and working a heck of a lot better than it was 24 hours ago. I was doing some testing on Saturday (before the reinstall) to see what throughput I'm able to get to and from the server. I ran some quick tests, changed the MTU to 9000, then ran some more. I got a nice bump in speeds; write speed is still low because of the "missing battery" on the RAID controller.

No mods
-------
Read - 66-72 MB/s
Write - 52-59 MB/s

MTU=9000
--------
Read - 78-82 MB/s
Write - 60-63 MB/s

I'd really like to get that battery working to see what speeds I can get with the RAID card able to go all out.
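For reference, the MTU change itself is quick on CentOS 5; the switch and the client NICs have to support jumbo frames too, and eth0 below is just an example interface name:

Code:
    # try it live (reverts on reboot)
    ifconfig eth0 mtu 9000
    # make it stick across reboots
    echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth0
    service network restart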

I also have a job for the Dell PowerEdge server. A friend of mine makes games for BYOND in his spare time and wanted to test out how everything works. I offered him full access to one of my servers since it isn't in use. I got that machine up and running with a spare copy of Windows Server 2003 (yes, it actually is legit) so he can use it. So far, beyond the firewall giving me hell, everything seems to be functioning fine.
 
I'm not sure I follow. You think the battery or the card is dead? The card has been working flawlessly since I got my Hitachi drives, so that isn't having issues. I tried 3 different batteries which are supposed to be "new". I tried all combinations of the pins, but I can't get a voltage reading from any of them, which doesn't make sense.
 

So you believe the 3 batteries are dead and not the one PERC 5/i? Troubleshooting alone says the card is the part that can't use batteries... and plenty of devices can lose one function while everything else keeps working; think of it like your transmission losing reverse. In this case, the PERC 5/i doesn't seem to accept batteries anymore. :(

Unless you really do believe all 3 batteries are dead.
 
Well, the only way I could test the batteries was to check the voltage. I couldn't find a pinout diagram for them, so I had to guess. I grounded the multimeter's negative lead and checked each pin; no voltage at all. I don't see how a battery could report no voltage unless "I'm doing it wrong", but there are no other pins to check.

I guess I could swap it over to the other RAID controller, but it might be a few days before I get around to testing that.
 
How long did you leave the battery hooked up? I know when I had the PERC 5/E, the battery needed to charge for a while.
The first one? Months. The others? At least 30 minutes before powering down. I'd be really disappointed if it was the card; that is the hardest thing for me to replace.
 