
Project: Rackmount Overkill

Check out my review of the Perc 5/i on the front page of Overclockers.com (link). It is a very nice card for the price. I might be able to hook you up if you have classifieds access...
 
Cheers for the advice Thid :thup: I do have classifieds access, just not much $$$ right now. I pulled the trigger on a 4-port hardware RAID card off eBay about a week ago. I'm glad the seller let me cancel the purchase after I found out it had a 2TB array limit.
 
Well, I would have had the Dell PowerConnect up and running, but the one they sent me had a capacitor jumping around freely inside and a slew of blown caps. I promptly sent that one back. :-/

Still searching for another one.
 
Been really lazy as of late, but I have been reading some Bash guides to try to make my script(s) better. Here is another large revision of my backup script. Instead of a bunch of "if" statements checking whether each drive is mounted, the mount points now sit in an array, so you can easily add a drive to the list and the script will compensate for that. For example, in the code below, you see this:

MOUNT[0]=/mnt/hitachi
MOUNT[1]=/mnt/green

If you simply add a path below that, the script will check as many entries as you add. This would work:

MOUNT[0]=/mnt/hitachi
MOUNT[1]=/mnt/green
MOUNT[2]=/mnt/ocforums
MOUNT[3]=/mnt/something

So on and so forth. This greatly lowers the editing time needed to check mount points, since I just loop over the array instead of editing separate variables. Of course, this script is constantly being updated with new features and with changes to make it safer, more stable, faster and easier to read/modify.

Code:
#!/bin/bash
#This script contains all of the rsync commands that are run daily on
#Thideras' file server.  The main use for this script is to backup to
#alternate locations, such as an external drive or remote location.

LOG=/mnt/hitachi/logs/rsync/rsync-$(date +%m-%d-%Y).log

#Give the date and time started
echo ----------LOG START---------- >>$LOG
echo This log started on `date +%H`:`date +%M`:`date +%S` >>$LOG
echo ----------LOG START---------- >>$LOG
echo >>$LOG

#Set the current time
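#The 10# prefix forces base-10 in the arithmetic later on, so values like 08 or 09 are not treated as invalid octal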
STARTHOUR=10#`date +%H`
STARTMIN=10#`date +%M`
STARTSEC=10#`date +%S`

#============================================================================#
#Check drive mounts
#============================================================================#
#Test to see if the drive is mounted and available
#To add a drive, simply create a new line with an incremented variable
#If a drive is not mounted, exit for data safety reasons

MOUNT[0]=/mnt/hitachi
MOUNT[1]=/mnt/green

#Get the number of variables in the array
n=${#MOUNT[*]}

#Loop through each drive mapping
for ((i=0;i<n;i++)); do
        if [ `mount | grep -c "${MOUNT[i]}"` -eq "1" ]; then
                echo "${MOUNT[i]} is mounted." >>$LOG
        else
                echo "WARNING: ${MOUNT[i]} is not mapped." >>$LOG
                echo "If this path is incorrect, please correct it." >>$LOG
                echo "Script is stopping to prevent damage."  >>$LOG
                echo >>$LOG
                exit 2
        fi
done

echo "All drives are mounted correctly." >>$LOG
echo >>$LOG

#============================================================================#
#End drive check
#============================================================================#






#============================================================================#
#This section is the daily rsync section
#============================================================================#

#Set the locations
COREY=/mnt/hitachi/rsync/corey/
COREY_COPYTO=/mnt/green/rsync_copy/corey

#Copy the files
rsync -av --delete $COREY $COREY_COPYTO >>$LOG

#============================================================================#
#End daily rsync section
#============================================================================#






#============================================================================#
#This is the archive function, runs on Monday only
#============================================================================#

echo >>$LOG
echo ----------------------------- >>$LOG
echo Weekly archive start >>$LOG
echo ----------------------------- >>$LOG
echo >>$LOG

if [ `date +%u` -eq "1" ]; then
        echo Starting weekly archive >>$LOG

        #Set the backup source locations
        COREY1="/mnt/hitachi/rsync/corey/World of Warcraft"

        #Set the backup destination locations
        COREY1_GZIP="/mnt/hitachi/backups/Corey_WoW/WoW-$(date +%m-%d-%Y).tar"
        COREY2_GZIP="/mnt/green/backups/corey/WoW-$(date +%m-%d-%Y).tar"

        #Tar the files
        tar -cf "$COREY1_GZIP" "$COREY1" >>$LOG

        #Copy the files to the green drive
        cp "$COREY1_GZIP" "$COREY2_GZIP" >>$LOG

        echo Weekly archive complete! >>$LOG
else
        echo Weekly archive is not set to run today >>$LOG
fi

echo >>$LOG
echo ----------------------------- >>$LOG
echo Weekly archive end >>$LOG
echo ----------------------------- >>$LOG


#============================================================================#
#End weekly archive
#============================================================================#


#Get the current time in minutes and seconds, assign to variables
ENDHOUR=10#`date +%H`
ENDMIN=10#`date +%M`
ENDSEC=10#`date +%S`

#Compute the difference between the start and end time
HOUR=$(($ENDHOUR - $STARTHOUR))
MIN=$(($ENDMIN - $STARTMIN))
SEC=$(($ENDSEC - $STARTSEC))

#If the time goes under 0, correct the time
if [ $SEC -lt "0" ]
   then
        SEC=$(($SEC + 60))
        MIN=$(($MIN - 1))
fi

if [ $MIN -lt "0" ]
   then
        MIN=$(($MIN + 60))
        HOUR=$(($HOUR - 1))
fi

if [ $HOUR -lt "0" ]
   then
        HOUR=$(($HOUR + 24))
fi

#Add the end date and time
echo >>$LOG
echo -----------LOG END----------- >>$LOG
echo This log ended on `date +%H`:`date +%M`:`date +%S` >>$LOG
echo Runtime: "$HOUR"h "$MIN"m "$SEC"s >>$LOG
echo -----------LOG END----------- >>$LOG

exit
Which outputs something like this when drives are mounted:

Code:
----------LOG START----------
This log started on 14:47:04
----------LOG START----------

/mnt/hitachi is mounted.
/mnt/green is mounted.
All drives are mounted correctly.

building file list ... done

sent 2985514 bytes  received 20 bytes  1194213.60 bytes/sec
total size is 178450622473  speedup is 59771.76

-----------------------------
Weekly archive start
-----------------------------

Weekly archive is not set to run today

-----------------------------
Weekly archive end
-----------------------------

-----------LOG END-----------
This log ended on 14:47:06
Runtime: 0h 0m 2s
-----------LOG END-----------
And outputs this when a drive is not mounted:

Code:
----------LOG START----------
This log started on 14:59:27
----------LOG START----------

/mnt/hitachi is mounted.
/mnt/green is mounted.
WARNING: /mnt/ohgod is not mapped.
If this path is incorrect, please correct it.
Script is stopping to prevent damage.
 
Nope, hasn't been my primary project yet. Biggest issue with it is designing the hard drive retention system. :-/
 
I may be changing up the file server soon. I've been wanting to do a few things with it, but I've been considering taking a slightly different route. This will probably happen all at once, since that will be far easier.

Drives - I still use my seven Hitachi drives in RAID 5 plus a WD as a hot spare. Since the WD would be bad to have in the array (TLER), I'd like to pull it out and either sell it or use it in my desktop computer. I also have an SSD (30GB OCZ) in the server, but I may just sell that before I even use it. In addition, my 1TB drives are starting to get a little "old" and I would love to upgrade to 2-3TB drives, especially when I'm seeing deals (2TB for $60...). Whether I'll keep the old drives or get rid of them, I don't know; I could use them to test the software RAID project I've been neglecting. Once the drives are decided, I need to set up the VMs properly so they don't interfere with data storage. If I fire up 3 VMs at once, music/video will sometimes skip; this is unacceptable and poor planning on my part.

Dust - It has been a long time since I've cleaned it and the case is filthy. I need to tear the whole thing apart to get all the dust out. Might as well do it when I need to change the RAID array and the main operating system drive.

Controllers - I'd like to get rid of the Perc 5/i controllers, not because they are bad or slow, but because software RAID does what I want for less money. I would then need to find a way to run 20+ SATA drives, since I only have 6 ports on board. Guess I need to research here.
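Just to get my head around the software side, here is a rough sketch of what the mdadm half might look like; the device names, drive count and mount point are placeholders, not my actual layout:

Code:
#Build a RAID 5 array out of four drives (device names are placeholders)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

#Add a fifth drive as a hot spare
mdadm --add /dev/md0 /dev/sdf

#Record the array so it assembles on boot
mdadm --detail --scan >> /etc/mdadm.conf

#Put a filesystem on it and mount it where the hardware array used to live
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/hitachi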

CPU/Board - I'm still running a Phenom X4 2.6GHz. This is not the amazing Phenom II that overclocked well; this one is absolute rubbish. When I overclocked it a few years back, I got a whopping 200MHz out of it. I would like to go to a dual-socket system so it can handle running more VMs at the same time, but I'm not sure that will be feasible.
 
You will need port multipliers. AFAIK they do not work yet with consumer-level Intel chipsets, but JMicron and Marvell should both support them.

Depending on your mobo choice, you might be able to add an easy 8 or 16 drives off just two ports.
 
Or SATA cards that use PCIe, which is the preferred method.
 
Or SATA cards that use PCIe, which is the preferred method.

If you are stuttering from bandwidth limitations, then yes, a true RAID card will be the only option. If you are stuttering from access limitations, though, a proper FIS-based port multiplier should be fine (cheaper solutions use command-based switching).

Maximum bandwidth will still be that of a single SATA port, so a big x4 or x8 PCIe card would be overkill.

Also, your stripe size will play an important part in stuttering. Smaller stripe sizes should offer better performance for OS-type files, but require a lot more processing. Monitor that and see where you are at. (This is part of why HW RAID is preferred over software... you are trading the cost of the RAID card for needing a more powerful system with higher operating costs.)

CPU usage will affect networked drive performance. I had problems with Java apps running on my server sucking up all available resources; it caused stuttering on streamed files as well.
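If you want to see which one it is, just watch the box during a big copy; something as simple as this will do (both tools come from the sysstat package):

Code:
#Per-disk utilization and throughput, refreshed every 5 seconds
iostat -x 5

#CPU load per core, refreshed every 5 seconds
mpstat -P ALL 5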
 
I'm not sure what stuttering we are talking about. The server is capable of saturating the network cable with a single hard drive and its hands tied behind its back. Running a SATA card (not RAID) on the PCIe bus just gives me a ton more bandwidth internally than a port multiplier.
 
If I fire up 3 VMs at once, music/video will sometimes skip; this is unacceptable and poor planning on my part.

This is what I was talking about.

Need to isolate whether it is a CPU or bandwidth issue causing the skipping. Port multipliers will not help with bandwidth, but will increase storage. :rock:
 
Oh, sorry; totally forgot about that. It was more of an example than something I actually encounter. One that I do hit is moving gigabytes of data out to the server and having my music skip. That really irks me. But again, I don't see this very often. I need to isolate the VM storage files from my actual file storage.
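In the meantime, I may just throttle the big copies so streaming reads still get serviced; something along these lines should work, though the limit value is a guess and the source path is only an example:

Code:
#Cap the transfer at roughly 20 MB/s (--bwlimit is in KB/s)
rsync -av --bwlimit=20000 /mnt/source/ /mnt/hitachi/rsync/corey/

#Or run the copy at idle I/O priority so other reads win (ionice is part of util-linux)
ionice -c3 rsync -av /mnt/source/ /mnt/hitachi/rsync/corey/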

It does that because the array is RAID 5 and doesn't do well with writes. Just too many operations at once.
 
Finally got the rack in the basement yesterday. I should be able to put the servers back in shortly.
 
It looks nowhere near as good as it did before. I have to leave the switch out of the rack and I don't have the UPS/Sencore/Dell in yet.
 
Got a bit of an update and some thoughts. No hardware changes as of yet, but I'll probably have some shortly (within a few months).

Hardware:
I need to get an external drive enclosure and I'm waiting for one on Newegg to be in stock. Basically, I need one that is good, can house multiple drives and can do "JBOD" so that the server can handle the RAID. I don't trust those enclosures for anything more than physically holding a drive, keeping it powered, keeping it cool and passing data to and from the enclosure. In addition to that, the current 1TB Hitachi drives I'm using are starting to become quite dated and I question how long they will run. I received them back on April 23, 2009 and that is just a few days off their two-year birthday! I've been itching to upgrade the drives, but I'm not sure on the configuration I want. If I'm going to be moving to my 30/45-drive server, then I might as well wait until the backplanes are installed and the case is designed. Otherwise, I need to decide if I want to stay with hardware RAID or take the cheap way out and do software RAID (or a combination thereof). I'd love to fill the server with 3TB drives, but I would be absolutely lost on how to fill them.

I've also considered upgrading the computing power and memory of the file server, but I'm not sure what I'd like to use. I'm waiting to see how Bulldozer pans out before I upgrade my main computer and may consider using the same/similar hardware in the server.

Finally, I'm still trying to figure out if I should use the 30GB OCZ SSD that I picked up many months ago. I'm on the fence; I'm not sure how reliable it will be compared to my 2.5" enterprise SAS drives. It will certainly be faster, which would make a huge difference.



Software:
This is the fun one and I've been planning for a few weeks what I'd like to do. As I type this, I'm downloading CentOS 5.6, which was released since the last time I looked. Sneaky! Here is a picture of the VMs I just created, to give a hint as to what I'm doing.

web_server_vmware.png


I blurred out the old, unrelated ones. So, five new virtual machines, Thideras; what are you up to? I'll start from the top and work my way down, since they are conveniently labeled in an order that makes sense! I've been pondering the use of a RADIUS server as a test and possibly as a final setup. I think that will go fairly quickly compared to the others in the list.

The next two are exactly what they say they are: web servers. CentOS 5.6 will be the operating system of choice and I'm not sure exactly what I'll use as the software (Apache, Litespeed, etc), but I'm sure I'll try more than one. I may add more for a test. I'm also considering linking the folder structure to a shared backend (SAN, simple file sharing, etc) so the servers share the same files and content does not need to be updated on each one manually.
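The simplest version of that shared backend I can picture is an NFS export mounted as the web root on each web server; a rough sketch, with made-up hostnames and paths:

Code:
#On the storage backend: export the web root (add to /etc/exports, then reload the export table)
echo "/srv/www 192.168.1.0/24(ro,sync)" >> /etc/exports
exportfs -ra

#On each web server: mount the shared web root
mount -t nfs storage01:/srv/www /var/www/html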

The "Database Backend" entry should also be fairly obvious. I intend to keep the servers completely separate. This not only allows me to focus on security, but it allows me to have multiple web servers that can display the same content without having to sync databases. I may also add a backup database server to see how load balancing and fail over works.

Finally, the "Reverse Proxy" entry should give away what my final goal is. Not only do I want to setup a server, I want to be able to setup a cluster of web servers behind a single reverse proxy for load balancing and fail-over. I should be able to add any number of web servers and have it seem like I'm connecting to the same one. I know this is used on massive websites (Google, Microsoft, etc), so it intrigues me. With this virtual machine configured properly, I could fan out web servers between multiple virtual machines and hardware systems. So, instead of using a single slow Pentium 3 based computer for a web server, I can use a ton of them! *evil laugh*

-=-=-=-=-=-=-=-

But why do all this, Thideras? It seems a bit overkill, don't you think?

Of course! The main reason for a single web server is simple: I'm starting to teach myself web technologies (PHP, JavaScript, MySQL, etc) and I need a testbed that I know I can trash without having to worry about my website being vulnerable or broken. Making constant backups is good, but running scripts that could damage everything is not something to do on a production-level server. I just took this to a much higher level; adding more web servers, common database backends and reverse proxies.

The download for CentOS 5.6 is going to take another 33+ minutes, so I'm not sure how far I'll get today or even this weekend, for that matter.
 
Well, I think this is the weekend where I end up redoing my server. I updated the file server and now VMware is completely destroyed. Since they haven't updated it since 2009 and refuse to put out any sort of fix for it, we are left hacking a fix together and hoping it doesn't explode. Unfortunately, it did, and I don't think I can recover without downgrading the dependencies; something I refuse to do. I might as well reinstall the operating system and use something else.

Haven't decided if I want something like ESXi or KVM.
 