
Project: Rackmount Overkill

Rebuild finished and everything seems to be working. I'm guessing that those pins that were touching caused data errors on the disks. I'm getting roughly 186 MB/sec writing to the (four) disks in RAID 10. I'm planning on making this one a database server, and maybe something else.
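For anyone curious, this is roughly how I sanity-check the array after a rebuild, assuming the four disks are in a Linux md software RAID set (the device name below is just an example; a hardware controller would report this through its own utility instead):

Code:
# quick overview of all md arrays and their sync/rebuild status
cat /proc/mdstat

# detailed state, layout and per-disk health of the RAID 10 array
mdadm --detail /dev/md0

A clean finish should show the array state as "clean" with all four members active and nothing still resyncing.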
 
Bought more equipment and doubled my total data storage.

2x SE7520BB2 mobo
4x 1.66GHz Core Duo processors
4x aluminum 2U passive heatsinks
4x 2GB (8GB total) DDR2-800
4x 1GB (4GB total) DDR2-800
4x Hitachi 5K3000 drives

I'm going to throw each of those systems in this case and grab a pair of power supplies for it.

http://www.newegg.com/Product/Product.aspx?Item=N82E16811219023

I intend to use them as virtual machine servers. The processors are low-power Sossaman (Yonah-based), from just before Core 2 came out. They certainly aren't the fastest things out there, but they allow me to have a plethora of machines on my internal network to do whatever I want.
 
Let me know if you need more RAM; I may have some here at work I could throw your way.
I appreciate the offer, but I doubt you will have the sticks I need. All the servers are running ECC (and probably registered) memory. In addition, they'd have to be the same size/speed/voltage sticks.
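For reference, this is roughly how I check what's already installed before hunting for matching sticks; dmidecode reports size, type and speed per slot, and whether the board is running ECC (field names vary a bit by BIOS):

Code:
# size, type and speed for every populated DIMM slot
dmidecode --type memory | grep -E 'Size|Type|Speed|Locator'

# the physical memory array section shows the error correction type (ECC or not)
dmidecode --type 16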

Making huge progress on the database server. My goal was to run some type of database without disabling the security layers I normally run (SELinux and iptables), and I managed to keep both enabled. I used XBMC with an advanced configuration pointing at a remote MySQL database. I had to configure MySQL and iptables; strangely enough, I didn't have to touch SELinux at all for it to work. Here are the databases it created on the server:

Code:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| xbmc_video         |
+--------------------+
3 rows in set (0.00 sec)

mysql> use xbmc_video
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------------+
| Tables_in_xbmc_video   |
+------------------------+
| actorlinkepisode       |
| actorlinkmovie         |
| actorlinktvshow        |
| actors                 |
| artistlinkmusicvideo   |
| bookmark               |
| country                |
| countrylinkmovie       |
| directorlinkepisode    |
| directorlinkmovie      |
| directorlinkmusicvideo |
| directorlinktvshow     |
| episode                |
| episodeview            |
| files                  |
| genre                  |
| genrelinkmovie         |
| genrelinkmusicvideo    |
| genrelinktvshow        |
| movie                  |
| movielinktvshow        |
| movieview              |
| musicvideo             |
| musicvideoview         |
| path                   |
| setlinkmovie           |
| sets                   |
| settings               |
| stacktimes             |
| streamdetails          |
| studio                 |
| studiolinkmovie        |
| studiolinkmusicvideo   |
| studiolinktvshow       |
| tvshow                 |
| tvshowlinkepisode      |
| tvshowlinkpath         |
| version                |
| writerlinkepisode      |
| writerlinkmovie        |
+------------------------+
40 rows in set (0.00 sec)
While adding information to the database, it was only using 7% of one core (on a dual-socket, single-core [hyperthreaded] server, so 4 logical cores). I'm shocked at how everything fell into place and just worked. I'm thinking of moving the LDAP database to this system as well, since it handles the SQL load so easily. That would finally let me get my RADIUS server up and running.
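For the record, the MySQL and iptables side boiled down to something like this (the user, password and subnet are placeholders, not my real ones):

Code:
-- on the server: a MySQL user XBMC can use from the LAN,
-- allowed to create and manage its own xbmc_* databases
CREATE USER 'xbmc'@'192.168.1.%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON `xbmc\_%`.* TO 'xbmc'@'192.168.1.%';
FLUSH PRIVILEGES;

# and the firewall: allow 3306/tcp from the internal subnet only
iptables -I INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -j ACCEPT
service iptables save

(Also worth checking that mysqld isn't bound to localhost only in /etc/my.cnf.)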
 
Starting with the boards first. Just got these pulled out for preparation and testing.

arrived_naked_1.JPG


arrived_naked_2.JPG


arrived_naked_3.JPG


arrived_naked_4.JPG


Such sexy systems. :drool:
 
Got the heatsinks mounted and the drives in the server, which are currently formatting. I've decided to run these in RAID 10 so I can move my VMs to it.

hitachi_2tb_1.JPG


hitachi_2tb_2.JPG


hitachi_2tb_3.png


hitachi_2tb_4.png
 
Running LinX on the server board to make sure it is stable. I have no doubt it will be fine, but I want to be certain and see how much processing power this thing has. On the other side, I have three "dd" processes writing to the new drives in RAID 10, and it is taking it like a champ.

hitachi_2tb_5.png


That is 80, 54.1 and 50.9 MB/sec at the same time. Considering the drives are thrashing to handle all three writes at once, 185 MB/sec isn't bad.
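For anyone who wants to repeat it, the three writers were just variations of this, started in separate terminals (paths and sizes are examples):

Code:
# write 8 GB of zeros straight to the array, skipping the page cache
dd if=/dev/zero of=/mnt/raid10/test1 bs=1M count=8192 oflag=direct
# same again with test2 and test3 in two more shells,
# then watch the combined rate in iostat -m 2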
 
The rails arrived for the servers today. I got the 2650s and the 2550s installed at the top.

dell_2650_racked.JPG


Still need to order cases, rails, power supplies and hard drives for the dual-socket Yonah systems. In addition to that, I need to sort out my networking and power runs. I'm also trying to find something else to use that 7U Sencore system for; the hardware is older than I want, so I don't think I'll ever use it as a server.
 
Looks like I will be getting a Dell PowerConnect 5224. Just need to talk with the seller.

I also decided not to be lazy tonight and set up Conky. It only took me a few months to get around to installing it.

conky_08-01-2011.png
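The config is nothing special; trimmed down it's basically this (a rough sketch of an old-style .conkyrc, not my exact file):

Code:
alignment top_right
background yes
update_interval 2.0
double_buffer yes

TEXT
${nodename} - ${kernel}
CPU:  ${cpu}%  ${cpubar}
RAM:  $mem / $memmax
Load: ${loadavg}
Net:  down ${downspeed eth0} / up ${upspeed eth0}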
 

Them boards look very similar to what my fileserver is running:

Intel SE7520BD2

SE7520BD2.jpg

This board has been rock solid since the day I got it :thup:

Only complaints are that it puts out a fair amount of heat when under load, so it probably puts the electric bill up a bit too :-/
 
Finally got around to powering up the database server since I've added it to the rack, and it does not want to work. I was SSH'd into the box when I heard the bell notification; I'd forgotten I had the terminal open and was checking my own system to make sure something hadn't broken. Then I remembered the open terminal and saw disk read errors splattered across htop. I went downstairs, hooked up the LCD to it, and it was spewing disk read errors with two of the disks flashing yellow. I shuffled the disks around to see if the problem follows those specific disks or the positions on the breakout board. Still waiting on the rebuild.
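While I wait, I'm pulling SMART data to see whether the errors follow the disks or the slots; roughly this, assuming smartmontools is installed (device names are examples):

Code:
# overall health verdict for each suspect disk
smartctl -H /dev/sdb
smartctl -H /dev/sdc

# the counters that actually matter for failing media
smartctl -A /dev/sdb | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'

If those counters stay at zero but the errors move with the slot after the shuffle, that points at the breakout board or cabling rather than the drives themselves.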
 
I got three 15k SCSI disks with my server. Two were dead :( Hope you have better luck :thup:
 
Even if the board is dead, I have a spare SCSI RAID controller that I might be able to put in. It looks like it supports external cards. So, I should be able to get this working, either way.
 
Thanks for the offer, but CW sent me what I think is a Perc 3/i.
 
I've been trying to figure out how I want to do these new Yonah virtual machine servers. The hard drives, unsurprisingly, are going to be the most difficult decision of all. My main concerns are speed, the ability to easily back up the virtual machines (including configuration files), and good use of the hard drive space.

My first idea was to get eight drives and do two RAID 10 arrays. This is going to be costly unless I go with outdated (and out of warranty!) drives. For speed, this is probably the best I can do. For backing up, I could rsync/scp to the file server; easy enough. The problem I see with this, excluding cost, is that it isn't an efficient use of the hard drive space. Even if I get ancient 250 GB hard drives, that leaves me with a massive 500 GB RAID 10 array. My current virtual machines take up a whopping 146 GB, including the over-sized 64 GB drive for the Windows 7 machine. I could easily get this below 75 GB for my current machines, and that would be distributed between three servers. That leaves me with 1 TB of space that I can't easily use. I don't like it, for that exact reason.

My more recent (and I think more clever) idea is to create a SAN using SCSI targets over Ethernet (iSCSI). I don't mean the full Fibre Channel, insanely expensive stuff (I wish!), but cheaper technology: a plain LAN cable. This would give me roughly 100 MB/sec of throughput for a virtual machine, which I see as more than enough. If I wanted to go with something faster later, I have two gigabit fiber cards lying around, so I could easily upgrade if I can get hold of a switch. Backups are also just as simple with different drives. This also solves the problem of "misusing" hard drive space, as I can slap in older drives for the OS on the virtual machine servers. I could assign 100 GB to each server and expand later (I hope) if needed. If I go this route, I need to re-think my current setup and change how the drives are used. I also need to research how these targets actually work; if they are treated like a local disk (meaning you format and mount them like local disks), I should be able to do this easily.
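From the reading I've done so far, the rough shape of it on Fedora would be scsi-target-utils on the storage box and iscsi-initiator-utils on the VM hosts; something like this (IQNs, IPs and the backing device are placeholders while I figure out the real layout):

Code:
## storage server, with tgtd running ##
# define a target and attach a backing store (an LV, partition or image file)
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2011-08.home:vmstore.lun0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg_storage/vm_lun0
# let initiators on the LAN connect
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

## VM host, as the initiator ##
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2011-08.home:vmstore.lun0 -p 192.168.1.10 --login

From what I've read, the logged-in LUN shows up as an ordinary block device on the VM host, so it does get partitioned, formatted and mounted like a local disk.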

If I go with my current idea, I could change the 2 TB Hitachi drives, which are currently in a RAID 10 array, over to RAID 5/6 and use that array the way I've been using my current one. That would free up my 1 TB drives for a RAID 10 array: six drives for the live array plus a hot spare. After I partition off the SCSI targets, the rest could be used for storage or backups. This would let me easily expand my storage array later by simply adding more 2 TB drives. On the downside, this is going to take a serious amount of my time in research and testing. Not to mention, I'm going to want to start the server's OS install over; I've got a lot of crap on it that I don't need, and it would take far longer to remove it all than to start over.
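The array creation itself would be the easy part; if it's Linux md like the current setup, roughly this (device names are placeholders, and everything on them gets wiped):

Code:
# the four 2 TB Hitachis as RAID 6 for bulk storage
mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[b-e]
# six 1 TB drives as RAID 10 plus one hot spare for the VM/target side
mdadm --create /dev/md2 --level=10 --raid-devices=6 --spare-devices=1 /dev/sd[f-l]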

Finally, I think I've decided on VirtualBox for my hypervisor. I haven't mentioned much about wanting to switch, but I've been hitting constant issues with libvirtd. I had issues with the Fedora 14 install hanging and found that it came down to how much memory the virtual machine had: if it was under 1024 MB, the script simply crashed and nothing happened. Simple, I thought, I'll just increase the memory to----. Oh, that's interesting, I got an error changing the memory of the virtual machine. I looked up the message and it's a known issue, great. The only ways to solve it are to delete the virtual machine and create a new one, or to change the virtual machine's configuration by hand and bounce libvirtd. I've been lazy and have done neither.

So, I started looking around for a new hypervisor. I was thinking of ESXi or XenServer for the Yonah servers, but I have issues with both (the free version of ESXi is lacking in features). That leaves me with a non-bare-metal hypervisor. VMware Server is a joke on Linux with newer operating systems (the web front-end crashes all the time). Someone mentioned VirtualBox and I didn't think much of it until I started looking around. The features it has for what it costs (nothing) are incredible. For example, you can pass PCI devices straight through to a virtual machine. That should let you pass a video card and sound card through and run an HTPC in a completely virtual environment. I tried this out on one of my Yonah servers, but it doesn't have the proper chipset to do it (has to be ICH9 or newer). I'd love to try this out.
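As a taste of what sold me, the whole VM lifecycle is scriptable through VBoxManage, and the PCI passthrough I mentioned is a single modifyvm flag (the VM name and PCI addresses below are made up for illustration, and passthrough still needs the host IOMMU side to cooperate):

Code:
# create and register a headless VM
VBoxManage createvm --name htpc-test --ostype Fedora_64 --register
VBoxManage modifyvm htpc-test --memory 1024 --chipset ich9 --nic1 bridged --bridgeadapter1 eth0

# pass a host PCI device (host address @ guest address) straight into the VM
VBoxManage modifyvm htpc-test --pciattach 02:00.0@01:05.0

# start it without a GUI
VBoxManage startvm htpc-test --type headless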

-------

So, I need to:

1) Research more on SANs, SCSI targets and how to use them.
2) Test out SCSI targets on the current server or on the virtual machine servers to see how they work.
3) Get any information out of the virtual machines before switching hypervisors.
4) Change the server's storage drives, if that needs changing.
5) Format the server and start over.
6) Configure and setup VirtualBox on all three servers.
7) Buy more 2 TB drives.
 