
Project: Rackmount Overkill

What software do you use to monitor the UPS(s)?
I'll be setting that up later. I'm currently busy, so I can't tinker with them too much and certainly can't take down the internet to install them.
 
I was at work and needed the internet for the VPN and applications that go across it. I'm loading up the UPS units now to make sure they work. All virtual machines, except for the scanner feed, are currently shut down.
 
I've migrated to the new Dell UPS units. I tested them out of the rack to make sure they take a load and recharge the batteries. The charging draw below 90% battery is 66 W each, and above 90% it drops to 33 W each. The fans on these units run all the time, unlike "normal" desktop versions. They're supposedly refurbished units, but they look brand new; no scratches or fingerprints to suggest they've ever been installed in a rack.

Once I got all the equipment installed, I did a "full system test" by flipping the breaker to simulate a power outage. Both held up as expected. With all equipment running and ~90% battery, I have around 35 minutes on battery for one, 20 for the other. The bottom unit is slightly more loaded than the top as it is running the firewall and networking equipment.
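
For anyone curious how runtime numbers like that pencil out, here's a rough back-of-the-envelope sketch in Python. The battery capacity, load, and efficiency figures are placeholders I made up for illustration, not measurements from these units:

# Rough UPS runtime estimate. All figures below are hypothetical placeholders,
# not measurements from the Dell units in the rack.
def runtime_minutes(battery_wh, load_w, inverter_eff=0.9, usable_frac=0.9):
    """Estimate minutes of runtime for a given load in watts."""
    usable_wh = battery_wh * usable_frac * inverter_eff
    return usable_wh / load_w * 60

# e.g. a ~400 Wh battery pack at ~90% charge feeding a ~500 W load
print(round(runtime_minutes(battery_wh=400 * 0.9, load_w=500)))  # ~35 minutes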

Networking and power cables in the back are not final, but I did reroute anything I could while the systems were down. I had to swap out the LAN-side cable on the firewall because the connection was pressure sensitive and kept dropping in and out.

dell_ups_installed_1.JPG
Yes, those displays are much brighter than the Poweredge servers above them. Brighter = more power. With that formula, you want the UPS to be brighter (more power) and everything else to be dimmer (less power). It makes it run longer.

dell_ups_installed_2.JPG

dell_ups_installed_3.JPG

dell_ups_installed_4.JPG
 
The bezel arrived for the other r710 along with a set of cable management arms.

DSC_0394.JPG
 
You'll have to unbolt them from my cold, metal rack, comrade.
 
The 1u cable manager arrived yesterday and I got it installed. I moved the fiber switch down and placed the cable manager between the two switches. Now I need to work on finding cables of the proper length and re-doing the network cables since they won't be long enough for the cable management arms.

You can see a fiber cable being run; it just barely reaches the top R710 server's rightmost fiber port. I need to get longer ones to make actual connections.

DSC_0418.JPG
 
I flipped the fans in the fiber switch so that they blow out the back. Otherwise the switches would recycle the air between them, which probably wouldn't be an issue, but I'd rather not risk it. This switch is well designed from an airflow standpoint: there is a spacer behind the fans (unlike the Powerconnect), so they aren't smacked up against the finger guards.

Instead of flipping the fans in the fiber switch, I could have flipped them in the Powerconnect, but it is much louder.

DSC_0423.JPG
 
After much frustration last night, I found that you should always check your assumptions before spending hours on a problem that doesn't actually exist.

I have new disks and a new RAID controller on order for AWK so that I can finally get away from the 8708EM2 and the Velociraptors that are currently in it. While they work fine, there is certainly a large performance difference between AWK and Lucid when it comes to disk usage. Visbits was also kind enough to get me a good deal on the parts, as always.

To prepare for the new disks, I needed to move all the virtual machines from AWK over to Lucid. The problem was that when I installed Xen Cloud Platform (XCP) on AWK, I didn't enable thin provisioning**. After creating a few virtual machines, I found that I was running out of space fairly quickly and decided that I should have enabled it. So when I installed Lucid, I enabled the feature and decided that if AWK ever got reinstalled, I'd switch the option there too.

To migrate the virtual machines over to Lucid, the plan was to create a new virtual machine on each server as a Hardware-assisted Virtual Machine (HVM), boot Clonezilla on AWK, attach each virtual machine's disk to the Clonezilla VM, create an image of the disk, and store it on the file server. From there, I could recreate the virtual machine on Lucid and restore the disk image. This worked well right up until Clonezilla let me select the "xvd*" device of the attached virtual disk, then yelled at me because the disk wasn't a "sd*" or "hd*" device! Who cares? Why is this even a check?!

Some quick searching led me to a bug report of others hitting the same issue, and developers saying they fixed this in >2.1. I downloaded 2.1 at a blazing 80 kb/sec, thanks to SourceForge, only to find that the ISO doesn't even function in XCP. It gets past the bootloader and hard locks, spiking a CPU and never finishing. Others reported the same issue, and nothing I tried fixed it. I pulled down another version (again at 80 kb/sec, thanks SourceForge), and it did the same thing.

At this point, I'm completely stuck. I can't migrate the disks because one server is thick provisioned and the other is thin provisioned. I can't move the disks themselves because Clonezilla is broken and there is no obvious solution. The only option I have is manually reinstalling all the virtual machines on Lucid and setting everything back up, which is a multi-week operation at best. I was not looking forward to it. While looking for other information, I happened to stumble across this article.

It is even possible to convert a thick-provisioned disk into a thin-provisioned disk by migrating it to a thin-provisioning storage repository.
Damn it.
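
For reference, here's roughly what that conversion looks like through the XenAPI Python bindings that ship with XCP/XenServer. This is just a sketch: the host address, credentials, and SR/VDI names are placeholders, and it assumes both storage repositories are visible to the pool you log in to (the xe CLI's vdi-copy gets you to the same place).

import XenAPI  # Python bindings from the XCP/XenServer SDK

# Placeholders only: host, credentials, and names are examples.
session = XenAPI.Session("https://lucid.example.local")
session.xenapi.login_with_password("root", "password")
try:
    # The thin-provisioned (EXT/VHD-based) local SR on Lucid.
    thin_sr = session.xenapi.SR.get_by_name_label("Local storage")[0]

    # A disk that currently lives on the thick (LVM) SR on AWK.
    vdi = session.xenapi.VDI.get_by_name_label("scanner-feed-disk0")[0]

    # Copying the VDI onto the thin SR writes it out as a sparse VHD,
    # which is what converts it from thick to thin provisioning.
    new_vdi = session.xenapi.VDI.copy(vdi, thin_sr)
    print("new VDI uuid:", session.xenapi.VDI.get_uuid(new_vdi))
finally:
    session.xenapi.session.logout()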



**This feature allows the hypervisor to create virtual disks of any size, but only allocate disk space when the virtual machine actually needs it. For example, if you create a new virtual machine with a 32 GB disk, the actual size upon creation will be a couple of KB until you install the operating system. Whereas if you had thin provisioning disabled, it would allocate the entire 32 GB disk when the virtual machine is created, even though it isn't being used. Leaving it disabled will increase performance because the virtual disk is not fragmented on the hard drive, but it takes up a lot more space.
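
If you want to see the same idea outside of a hypervisor, a sparse file behaves the same way. This little Python demo (the file name and size are arbitrary, and the allocated-size check assumes a Unix filesystem) shows a file with a 32 GB apparent size and almost nothing actually allocated:

import os

path = "thin_demo.img"          # arbitrary file name
size = 32 * 1024 ** 3           # 32 GiB apparent size

# Thin-provisioned style: create a sparse file. Nothing is allocated
# until data is actually written into it.
with open(path, "wb") as f:
    f.truncate(size)

st = os.stat(path)
print("apparent size: %d bytes" % st.st_size)
print("allocated on disk: %d bytes" % (st.st_blocks * 512))  # ~0 while sparse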
 
Welcome as always.

I have lots of server stuff available from the datacenter as we replace it. If anyone wants something, have corey contact me about it.
 
I just found out that XenServer went totally open source and is following the Red Hat model of payments (pay for support).

I've been running Xen Cloud Platform (XCP) as my hypervisor on the R710s, which was the non-branded version of XenServer (like CentOS is to Red Hat). The downside is that it lagged behind on updates and was missing features that the full version of XenServer had.

There is a straight upgrade path from XCP 1.6 to XenServer 6.2, so I can directly upgrade my servers.

Licensing for XenServer 6.2.0 has changed. Functionality previously available within premium versions of XenServer is now available free of charge. In order to receive Citrix support, XenServer hosts must be licensed.

...

On installation, XenServer starts in an unlicensed state. When it is unlicensed, there are no restrictions on its functionality or features, with the exception of the application of hotfixes using XenCenter.
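
If you want to double-check what each host reports before and after the switch, the XenAPI bindings expose the product strings. Again, just a sketch with a placeholder host and credentials:

import XenAPI

session = XenAPI.Session("https://awk.example.local")  # placeholder host
session.xenapi.login_with_password("root", "password")
try:
    for host in session.xenapi.host.get_all():
        name = session.xenapi.host.get_name_label(host)
        sw = session.xenapi.host.get_software_version(host)
        # e.g. "XCP 1.6" before the upgrade, "XenServer 6.2.0" after
        print(name, sw.get("product_brand"), sw.get("product_version"))
finally:
    session.xenapi.session.logout()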
 
The disks and controller arrived for the big R710 server. Dell H700 controller, to replace the LSI 8708EM2 it is currently using, and 3 TB worth of 10k RPM SAS disks (10x 300 GB).

I've moved all the virtual machines over to Lucid to allow installation of the new hardware, along with XenServer.

DSC_0446.JPG

DSC_0448.JPG
 
nice little getup you got thiddy :)


we just got a new bulk load of junk at work and i think one of them is an R710 for AppAssure from Dell. 33 TB of formatted storage ;)


that and we bought an M1000E blade enclosure, 3 blades with two 8-core Xeons and 64 gigs of RAM, and an EqualLogic blade SAN (17 raw, 10k SAS).


i'm loving it :) ESXi 5.5 and vCenter, so we're playing with the big boy toys :)
 
Those look like servers that would be fun to play with. You'd just have to dedicate an entire breaker panel to run one!

I got a bunch of stuff done with the servers today. While I was running DBAN on the Velociraptor drives, I rewired the rack, moved some stuff around, and got it looking all pretty-like. Once the drive wipe was done, I removed the LSI 8708EM2 card and the Velociraptor drives and installed the H700 along with the 8x 300 GB 10k RPM SAS drives.

From there, I built the array and installed XenServer 6.2.0. Once that was done, I upgraded Lucid from Xen Cloud Platform to XenServer.

This is the wiring I ended up with:

server_arm_rewire_1.JPG

server_arm_rewire_2.JPG

server_arm_rewire_3.JPG

The cable management arms allow me to slide the server out without disconnecting any of the cables.
 
Ha, wasn't expecting this to happen anytime soon: I ran a server out of memory. I just need to push more virtual machines back over to AWK, but still.

2013-12-13 10_45_40-XenCenter.png
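
For anyone wondering how to see that coming, this is roughly how you can tally host memory against what the running guests have been given through the XenAPI bindings (placeholder host and credentials again):

import XenAPI

session = XenAPI.Session("https://lucid.example.local")  # placeholder host
session.xenapi.login_with_password("root", "password")
try:
    host = session.xenapi.host.get_all()[0]
    metrics = session.xenapi.host_metrics.get_record(
        session.xenapi.host.get_metrics(host))
    total = int(metrics["memory_total"])
    free = int(metrics["memory_free"])

    # Sum the memory handed to running guests (skip templates and dom0).
    given_to_vms = sum(
        int(vm["memory_dynamic_max"])
        for vm in session.xenapi.VM.get_all_records().values()
        if vm["power_state"] == "Running"
        and not vm["is_a_template"]
        and not vm["is_control_domain"])

    gib = 1024.0 ** 3
    print("host: %.1f GiB total, %.1f GiB free, %.1f GiB given to VMs"
          % (total / gib, free / gib, given_to_vms / gib))
finally:
    session.xenapi.session.logout()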
 
lol Thid you're my most favorite person on here. Because when people tell me my one server with 48GB of ram and 8 VM's is too much... I just show them this thread. So thank you for making me look more reasonable!
 
Glad I could help. :D
 
Makes the 96 MB of RAM in my server seem a lot more reasonable.
 