
Project: Rackmount Overkill

I'm grabbing Xen and XCP (Xen Cloud Platform) to see how nicely they play. Pulling Fortran out of the rack right now.
 
I had to look up the problems I was having because it was so long ago that I didn't remember. It looks like I had issues with it locking up, and it wouldn't let me change the memory amount of a virtual machine once it was created.
 
Haha, ok. So, XCP is stupid good. This isn't even funny. Even though I have to manage the servers from Windows with Citrix XenCenter, I'm absolutely blown away by the features this has. I can pool storage resources, which allows me to move virtual machines between servers or start them on whichever server is least loaded, automatically. It has notifications built in, with logging and email. I get much more detailed performance data (CPU, memory, networking, disk) per virtual machine. I can manage multiple servers at once, in one place, in one program. Snapshots, memory ballooning, network pooling/failover, and tons more.

As IMOG just put it, I'm nerdgasming so hard right now. I'm going to hammer this pretty hard and see if it will work for me. So far, nothing has gone wrong, and I'm expecting to hit some really dumb limitation that is going to prevent me from using it. However, I really want to use this.
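
For anyone who would rather poke at this from a script than from XenCenter: XCP also exposes the XenAPI over HTTPS, and the XenAPI Python bindings make it easy to pull the same information. A rough sketch (the hostname and credentials are placeholders) that lists every VM in the pool, its power state, and which host it is resident on:

Code:
#!/usr/bin/env python
# Rough sketch against the XenAPI Python bindings that ship with XCP/XenServer.
# The hostname and credentials below are placeholders.
import XenAPI

session = XenAPI.Session("https://fortran.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    for vm_ref in session.xenapi.VM.get_all():
        rec = session.xenapi.VM.get_record(vm_ref)
        # Skip templates and the control domain (dom0)
        if rec["is_a_template"] or rec["is_control_domain"]:
            continue
        host = "-"
        if rec["power_state"] == "Running":
            host = session.xenapi.host.get_name_label(rec["resident_on"])
        print("%-24s %-10s %s" % (rec["name_label"], rec["power_state"], host))
finally:
    session.xenapi.session.logout()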

Main overview of a server. You can see the listing of servers on the left.
xcf3.png


Virtual machine console view. I noticed a few things that I'm impressed by: Num Lock works, the mouse positions correctly, and the mouse scrolls. In VirtualBox, you have to install the tools before the mouse positions correctly, and if you are using phpVirtualBox, the num pad and scrolling never work.
xcf1.png


Virtual machine statistics.
xcf2.png


I found this gem while creating a virtual machine: "Don't assign this VM a home server. The VM will be started on any server with the necessary resources." :attn:
xcf4.png
 
I was trying to figure out how the dynamic memory works. I thought that you would set it to a "minimum" value and it would balloon up to the maximum if the guest requested it. Instead, it uses the full amount unless the host needs to allocate memory for something else. For example, I created a virtual machine with 6750 MB of RAM on the server, which leaves only 467 MB for a virtual machine assigned 1024 MB. Since the tools are installed, the balloon driver "takes up" the amount of memory being reclaimed inside the virtual machine: my 2003 install was reporting around 187 MB used at the desktop previously, but is now reporting 685 MB used.

This is pretty clever. If a virtual machine has free memory, you've allowed that memory to be taken away, and the server needs the RAM, it will reduce that VM's allocation to run another virtual machine.

"107% of total memory"
xcf5.png
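
If you'd rather set the ballooning range from a script than from XenCenter's memory tab, the same dynamic minimum/maximum is exposed through the XenAPI. A quick sketch (VM name, host, credentials, and sizes are placeholders, not my actual setup):

Code:
# Sketch: let the host balloon a VM between 512 MB and 1024 MB as needed.
# VM name, host, credentials, and sizes are placeholders.
import XenAPI

MB = 1024 * 1024
session = XenAPI.Session("https://fortran.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("2003-test")[0]
    # The dynamic range has to stay inside the VM's static min/max.
    # Memory sizes go over the wire as strings of bytes.
    session.xenapi.VM.set_memory_dynamic_range(vm, str(512 * MB), str(1024 * MB))
finally:
    session.xenapi.session.logout()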
 
I've considered moving to a baremetal hypervisor, yes. I think I'd have a few issues, though.

Disks passing through to the virtual machines is the biggest one. I wouldn't have an issue with that now because I'm using a real RAID controller, but I was considering ZFS, and that completely removes that option (short of running it on a single volume, in which case I might as well run EXT4). This isn't a deal breaker, but it kinda sucks.

You can set up Raw Device Mapping for ZFS.

The next issue may be a non-issue. When I looked, I thought that the viewer portion of ESXi required a Windows client to manage it. I use Linux as my desktop operating system, so that is difficult to work around. I'd rather not run a virtual machine on my system just to manage virtual machines on the server; that is unnecessarily complex. If there is a Linux variant, that solves this issue. I'll look into it after I type up this post.

You can partially manage it using VMWare Workstation for Linux, but the Windows ESXi client is more detailed than WS8. The best option is setting up vSphere and the web client. Alas, vSphere is expensive.

Another major issue is related to the RAID card. I haven't found how to manage the RAID card remotely. Right now, I can fire up software to do anything on the RAID controller. With a baremetal hypervisor though, I lose that layer and can't manage it short of being right at the terminal. This isn't a huge issue, but it is very convenient to not have to stand in front of the rack or restart the server to manage disks/arrays.

You can use LSI's MSM tool on your client by installing a VIB for your card on ESXi.

Licensing is my last concern. I want to stick with a hypervisor that is going to be around for a while. While I don't mind reinstalling my virtual machines, it really is a pain in the *** to make sure everything is set up right.

Regardless, I'm certainly open to suggestions, and I'm willing to test it out on one of the IBM servers. With as much data as I have, I can't afford to get halfway through a conversion and go "oh crap, this doesn't work".

EDIT: I have access to Hyper-V on MSDN. Might check that out for fun, if nothing else. Double EDIT: Maybe not, this is really limited.

EDIT: LOL, didn't read this last page. It seems Xen is nice! Might check that out, as my server is pretty much down right now (two, yes, TWO HDDs died).

EDIT2: Something stupid just popped into my mind. ESXi and XCP only have full-featured free clients on Windows (ESXi VMs can be managed via WS9). Running a VM on your client to manage a VM is just ridiculous. But what about running an XP VM on your server, and then connecting to it via RDP to manage XCP/ESXi?

EDIT3: Sub'd to your blog.
 
EDIT2: Something stupid just popped into my mind. ESXi and XCP only have full-featured free clients on Windows (ESXi VMs can be managed via WS9). Running a VM on your client to manage a VM is just ridiculous. But what about running an XP VM on your server, and then connecting to it via RDP to manage XCP/ESXi?
I'm not sure what you mean, unless you mean managing the server/server farm from its own virtual machine. If so, then yes, it is doable, just not very convenient should something go wrong: if you have connection issues or that virtual machine goes down, you'd have to start up a local virtual machine or restart into Windows.

I'm still trying to figure out how I want to re-do the file server. Passing through a 14TB virtual disk just seems a little excessive. I could break it up into multiple disks ("backup", "media", etc) or even multiple servers, if I wanted.

I also want to figure out how difficult disaster recovery is. If I have a server go down hard, I want to know what kind of trouble I'm in. For example, if the storage containers are nigh-impossible to open, then I'll need to have backups of the virtual machines. But if I can simply mount the array, I'll be less worried about it. From what I can tell, the latter is true.
 
I'm not sure what you mean, unless you mean managing the server/server farm from its own virtual machine. If so, then yes, it is doable, just not very convenient should something go wrong: if you have connection issues or that virtual machine goes down, you'd have to start up a local virtual machine or restart into Windows.


Yeah, re-reading my own post made me realize that.
 
I took one of the PowerEdge 2650's out of the rack yesterday and disassembled it in preparation for parting it out. Running systems this old isn't worth the electricity cost, and I can easily virtualize the tasks both were doing. I'm currently DBAN'ing both sets of disks before taking the last server out of the rack.

I'm also seriously considering upgrading my IBM x3650 M1 systems. The cost to upgrade the disks and RAM is more than I'd like, mainly due to the latter. I'd rather sell the servers, upgrade to an M2/M3, and have more processing power, RAM, and disk storage.
 
Tonight, I installed XCP on the second x3650 and added it to a pool. This allows me to transfer virtual machines between servers not only when a virtual machine is off, but also while it is running. I wanted to test this feature out.

I used my 2003 test install, created a share, copied a BluRay file over, and shut down the virtual machine. Before adding complexity, I tried doing an offline migration of the virtual machine and it went through without issue. Then, I started it up, mapped the share on my desktop, and started watching the movie. About 5 minutes in, I initiated the live transfer. For only a brief moment (couldn't have been more than half a second), the movie stopped playing right as it switched which server it was running on, but it quickly recovered as if nothing happened. I'm impressed.
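
For reference, the migration can also be kicked off outside of XenCenter through the same XenAPI bindings. When the VM's disks sit on shared storage it is a single pool_migrate call (names below are placeholders); with local disks, like in my test, XCP goes through its storage-migration path instead and copies the disks as part of the move.

Code:
# Sketch: live-migrate a running VM to another host in the same pool,
# assuming its disks already live on a shared SR. Names are placeholders.
import XenAPI

session = XenAPI.Session("https://fortran.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("2003-test")[0]
    dest = session.xenapi.host.get_by_name_label("cobol")[0]
    session.xenapi.VM.pool_migrate(vm, dest, {"live": "true"})
finally:
    session.xenapi.session.logout()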

Doing a bit more playing around and research, I found that there are "pools" and shared storage repositories. Pools allow you to share server resources, but not disk space. If a virtual machine needs to be migrated, you have to push the whole thing over, and since the files are stored locally, if a server goes down, its virtual machines can't be started on another server. Shared storage repositories share storage between the servers, but the storage can't live on the pool members themselves. This is meant to be used with a SAN/iSCSI backend, where you have a huge array of disks and your "compute nodes" connect to them. That kind of setup enables features such as high availability and much faster moves between servers, since there is no need to copy the disks over. If a server goes down, its virtual machines can simply be started on another machine, as long as the SR is available.
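
For the record, hooking a pool up to an iSCSI backend is a single SR.create call (or the equivalent New Storage wizard in XenCenter). The target address, IQN, and SCSI ID below are placeholders for whatever the SAN actually exposes:

Code:
# Sketch: attach a shared iSCSI ("lvmoiscsi") SR to the pool so VM disks live
# on central storage instead of local disks. Target details are placeholders.
import XenAPI

session = XenAPI.Session("https://fortran.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    master = session.xenapi.host.get_by_name_label("fortran")[0]
    device_config = {
        "target": "192.168.1.50",                       # iSCSI target IP
        "targetIQN": "iqn.2013-01.lan.storage:vms",     # target IQN
        "SCSIid": "360000000000000000e00000000010001",  # LUN SCSI id
    }
    sr = session.xenapi.SR.create(master, device_config, "0",
                                  "iSCSI VM storage", "Shared SR on the SAN",
                                  "lvmoiscsi", "user", True, {})
    print("Created SR: " + sr)
finally:
    session.xenapi.session.logout()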

The problem I'll run into is that if I put all my servers on XCP, none can be iSCSI targets and those features won't be available. This limits my storage space to the local server. In the case of my x3650's, I only have 126.5 GB available each unless I add more disk. I could change the server configurations up a bit by putting a lower-powered motherboard/cpu in Ruby's current case and moving that server to a different chassis. This would allow me to keep my file sharing the way it is and convert my file server to a file server/iSCSI target combination. Alternatively, I could use one of the x3650's as the iSCSI target and hook it to the external port of the SAS expander. Hardwiring the power supply in the Norco 4020 to run without a motherboard would be trivial, but getting power to the SAS expander might not be.

I'm trying to determine what the best setup will be, hardware-wise. Not being able to share out local storage makes this a bit harder to configure with what I have. I do have my old QX9650 and Asus motherboard that I could use, but I'd rather have ECC RAM and "server grade" equipment for this, especially since it will be responsible for many terabytes of disk storage. I'll keep a watch on the classifieds section.


While I figure that out, here are some screenshots of my progress tonight.

Server migrated (while off) to Cobol from Fortran:
xcf6.png


Migration window while the server is running. It gives various options, such as where to put the storage (if there were shared storage or multiple local disks), what network to transfer across, etc.
xcf7.png


Migrating while the server is on and streaming media. Not pictured: I'm watching a movie on the other screen. It paused for a total of 0.5 seconds.
xcf8.png


Live migration complete. Finished watching the movie without a problem. I'm still looking into the errors listed. Basically, it was saying it couldn't find the snapshot I had taken of the disk, so it couldn't transfer. Right above that, I deleted the snapshot and the transfer went through. I'm not alone in getting this error, but I'm guessing shared storage fixes this problem.
xcf9.png
 
Does XCP run already made VM's from ESXi, or will new VM's have to be created?

edit: what I got from your post = if you use XCP with the VM's stored on iSCSI they can move b/t servers with minimal impact...is that right?
 
It gives me the option of ovf, ova, gz, vhd, vmdk, and xva for import file types.

edit: what I got from your post = if you use XCP with the VM's stored on iSCSI they can move b/t servers with minimal impact...is that right?
Without the shared backend, you can still transfer the virtual machine between servers, even while it's running. The shared storage is there to speed up the transfer and create an HA environment, where if a host goes down, the instance can be started on another host immediately.
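
To make that concrete: with the disks on a shared SR, bringing a VM back up on a surviving pool member is just a start call aimed at that host, no disk copy involved. A sketch (names are placeholders, and it assumes the pool already considers the VM halted):

Code:
# Sketch: start a VM on a specific surviving host after its original host died.
# Only works when the VM's disks are on a shared SR; names are placeholders.
import XenAPI

session = XenAPI.Session("https://cobol.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("fileserver")[0]
    host = session.xenapi.host.get_by_name_label("cobol")[0]
    session.xenapi.VM.start_on(vm, host, False, False)  # start_paused, force
finally:
    session.xenapi.session.logout()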
 
I have called dibs on four 300 GB Velociraptors for virtual machines. My goal is to convert my file server to a SAN and move everything to VMs. I have much more planned, but I'm shopping around at the moment.
 
So how loud is your rack? I've always wondered what thin servers sound like. Err, I guess you took out the thin ones, I think you said. But how loud was it?
 
So how loud is your rack? I've always wondered what thin servers sound like. Err, I guess you took out the thin ones, I think you said. But how loud was it?
I don't really have anything to compare it to and nothing to measure it with.
 
Sorry, I should have explained better, sorry :beer: lol

Well, my server is almost annoying with ~2500 RPM 80mm fans and a 120mm fan at ~1200 RPM, if that works as a good comparison, idk. My main PC is ~1k RPM all around (120mm's) and it's perfect.
 
Low-speed fans in servers are generally a bad idea. Server cases are restrictive since they pack in as much hardware as possible, so high-speed fans are needed to keep up the airflow. Even if it works, you run the risk of a single fan failing and overheating the server, whereas with faster fans that might not have happened.
 
Icinga is up and running on a virtual machine. Since XCP is basically CentOS, I should be able to get monitoring working on the hosts, as well.
 
Can you get NRPE monitoring with Icinga or Nagios on XCP? Yup. It was actually pretty easy. Instead of doing the compiling directly on the servers, I created a CentOS 5.7 VM, updated it, compiled all the plugins, tarballed the directory, and threw it on my internal web server. I then installed xinetd, untar'd the file, and set up the services.

Now I can monitor my hosts directly, in addition to the virtual machines.

I can probably throw together a script if anyone is interested.
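
Here's a rough outline of what that script would do, in case it saves anyone some typing. It assumes the tarball is rooted at /usr/local/nagios and sits on an internal web server; the URL, paths, and monitoring host IP are all placeholders for your own layout. After that it's just a matter of defining the hosts and check_nrpe commands on the Icinga side.

Code:
#!/usr/bin/env python
# Rough sketch of the deploy script: pull the pre-compiled nagios-plugins/NRPE
# bundle off an internal web server, unpack it, and wire NRPE up under xinetd
# on an XCP host. URL, paths, and the allowed monitoring host are placeholders.
import subprocess
import urllib

TARBALL_URL = "http://webserver.lan/nrpe-bundle.tar.gz"  # placeholder
NAGIOS_ROOT = "/usr/local/nagios"                        # placeholder
ICINGA_HOST = "192.168.1.10"                             # placeholder

# 1. Fetch the bundle and unpack it (tarball assumed rooted at NAGIOS_ROOT)
urllib.urlretrieve(TARBALL_URL, "/tmp/nrpe-bundle.tar.gz")
subprocess.call(["tar", "-xzf", "/tmp/nrpe-bundle.tar.gz", "-C", "/"])

# 2. Make sure the nagios user and xinetd are present
subprocess.call(["useradd", "-r", "nagios"])
subprocess.call(["yum", "-y", "install", "xinetd"])

# 3. Register NRPE as an xinetd service, only reachable from the Icinga box
xinetd_conf = """service nrpe
{
    socket_type = stream
    port        = 5666
    wait        = no
    user        = nagios
    server      = %s/bin/nrpe
    server_args = -c %s/etc/nrpe.cfg --inetd
    only_from   = 127.0.0.1 %s
    disable     = no
}
""" % (NAGIOS_ROOT, NAGIOS_ROOT, ICINGA_HOST)
open("/etc/xinetd.d/nrpe", "w").write(xinetd_conf)

# 4. Teach /etc/services about port 5666, then restart xinetd
if "nrpe" not in open("/etc/services").read():
    open("/etc/services", "a").write("nrpe            5666/tcp   # NRPE\n")
subprocess.call(["service", "xinetd", "restart"])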

icinga.png
 