
Virtual Servers - Good enough?


myststix
Joined: Dec 20, 2007
Location: The Alamo City
Ok, so I've been playing around with Virtual PC. With the "virtualization" craze going on now, can I really get away with running four virtual PCs (or at least three) on one physical computer, e.g. a domain controller, a backup machine, a database, and/or a mail server all on a single box? I can load up on RAM and add a multi-channel SCSI controller that "should" handle the bandwidth requirements.

Can an Intel Quad keep up?
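
For reference, here is the rough budget math I'm picturing; the per-role numbers below are just placeholder guesses, not measurements:

# Rough consolidation budget for one quad-core host; all numbers are placeholders.
HOST_CORES = 4
HOST_RAM_GB = 8
HOST_OVERHEAD_GB = 1.0  # leave something for the host OS / hypervisor itself

# Hypothetical per-guest requirements; substitute measured figures.
guests = {
    "domain_controller": {"vcpus": 1, "ram_gb": 1.0},
    "backup":            {"vcpus": 1, "ram_gb": 1.0},
    "database":          {"vcpus": 2, "ram_gb": 3.0},
    "mail":              {"vcpus": 1, "ram_gb": 2.0},
}

ram_needed = sum(g["ram_gb"] for g in guests.values()) + HOST_OVERHEAD_GB
vcpus_allocated = sum(g["vcpus"] for g in guests.values())

print(f"RAM needed: {ram_needed:.1f} GB of {HOST_RAM_GB} GB available")
print(f"vCPUs allocated: {vcpus_allocated} on {HOST_CORES} physical cores")
print("RAM looks OK" if ram_needed <= HOST_RAM_GB else "Buy more RAM first")
# Some vCPU oversubscription is normal; watch it for latency-sensitive guests.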
 
I have not used Virtual PC, but have extensive experience with VMware Virtual Infrastructure and their free Server product.

I find CPU and memory management is usually very good, but there can be big bottlenecks with disk and network IO. I have tested running database servers as virtual machines and always ended up giving up on the idea. Context switching has also been a bottleneck on VMware up until their latest ESX Server products, and I would not be surprised if it causes problems on Virtual PC.

You get the best results from virtualisation by combining servers that use different resources: for example, one is heavy on CPU, another is fairly heavy on disk IO, and another is fairly heavy on network IO ("heavy" and "fairly heavy" are of course flexible concepts). Even machines with high CPU usage are not a problem as long as the usage is sporadic and the machines are not dependent on each other (running a distributed system with all the virtuals on the same host, where they all start loading each other, might not be a good idea). With regard to CPU, it's good to allocate in such a way that no single virtual (or group of virtuals tied to one system) has all of the host's CPUs allocated. That way no one system can take up all the CPU resources in case of congestion.
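
As a rough illustration of that allocation rule, here is a toy sketch with made-up guests and vCPU counts, nothing VMware-specific:

from collections import defaultdict

# Host has this many physical CPUs/cores.
HOST_CPUS = 4

# Hypothetical allocations: (guest name, vCPUs, which system it belongs to).
allocations = [
    ("shop-web", 2, "webshop"),
    ("shop-db",  2, "webshop"),
    ("mail",     2, "mail"),
    ("dc",       1, "infra"),
]

group_total = defaultdict(int)
for name, vcpus, group in allocations:
    if vcpus >= HOST_CPUS:
        print(f"{name}: a single guest holds all {HOST_CPUS} host CPUs and can starve the rest")
    group_total[group] += vcpus

for group, total in group_total.items():
    if total >= HOST_CPUS:
        print(f"group '{group}': {total} vCPUs >= {HOST_CPUS} host CPUs; cap one of its members")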
 
We used VMware Workstation in school to set up entire networks: domain controllers, mail servers, Linux workstations, etc. It's perfectly possible and has been done many times. Go for it! :)
 

It works fine for testing, but not in a production environment. You really need to consider carefully what resources each VM will be using, and make sure that the machines running on the same hardware will not be fighting over them. Databases are still not a good choice for virtual machines; they need too much direct access to the hardware to work as required.

Lighter-use machines, say a DC on a smaller network, would be a good choice, as would a file server. You really need to look very carefully at what the needs of each machine are and will be before deciding to use a VM in place of an actual machine.
 

Design it for your system

A quad core and many, many GB of RAM and you are set. All you need is access to data. This is why server machines are built the way they are: they don't need DDR2-1066 (TBH neither does your gaming rig), they just need lots of RAM to keep things quick.

Add in a huge amount of IO capacity for disk storage. You will want SCSI or a big pile of Raptors, and SCSI will work better for a lower cost overall.

Disk storage is the bottleneck of even enthusiast-class PCs, but it is the biggest concern for servers.
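
To put some rough numbers on the disk side, a sketch like this helps; the per-spindle and per-guest IOPS figures are ballpark guesses, not benchmarks:

# Back-of-the-envelope IOPS budget; all numbers are rough guesses.
IOPS_PER_SPINDLE = {"7200rpm SATA": 80, "10k Raptor": 120, "15k SCSI": 180}

guest_iops_demand = {"database": 300, "mail": 150, "backup window": 200, "dc": 20}
demand = sum(guest_iops_demand.values())

for disk, per_disk in IOPS_PER_SPINDLE.items():
    spindles = -(-demand // per_disk)  # ceiling division
    print(f"{disk}: roughly {spindles} spindles to cover ~{demand} random IOPS")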
 

I run VMware on 2x dual-core Xeon setups (4 cores at 2.66 GHz) with 16 GB of RAM, and on 4x dual-core Opteron setups (8 cores at 2.4 GHz) with 32 GB of RAM. Storage is a fast SAN cluster from NetApp.

My comparisons are against a physical machine hooked up to the same storage system. With regard to CPU and memory there is not a huge amount of overhead, but with disk and network IO there is quite a bit. This does not mean it would not work; I actually have one database running on a virtual, and our company's corporate web page (among other things) is served from it. The page gets about four million page loads per month, so not very high traffic, but it works without problems. However, if you need maximum IO performance, a database is just about the only thing I would not virtualize for now (apart from some critical core network components).
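
For scale, the simple arithmetic behind that traffic figure (plain averaging, ignoring peaks):

# Average rate implied by ~4 million page loads per month.
page_loads_per_month = 4_000_000
seconds_per_month = 30 * 24 * 3600
print(f"~{page_loads_per_month / seconds_per_month:.1f} page loads per second on average")
# About 1.5/s on average; peaks will of course be several times higher.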
 

:thup:

It will never be as good as having a *real* server; I don't care what VMware's marketing people say. And virtual machine security is not so hot. (skodo is the man.)

One thing about virtualization:
A smart friend of mine said:
Take a look at the current hot trend in system administration -
namely, virtualization. What's that all about? Basically, it's an
acceptance that OS configurations are so arcane that it is
easier to roll them back to a "didn't appear to be busted"
state rather than understand how they got to a busted state.
The operating system sucks, so we're just going to whack and revert it when we need to. I say, instead, use software that DOESN'T SUCK TO BEGIN WITH. That way you don't have to buy vmware or anything else - just the software you're using.
 
Well, there are definite advantages to virtualisation. I don't virtualise systems that are extremely heavy; there is not really any point (licenses are expensive and they end up using most of the server's capacity). However, it's amazing how many low-to-mid usage systems you can bundle up.

Also, the redundancy and flexibility you get from running a cluster with the ability to move virtuals from one machine to another on the fly is amazing.

If you are doing something like running stacks of servers with load balancing, you don't really have anything to gain. If, on the other hand, you are like most companies and have tens of semi-idle machines taking up your rack space, you have everything to gain. I'm running about 40 virtuals on one cluster of four servers, with enough extra capacity that one of the servers can go down and the virtuals can still run. Before virtualisation I would have needed at least 30 physical servers to run the same applications.

We run our systems in a very secure space, which is pretty expensive. Add to that electricity and maintenance fees for servers, and the savings start growing. Especially now that memory is finally starting to be cheap, and servers have more cores than you can throw at your average application, the benefits should be clear.

But it's not a clear-cut case; of course you can find situations where a traditional server is better. We still have a lot of physical servers not running VMware, and they probably never will.
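
That "one host can fail" sizing rule is easy to sanity-check. Here is a minimal sketch with placeholder figures, not my actual cluster:

# N+1 sizing check: can the remaining hosts carry everything if one host fails?
HOSTS = 4
RAM_PER_HOST_GB = 32
VM_COUNT = 40
AVG_RAM_PER_VM_GB = 2.0  # placeholder average per guest

needed = VM_COUNT * AVG_RAM_PER_VM_GB
surviving = (HOSTS - 1) * RAM_PER_HOST_GB

print(f"Guests need ~{needed:.0f} GB; {HOSTS - 1} surviving hosts provide {surviving} GB")
print("N+1 OK" if needed <= surviving else "Not enough headroom for a host failure")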
 
I am trying to think of a reason you can't run that on one server...

but I can't.
Other than separating the roles of servers and making it easier to diagnose and troubleshoot issues? Or, if one system gets compromised, you aren't at the mercy of an attacker finding everything else on that box? Besides, you should not expose your DC to the internet for any reason.

A VMware/virtual system may not be the best idea in this case; I don't see the justification for so few machines/server roles. Individual servers would be a far better choice in this instance, unless you see yourself adding 4-6 servers within 6 months...
 
The one real advantage is if you have the VMs on a SAN, for example; if that server somehow tanks, you can quickly and effectively bring it back up on a different machine.


Virtualization has its place :) It's good to have the base image around so that when something drastic happens you can get the server back up within 30-45 minutes, depending on the situation, versus the hours it would take to troubleshoot and fix the problem, if the problem is even deemed fixable.

It's still nice to have the piece of hardware in front of you though, I would agree with that. I used to run a virtual Exchange at home; I cared about it at first, but my need/want for it slowly disappeared after I had done the VM installs so many times because I didn't like x setting or what have you.
 
When I worked for HUD, we were doing a lot of virtualization. Mostly it was web servers/web applications. We were using Solaris 10 zones on T2000 servers.

These are low-power-consumption boxes with 8-core, 4-threads-per-core CPUs. We were able to allocate resources directly to each zone, and Solaris saw it as 32 CPUs. We could also designate a pool of resources as shared between zones and set up scheduling to add/remove resources for certain zones at certain times. The servers had 32 GB of RAM. They had 4x Gigabit NICs, so we could allocate certain zones to specific NICs when we set it up initially. We were actually trying to team the four interfaces into one when I left... not sure how that went. Storage was handled on the SAN. We were not using ZFS but were looking forward to it eventually/possibly replacing Veritas.

Adding to what has already been mentioned: get multiple network interfaces so your virtual servers can have some exclusivity. Data throughput on your storage media will be the biggest concern. You can address this with add-in disk controllers and disks, up to the point where you can't fit any more. Network storage solutions are also workable.
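
As a rough picture of that NIC planning, here is a toy greedy assignment with made-up interface names and traffic figures; it is planning bookkeeping only, not Solaris tooling:

# Spread zones across physical NICs so no single interface is overloaded.
NICS = ["nic0", "nic1", "nic2", "nic3"]

# Hypothetical expected traffic per zone, in Mbit/s.
zones_mbit = {"webapp1": 200, "webapp2": 150, "reports": 80, "staging": 20, "webapp3": 300}

load = {nic: 0 for nic in NICS}
assignment = {}
# Greedy: place the busiest zones first, each on the currently least-loaded NIC.
for zone, mbit in sorted(zones_mbit.items(), key=lambda kv: kv[1], reverse=True):
    nic = min(load, key=load.get)
    assignment[zone] = nic
    load[nic] += mbit

for zone, nic in sorted(assignment.items()):
    print(f"{zone:10s} -> {nic} (interface now at {load[nic]} Mbit/s)")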
 
Other than separating the roles of servers and making it easier to diagnose and troubleshoot issues? Or, if one system gets compromised, you aren't at the mercy of an attacker finding everything else on that box? Besides, you should not expose your DC to the internet for any reason.

A VMware/virtual system may not be the best idea in this case; I don't see the justification for so few machines/server roles. Individual servers would be a far better choice in this instance, unless you see yourself adding 4-6 servers within 6 months...

Yes...

but I'm gonna go ahead and throw it out there that this is probably for his house.

Yell at me if I'm wrong.
 
The one thing I don't like about the VM idea is that if the mobo goes out, the RAM dies, the HD dies, or the PSU goes up in smoke, all of those systems are now down, versus if they were all on separate machines... that is when cost matters and all those savings go out the window.
 
But all you need to do is copy them to a new system and you're back up in minutes.
 

Bingo!

Also, with virtual machines you can take a snapshot before upgrading any software. If it works, remove the snapshot later; if it borks, just revert to what it was like before.
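
A minimal sketch of that snapshot-then-revert workflow, assuming the vmrun command-line tool that some VMware products ship; the path and snapshot name are hypothetical, and exact flags vary by product and version, so check your own docs:

import subprocess

VMX = "/vmstore/mailserver/mailserver.vmx"   # made-up path
SNAP = "pre-upgrade"

def vmrun(*args):
    subprocess.run(["vmrun", *args], check=True)

def do_the_upgrade():
    # Placeholder: run the real software upgrade inside the guest here.
    return True

vmrun("snapshot", VMX, SNAP)              # safety snapshot first
if do_the_upgrade():
    vmrun("deleteSnapshot", VMX, SNAP)    # upgrade fine: drop the snapshot later
else:
    vmrun("revertToSnapshot", VMX, SNAP)  # borked: roll back to the pre-upgrade state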
 
To just throw in my experience here...

I run VMware Server on this computer:
Asus NCCH-DL -- 2x Intel Xeon 3.2 (SL7TD) -- 4x 512 MB PC3200 -- ATI Radeon 7000 -- Windows Server 2003 Standard SP2

It is running three virtual machines: two Win2k3 domain controllers and one Win2k3 Exchange 2003 server.

While memory usage is quite high on the server hosting these virtual machines, it hasn't degraded performance.

I still use the physical server as my file server as well. It works very well for what I am doing :)

If it continues to work well, I am going to join my workstations to the domain too, but I really didn't want to do that until I knew everything was 100% stable. I also don't really want to add the physical host to the domain, since both domain controllers are running on it. Maybe that's something I'll consider if I add a third domain controller running on a physical machine...
 
I have one host running four Win2003 virtuals, and the VMware host (and application) has over 500 days of uptime (the host server runs Linux). There have not been any problems with it, hence it has not been upgraded from 1.0, nor have the virtuals been moved to Virtual Infrastructure. Having such a stable first release of a free application shows just how well their products work.
 
This is becoming a very interesting thread.

I am actually in the process of setting up a server to run virtual workstations at one of my client sites. This will be my first production-type setup, and I am very anxious to see how it will go. Up to this point I have just been running virtual machines in the lab for testing and such.

My current lab "server" is an HP XW6200 with dual 3.4 GHz Xeons and 8 GB of RAM running Windows 2003 x64. I am in the process of loading it up with all different virtual servers to have an entire network on this one physical PC. The best part is that I can have it in my basement and control everything using the VMware console on a laptop. Another great thing about the VMware setup is superior portability: I can work on a machine image on my VMware server at home to get it perfected, then put it on a CD/DVD/USB HD and bring it to any other server running VMware and load it up.

Actually, I am currently using this setup to work on a desktop build. It makes the process very easy: I just provision two identical VMs and do a manual configuration on the donor, then test the imaging process on the other. If something doesn't work out, I revert to the first snapshot and am on my way.
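
A minimal sketch of that "copy the image anywhere" step, assuming the guest lives in a plain directory; the paths are made up:

import shutil
from pathlib import Path

# A VMware guest is essentially a directory of files (the .vmx config plus the
# virtual disks), so moving an image is mostly a copy job.
src = Path("/home/lab/vms/win2k3-template")
dst = Path("/mnt/usbdrive/win2k3-template")

# Copy the whole guest directory; skip lock files a running guest leaves behind.
shutil.copytree(src, dst, ignore=shutil.ignore_patterns("*.lck", "*.lock"))

size_gb = sum(f.stat().st_size for f in dst.rglob("*") if f.is_file()) / 2**30
print(f"Copied ~{size_gb:.1f} GB to {dst}")
# On the target host, open the copied .vmx in the VMware console and power it on;
# VMware will usually ask whether the guest was moved or copied.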
 