I've been trying to figure out how I want to do these new Yonah virtual machine servers. The hard drives, unsurprisingly, are going to be the most difficult decision out of all of them. My main concerns are speed, the ability to easily back up the virtual machines (including configuration files), and good use of the hard drive space.
My first idea was to get 8 drives and build two RAID 10 arrays. This is going to be costly unless I go with outdated (and out of warranty!) drives. For speed, this is probably the best I can do. For backing up, I could rsync/scp to the file server; easy enough. The problem I see with this, excluding cost, is that it isn't an efficient use of the hard drive space. Even if I get ancient 250 GB hard drives, each four-drive array still gives me a massive 500 GB of usable space. My current virtual machines take up a whopping 146 GB, including the over-sized 64 GB drive for the Windows 7 machine. I could easily get this below 75 GB for my current machines and then distribute that between three servers. That leaves me with close to 1 TB of space that I can't easily use. I don't like it, for that exact reason.
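The space math above works out quickly on paper (a rough sketch using the drive count and sizes mentioned above):

```shell
# Rough usable-space arithmetic for the 8-drive RAID 10 plan.
drives=8          # total drives across both arrays
size_gb=250       # per-drive capacity
used_gb=75        # trimmed-down footprint of the current VMs

# RAID 10 mirrors every drive, so usable space is half the raw total.
usable_gb=$(( drives * size_gb / 2 ))
leftover_gb=$(( usable_gb - used_gb ))

echo "usable: ${usable_gb} GB, leftover: ${leftover_gb} GB"
```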
My more recent (and, I think, more clever) idea is to create a SAN using iSCSI targets. I don't mean the full Fibre Channel, insanely expensive stuff (I wish!), but instead cheaper technology: a LAN cable. This would give me roughly 100 MB/sec of throughput for a virtual machine, which I see as more than enough. If I wanted to change the setup and go with something faster, since I have 2 gigabit fiber cards laying around, I could easily upgrade if I can get a switch. Backups are just as simple, to different drives. This also solves the problem of "misusing" hard drive space, as I can slap in older drives for the OS on the virtual machine servers. I could assign 100 GB to each server and expand later (I hope) if needed. If I go this route, I need to re-think my current setup and change how the drives are used. I also need to research how these things work. If targets are treated like local disks (meaning you format and mount them like local disks), I should be able to do this easily.
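From what I've read so far, the setup would look roughly like this. This is only a sketch, assuming a Linux storage server with targetcli and a client with open-iscsi; every name, path, and IP below is a made-up example, and I've left out ACLs and authentication entirely:

```shell
# On the storage server: export a backing file as an iSCSI target.
# (names, paths, sizes, and IQNs are all example values)
targetcli /backstores/fileio create vm-disk1 /srv/iscsi/vm-disk1.img 100G
targetcli /iscsi create iqn.2011-01.lan.example:vm-disk1
targetcli /iscsi/iqn.2011-01.lan.example:vm-disk1/tpg1/luns create /backstores/fileio/vm-disk1

# On the VM server: discover and log in with open-iscsi. The LUN then
# shows up as a normal block device (e.g. /dev/sdb).
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2011-01.lan.example:vm-disk1 -p 192.168.1.10 --login

# From here it really is treated like a local disk:
mkfs.ext4 /dev/sdb
mount /dev/sdb /var/lib/vm-storage
```

So the answer to my question above appears to be yes: once logged in, the target is just another block device.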
If I go with my current idea, I could change the 2 TB Hitachi drives, which are currently in a RAID 10 array, to RAID 5/6 and use that how I've been using my current array. This would free up my 1 TB drives for a RAID 10 array: six drives for the live array, plus a hot spare. After I partition off the iSCSI targets, the rest could be used for storage or backups. This would also let me easily expand the storage array by simply adding more 2 TB drives. On the down side, this is going to take a serious amount of my time in research and testing. Not to mention, I'm going to want to start the server's OS install over. I've got a lot of crap on it that I don't need, and it would take far longer to remove it than to start over.
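The rebuilt arrays would look something like this with mdadm (device names are examples only; I'd have to check which letters the drives actually land on):

```shell
# 2 TB Hitachi drives become a RAID 6 array for bulk storage/backups.
# RAID 6 can grow later: mdadm --add a new drive, then mdadm --grow.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]

# 1 TB drives become the fast RAID 10 array: six live drives plus a hot spare.
mdadm --create /dev/md1 --level=10 --raid-devices=6 --spare-devices=1 /dev/sd[f-l]
```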
Finally, I think I've decided on VirtualBox for my hypervisor. I haven't mentioned much about wanting to switch, but I've been hitting constant issues with libvirtd. I had issues with the Fedora 14 install hanging and found that it came down to how much memory the virtual machine had. If it was under 1024 MB, the script simply crashed and nothing happened. Simple, I thought, I'll just increase the memory to... oh, that's interesting, I got an error changing the memory of the virtual machine. I looked up the message and it's a known issue, great. The only way to solve it is to either delete the virtual machine and create a new one, or to change the configuration of the virtual machine and bounce libvirtd. I've been lazy with this and have done neither. So, I decided to start looking around for a new hypervisor. I was thinking of ESXi or XenServer for the Yonah servers, but I have issues with both (ESXi is lacking in features for the free version). So, that leaves me with a non-bare-metal hypervisor. VMware Server is a joke on Linux with newer operating systems (the web front-end crashes all the time). Someone mentioned VirtualBox and I didn't think much of it until I started looking around. The features it has for what it costs (nothing) are incredible. For example, you can straight up pass PCI devices to a virtual machine. This should allow you to pass a video card and sound card through and run an HTPC in a completely virtual environment. I tried this out on one of my Yonah servers, but it doesn't have the proper chipset to do this (has to be ICH9 or newer). I'd love to try this out.
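For the curious, the passthrough is done with VBoxManage. A sketch of what I tried; the VM name and both PCI addresses are made-up examples, and this needs an IOMMU (VT-d) plus VirtualBox's experimental PCI passthrough support on a Linux host:

```shell
# Find the host PCI addresses of the video and sound cards.
lspci | grep -i -e vga -e audio

# Attach host device 02:00.0 to the guest at virtual slot 01:05.0.
# ("HTPC" and the addresses are example values)
VBoxManage modifyvm "HTPC" --pciattach 02:00.0@01:05.0

# And to detach it again:
VBoxManage modifyvm "HTPC" --pcidetach 02:00.0
```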
-------
So, I need to:
1) Research more on SANs, iSCSI targets and how to use them.
2) Test out iSCSI targets on the current server or on the virtual machine servers to see how they work.
3) Get any information out of the virtual machines before switching hypervisors.
4) Change the server's storage drives, if that needs changing.
5) Format the server and start over.
6) Configure and set up VirtualBox on all three servers.
7) Buy more 2 TB drives.