emboss said:
Note that it's not so much CPU load as bus load. It's much more obvious with PCI (as opposed to PCIe or faster versions of PCI) but software RAID1 can really hammer your write rates through the network.
Correct, but anybody who uses SATA or GbE controllers that sit on the 32bit/33MHz PCI bus these days is insane.
Even an $80 A8V5X has four SATA ports and a GbE port in the southbridge.
It is correct to say that if you do RAID-1 and you are stuck with the 32bit/33MHz PCI bus, then a hardware controller is better, because it only puts the net bandwidth on the PCI bus: the host sends the data across once and the controller duplicates it, instead of the host writing it once per mirror. But in 2006 nobody has to do that anymore.
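To make the bus math concrete, here is a rough back-of-the-envelope sketch. The 133 MB/s PCI peak and the ~90 MB/s GbE stream are round illustrative assumptions, not measurements from the page:

```python
# Rough bus-load arithmetic for a RAID-1 write stream arriving over GbE.
# All figures are illustrative assumptions, not measurements.

PCI_32_33_PEAK_MBPS = 133.0   # theoretical peak of a 32bit/33MHz PCI bus
GBE_STREAM_MBPS     = 90.0    # roughly what one GbE stream delivers
MIRRORS             = 2       # a plain RAID-1 pair

# Software RAID-1: the host writes the same data to every mirror,
# so each byte crosses the bus once per mirror.
sw_bus_load = GBE_STREAM_MBPS * MIRRORS

# Hardware RAID-1: the host hands the data to the controller once;
# the controller duplicates it behind its own bus.
hw_bus_load = GBE_STREAM_MBPS

for name, load in (("software RAID-1", sw_bus_load),
                   ("hardware RAID-1", hw_bus_load)):
    print(f"{name}: {load:.0f} MB/s on a {PCI_32_33_PEAK_MBPS:.0f} MB/s bus "
          f"({100 * load / PCI_32_33_PEAK_MBPS:.0f}% of peak)")
```

Software RAID-1 comes out above the bus's theoretical peak, hardware RAID-1 well under it. With the controllers in the southbridge, as on the board above, the disk traffic never touches that shared PCI bus in the first place, which is why the distinction stops mattering.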
Here are my results of pure software raid (not the onboard SATA raid junk):
http://forum.useless-microoptimizations.com/forum/raid.html
As you can see, on a simple NForce4 board such as the one above, I reach speeds, at low CPU load, that beat most hardware controllers.
And even streaming directly between the RAID-0 and the GbE I get:
- from network to disk: 17179863888 B (16.0 GB) in 186.21 s = 92260914 B/s (87.99 MB/s)
- from disk to network: 17179863888 B (16.0 GB) in 182.21 s = 94287551 B/s (89.92 MB/s)
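For anyone squinting at the units: the GB and MB/s figures in those lines are binary (2^30 and 2^20 bytes), which a quick check of the quoted numbers confirms:

```python
# Unit check on the figures quoted above: GB = 2**30 B, MB/s = 2**20 B/s.
print(17179863888 / 2**30)   # ~16.0  "GB"
print(92260914 / 2**20)      # ~87.99 "MB/s" (network to disk)
print(94287551 / 2**20)      # ~89.92 "MB/s" (disk to network)
```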
The pure disk numbers on the page speak for themselves.
Note that one downside of hardware RAID is that it's one more CPU that can crash ... ever since I lost power a week and a bit ago, my Compaq 5302 card has been locking up when both it and the graphics card are put under load. The usual result is a complete system lock, though sometimes it staggers on sans the hard drives. It's getting annoying enough that I'm considering buying a new card.
Software RAID such as the one I use has another huge advantage: you can plug in any disk, anywhere, or even use any file as a disk. For example, if a SATA disk in my RAID takes a dump, I can use a P-ATA disk as a spare. Or a USB disk. Or a block device on a network drive. Or my mp3 player.
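The post doesn't name the software, but assuming Linux md (mdadm), which behaves exactly as described, here is a hypothetical sketch of pressing a plain file into service as a spare; the device names, file path, and size are made-up examples:

```python
# Hypothetical sketch (assuming Linux md/mdadm, run as root): expose a plain
# file as a block device and add it to an existing array as a hot spare.
# /dev/md0, /dev/loop7, and /tmp/spare.img are made-up example names.
import subprocess

def run(*cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a sparse file big enough to stand in for a failed member.
run("dd", "if=/dev/zero", "of=/tmp/spare.img", "bs=1", "count=0", "seek=250G")

# Expose the file as a block device.
run("losetup", "/dev/loop7", "/tmp/spare.img")

# Hand it to the array; md rebuilds onto it like any other disk.
run("mdadm", "--add", "/dev/md0", "/dev/loop7")
```

A P-ATA, USB, or network block device spare works the same way: as far as md is concerned it's just another block device handed to `mdadm --add`.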
And hardware controllers are expensive, mainly because you need two: if the controller dies you need a compatible one to get at your disks again, and if you don't have a spare on the shelf the whole exercise is pointless.