
Question about software RAID controllers


Raistlin (Member, joined Nov 26, 2004, Illinois)
OK...
I'm not really sure about this and I could be totally wrong. Can Windows XP Pro configure hard drives as RAID 0 or RAID 0+1 without the use of a controller (built-in or PCI)? If so, what is the point of a (software) controller card, other than the ability to put the OS on the RAID array? Or is that not correct? Please help... this question is killing me.
Thanks
 
Well, if there were such a thing as a software controller card, it would be called a hardware controller card... Yes, Windows XP Pro and Windows Server 2003 can do software RAID: 0 and 0+1 in Windows XP Pro, and a bunch more levels in Windows Server 2003.

Other than that, I have never heard of a software controller card...

Also, software RAID lets you run a RAID setup on hardware that wouldn't normally allow it, e.g. an older board with no SATA or IDE RAID.
 
What I meant by a "software" controller card is things like on-board RAID devices or the cheap PCI controllers... they are all considered "software RAID", correct? As opposed to the cards with on-board processors and memory ("hardware RAID").
Am I right, or just totally off?
 
What you mean is software RAID, which can usually combine any block devices into a new block device that you create a filesystem on.

It's very common with Linux and FreeBSD. To my knowledge Windows XP can do 0 and 1 (not sure about 10), and you have to pay for some ninja Windows version to get software RAID 5.
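To make the "combine any block devices into a new block device" part concrete, here is a minimal Python sketch of the address arithmetic a RAID 0 (striping) layer does; the chunk size and disk count are made-up example values, not anything from a real implementation.

```python
# Minimal sketch of RAID 0 address mapping: a logical block number on the
# combined device is translated to (member disk, block on that disk).
# CHUNK_BLOCKS and NUM_DISKS are arbitrary example values.
CHUNK_BLOCKS = 128   # blocks per stripe chunk
NUM_DISKS = 2        # members of the RAID 0 set

def map_raid0(logical_block):
    chunk = logical_block // CHUNK_BLOCKS           # which chunk overall
    offset_in_chunk = logical_block % CHUNK_BLOCKS
    disk = chunk % NUM_DISKS                        # chunks rotate across disks
    disk_chunk = chunk // NUM_DISKS                 # chunk index on that disk
    return disk, disk_chunk * CHUNK_BLOCKS + offset_in_chunk

# Consecutive chunks land on alternating disks, which is where the
# striping speed-up comes from.
for lb in (0, 127, 128, 256):
    print(lb, "->", map_raid0(lb))
```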
 
It's more commonly called "fake RAID" or fakeraid (where the RAID code lives predominantly in the driver), to distinguish it from real hardware RAID and also from OS RAID (which is not tied to a particular controller). The main benefit is that yes, you can install the OS onto an array. Some fakeraid adapters actually have hardware acceleration for some operations, but these usually only provide a measurable performance boost for RAID 1.
 
With today's CPUs you absolutely don't need hardware acceleration for RAID 0 and 1 and variants thereof. The CPU overhead for those is totally in the noise of the filesystem operations.

Even with software RAID-5, when I benchmarked on a single Opteron at 2.9 GHz, the CPU load only went up by about what the filesystem code takes anyway.

Now, there are other advantages to hardware RAID controllers, but pure software RAID has its advantages, too. The only thing I don't use is that onboard SATA RAID stuff. But CPU usage is not an issue anymore if you have a high-clocked modern CPU and RAID 0/1 or a single RAID-5.
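For a sense of why the RAID-5 parity math is cheap, here is a rough Python sketch of the XOR parity involved in a full-stripe write; real software RAID does this with optimized kernel code, so this is only illustrative and the chunk data is made up.

```python
# Rough sketch of RAID 5 parity: the parity chunk is the byte-wise XOR of the
# data chunks in the same stripe. The operation is plain XOR, hence the low
# CPU cost on a modern processor.
def xor_parity(chunks):
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# Three data chunks of one stripe (example data); a fourth disk stores parity.
stripe = [bytes([0x11] * 8), bytes([0x22] * 8), bytes([0x44] * 8)]
parity = xor_parity(stripe)
print(parity.hex())  # XOR-ing parity with any two chunks recovers the third
```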
 
Note that it's not so much CPU load as bus load. It's much more obvious with plain PCI (as opposed to PCIe or faster versions of PCI), but software RAID 1 can really hammer your write rates over the network. For example, at one point I was messing around with a system with a gigabit Ethernet card and two drives in RAID 1, all on the same PCI bus. In this configuration, writing had a theoretical maximum speed of 44 MB/sec but rarely got above 25 MB/sec. Using fakeraid (which duplicated the IO ops on the card) boosted the write speed to close to 50 MB/sec (theoretical 66 MB/sec). Since the volume of data being moved through this server was measured in terabytes per day, this was most definitely noticed.
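As a rough sanity check on those figures, here is the back-of-the-envelope bus accounting I believe is behind them; the ~133 MB/sec figure for 32-bit/33 MHz PCI and the way the passes over the bus are counted are my assumptions, not something stated in the post above.

```python
# Back-of-the-envelope PCI accounting for a network write to a RAID 1 pair,
# assuming everything shares one 32-bit/33 MHz PCI bus (~133 MB/s theoretical).
PCI_BW = 133.0  # MB/s

# Pure software RAID 1: each byte crosses the bus once coming in from the NIC,
# then once per mirror drive on the way out -> 3 passes per byte written.
software_raid1_ceiling = PCI_BW / 3   # ~44 MB/s

# Fakeraid with the controller duplicating the write: NIC -> memory once,
# memory -> controller once -> 2 passes per byte written.
fakeraid_ceiling = PCI_BW / 2         # ~66 MB/s

print(round(software_raid1_ceiling), round(fakeraid_ceiling))  # 44 66
```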

But I completely agree with you that each type has its own niche. Software RAID offers cheapness and some neat features (such as RAID-Z with ZFS); fakeraid gives you multi-platform support (e.g. dual-booting between Windows and Linux while sharing the same RAID storage) and a possible performance boost, especially in RAID 1 mode; and hardware RAID gives you "enterprise" features (such as fail-over redundancy and BBUs) and a performance boost for many-drive RAID 5/6 arrays.

Note one downside with hardware RAID is that it's one more CPU that can crash ... ever since I lost power a week and a bit ago, my Compaq 5302 card has been locking up when both it and the graphics card are put under load. The usual result is a complete system lock, though sometimes it staggers on sans the hard drives. It's getting to the annoyance point where I'm considering buying a new card.
 
emboss said:
Note that it's not so much CPU load as bus load. It's much more obvious with plain PCI (as opposed to PCIe or faster versions of PCI), but software RAID 1 can really hammer your write rates over the network.

Correct, but anybody who uses SATA or GbE controllers that sit on the 32-bit/33 MHz PCI bus these days is insane.

Even an $80 A8V5X has four SATA ports and one GbE port in the southbridge.

It is correct to say that if you do RAID-1 and you are forced onto the 32-bit/33 MHz PCI bus, then a hardware controller is better, because it only puts the net bandwidth's worth of load on the PCI bus. But in 2006 nobody has to do that anymore.

Here are my results for pure software RAID (not the onboard SATA RAID junk):
http://forum.useless-microoptimizations.com/forum/raid.html

As you can see, on a simple NForce4 board such as the above I reach speeds (and low CPU load) that beat most hardware controllers.

And even going directly from RAID-0 to the GbE I get:
- from network to disk: 17179863888 B 16.0 GB 186.21 s 92260914 B/s 87.99 MB/s
- from disk to network: 17179863888 B 16.0 GB 182.21 s 94287551 B/s 89.92 MB/s

The pure disk numbers on the page speak for themselves.

emboss said:
Note one downside with hardware RAID is that it's one more CPU that can crash ... ever since I lost power a week and a bit ago, my Compaq 5302 card has been locking up when both it and the graphics card are put under load. The usual result is a complete system lock, though sometimes it staggers on sans the hard drives. It's getting to the annoyance point where I'm considering buying a new card.

Software RAID such as what I use has another huge advantage: you can plug in any disk, anywhere, or even use any file as a disk. For example, if a disk in my SATA RAID takes a dump, I can use a P-ATA disk as a spare. Or a USB disk. Or a block device on a network drive. Or my MP3 player.

And the hardware controllers are expensive, mainly because you need two. If you don't have a spare one on the shelf, the whole exercise is pointless.
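To illustrate the "any block device, even a file" point on Linux md, here is a hedged sketch; the file path, size, and array name are made-up examples, and while losetup and mdadm do work this way, treat the exact invocations as untested.

```python
# Hedged sketch (Linux md): turn an ordinary file into a block device with
# losetup and add it to an existing md array as a spare. Paths, sizes, and
# the array name are hypothetical example values.
import subprocess

SPARE_FILE = "/var/tmp/spare.img"   # hypothetical backing file
ARRAY = "/dev/md0"                  # hypothetical existing md array

def run(*cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a sparse backing file roughly the size of the existing members.
run("truncate", "-s", "80G", SPARE_FILE)

# Attach the file to the first free loop device and capture that device's name.
loopdev = subprocess.run(["losetup", "-f", "--show", SPARE_FILE],
                         check=True, capture_output=True, text=True).stdout.strip()

# Add the loop device to the array; md treats it like any other spare.
run("mdadm", ARRAY, "--add", loopdev)
```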
 
uOpt said:
Correct, but anybody who uses SATA or GbE controllers that sit on the 32-bit/33 MHz PCI bus these days is insane.

Even an $80 A8V5X has four SATA ports and one GbE port in the southbridge.

It is correct to say that if you do RAID-1 and you are forced onto the 32-bit/33 MHz PCI bus, then a hardware controller is better, because it only puts the net bandwidth's worth of load on the PCI bus. But in 2006 nobody has to do that anymore.

Frankly, it's not insane to put high-bandwidth devices on legacy buses, nor is it always avoidable. Try working for a cheap company sometime, or doing volunteer work for nonprofits and/or charities. There's quite a bit of legacy hardware out there that still gets used heavily and cannot be replaced for a variety of reasons. In many of these cases it's not a bad idea to put a faster NIC and SATA controller in a P3 or P4 system, knowing full well they won't run at 100% efficiency, so that gains can be had for minimal investment.

uOpt said:
And the hardware controllers are expensive, mainly because you need two. If you don't have a spare one on the shelf, the whole exercise is pointless.

A spare controller is not a requirement; a backup of the system's data is. Though, granted, it depends on administrator philosophy and policy.
 
Why is the on-board SATA RAID "junk"? Wouldn't it depend on the chipset or the on-board controller? The reason I ask is that I was under the impression that, for example, the Promise "on-board" chip (i.e. the 3112r) was the same controller as the PCI SATA card. Is this not correct, or are there other drawbacks?
 
Raistlin said:
Why is the on-board SATA RAID "junk"? Wouldn't it depend on the chipset or the on-board controller? The reason I ask is that I was under the impression that, for example, the Promise "on-board" chip (i.e. the 3112r) was the same controller as the PCI SATA card. Is this not correct, or are there other drawbacks?

Yeah, same thing. Junk for any redundant RAID, for the reasons mentioned above.
 