
RAID 5 Questions


Darur

Member
Joined
Aug 15, 2006
Please pardon my noob. :)

Recently I've had some horrible luck with losing data, and I've been considering RAID for a while now for my main rig. I know RAID 0, 1, and 0+1 are the norms for non-server RAID, but I've been reading up on RAID 5, and it seems perfect for what I want with its mix of speed and data protection. The only major downside I see is that it's going to be expensive.

Right now I'm considering 3x 500GB Seagate 7200.11 drives for my array, with plans to add 2 more. I've read up on setting stripe sizes and formatting the disks, and from what I've heard a 3- or 5-drive array is best.

I'm not entirely sure I'm going to do RAID yet, but after reading through the threads and the stickies and doing some research, I still have a few questions.

Aside from the performance hit on small-file writes and the cost, is there some reason RAID 5 would be bad for general use? (Gaming, photo/video/audio editing, etc.)

Do I need a RAID controller? The best option seems to be the Areca ARC-1220, which would leave headroom for upgrading, but I'm not sure it would really be beneficial. My motherboard supports RAID 5, but I've heard bad things about using built-in RAID controllers.

Is there anything else I should know?

Thanks guys for any input!
 
Your best bet would be to modify XP so it can do software RAID 5. You don't need a dedicated controller, and you can span the array over different controllers, but it is a bit slower.
 
First off, RAID 10 (1+0) is very common on servers. RAID 0 is more of an enthusiast thing; most businesses would crap a brick if they went with RAID 0 and lost data, lol.

I have a 1.5TB RAID 5 array (1TB usable, see sig). I am still looking for benchmarking software for Linux to test this thing out, but I do run all my games from that array and I've seen no slowdown. In fact, the theory behind RAID 5 is that you should see read speeds similar to RAID 0, since you are reading from up to three drives at the same time. It's the write speeds that take a hit, and even that really isn't much. I'm using software RAID and my built-in SATA ports, and I see 2-12% CPU usage when doing large writes. On a dual-core system that is nothing. If you have the ports, don't waste the money on a hardware RAID controller.
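To make the parity idea concrete, here's a rough Python sketch of a single RAID 5 stripe on a 3-drive array. The chunk size and layout here are simplified assumptions for illustration, not how any particular controller or driver actually lays things out:

```python
# A simplified model of one RAID 5 stripe on a 3-drive array.
# Chunk size and layout are made-up illustrations, not how a real
# controller or the Linux md driver organizes data.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks; this is all 'parity' is."""
    return bytes(x ^ y for x, y in zip(a, b))

def build_stripe(data_chunks):
    """N-1 data chunks plus one parity chunk make a stripe.

    Reads can pull the data chunks from several drives at once
    (RAID 0-like); every write must also compute and store the
    parity chunk, which is where the write overhead comes from.
    """
    parity = data_chunks[0]
    for chunk in data_chunks[1:]:
        parity = xor_blocks(parity, chunk)
    return data_chunks + [parity]  # real RAID 5 rotates parity per stripe

def rebuild_missing(surviving_chunks):
    """XOR of everything left reconstructs a single lost chunk."""
    out = surviving_chunks[0]
    for chunk in surviving_chunks[1:]:
        out = xor_blocks(out, chunk)
    return out

# Two data chunks + parity across three "drives".
stripe = build_stripe([b"AAAA", b"BBBB"])
# Pretend drive 0 died: recover its chunk from the other two.
assert rebuild_missing(stripe[1:]) == stripe[0]
```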

As for how many drives: three is the minimum for RAID 5. Your reads on larger files may go up with more drives, and your writes could go down; I'm not too sure how it scales. Five drives on modern hardware, all connected to the same controller (onboard or not), should still be pretty good, with minimal change in speed from three. If you split the array across two controllers, then that could take a big hit.

If I missed anything I'll get back to you, I'm out of time to type :p
 
I definitely have the ports for the RAID arrays I'm looking at, but aside from using system resources, will I notice any significant drop in performance with RAID from the motherboard?

I'm considering buying the drives now and setting up a RAID array from the motherboard, then buying a controller and moving the drives to that. I know I'll have to rebuild the array completely and start from scratch, but other than that, would there be any major downsides?

I'm also a little confused now about read and write speeds. I've been reading a few more articles, and I'm wondering: is the hit to write speed on a RAID 5 array such that it's slower than RAID 0, or is it actually slower than just using a single drive? I'm not too worried if the latter is true, but I want to be sure before I do anything.

Thanks folks!
 
I am running RAID 5 on a LanParty NF4 Ultra-D with 3x 120GB Caviars. I had been running a RAID 0 array with two of the drives, but had a spare and added it in for some data protection. I can't tell the difference in speed, and benchmarks seem to support that.
 
I definitely have the ports for the RAID arrays I'm looking at, but aside from using system resources, will I notice any significant drop in performance with RAID from the motherboard?

If you have a good chipset, then I doubt you would see any difference vs. a dedicated card, unless that card offered hardware RAID.

I'm considering buying the drives now and setting up a RAID array from the motherboard, then buying a controller and moving the drives to that. I know I'll have to rebuild the array completely and start from scratch, but other than that, would there be any major downsides?

If you're using software RAID, then you shouldn't have to rebuild the array; just make sure you label the drives and know which one is which logically to the computer. If you have your OS on the array, though, you may need to rebuild it (in Windows, anyway).


I'm also a little confused now about read and write speeds. I've been reading a few more articles, and I'm wondering: is the hit to write speed on a RAID 5 array such that it's slower than RAID 0, or is it actually slower than just using a single drive? I'm not too worried if the latter is true, but I want to be sure before I do anything.

It would be a hit on the write speed only. Honestly, I don't see any difference between my single drive and my array. Really, when you think about it, any time lost to the CPU processing parity should be made up by the fact that you are striping data across multiple drives (just like RAID 0), so you are writing two or more times the amount of data at once vs. what a single drive could do.
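For what it's worth, the write hit people argue about mostly comes from small partial-stripe updates. Here's a minimal Python sketch of that read-modify-write cycle, with made-up function names rather than any real driver API:

```python
# Why small RAID 5 writes cost extra I/O: the read-modify-write
# cycle. Function names here are illustrative, not a real API.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    """Update one chunk without touching the other data drives.

    new parity = old parity XOR old data XOR new data, so one
    logical write becomes two reads plus two writes (4 I/Os).
    Large sequential writes dodge this by writing full stripes,
    which is why the penalty mostly shows up on small writes.
    """
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    return new_data, new_parity

# Stripe held data A and B with parity A^B; rewrite A as C.
a, b = b"AAAA", b"BBBB"
parity = xor_blocks(a, b)
new_a, new_parity = small_write(a, parity, b"CCCC")
assert new_parity == xor_blocks(new_a, b)  # parity stays consistent
```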

I think when most people talk about RAID 5 being slow, it's just old information passed down from the Pentium Pro days, or something is wrong with their setup.
 
If you're getting a RAID controller card, pony up and get hardware RAID. Cut out the software; it will only slow you down and could give you problems.
 
If you're getting a RAID controller card, pony up and get hardware RAID. Cut out the software; it will only slow you down and could give you problems.

Seriously, like I posted above, I am seeing no slowdowns or any issues to speak of with my software RAID.

Other factors could slow you down, like boards that use two SATA controller chips to provide extra ports. Research your current hardware to make sure you're not affected by this, but I have not seen any issues with my setup.
 
Software RAID 5 may have good reads, but writes will be slow.

Real hardware RAID cards designed for RAID 5 (or even 6 nowadays) have dedicated XOR engines and cache RAM for generating parity.

For instance, my main VMware server can do 140MB/s writes!

It may not be practical to buy a hardware RAID card with an XOR engine; if not, just stick to RAID 10 or 0+1 or whatever the onboard controller does.

Often SQL servers will run RAID 10 with four-disk sets and use SQL Server software to stripe the data across those arrays. That way the RAID sets are kept small, so multiple failures will most likely hit different arrays rather than nuking your DB.
 
"Software" Intel MATRIX RAID-5 will hit 100MB/s Writes on one of my NAS PC's with 4x 320GB in RAID-5, and my Hardware ARECA ARC-1210 RAID-5 will do 250MB/s Writes :drool: .

I've only been running the software Matrix RAID 5 NAS for a few days, and I hear that some people have reliability issues with it; it also obviously eats up some CPU cycles for the parity calculations (not a problem for my NAS ;) ). I can say my hardware RAID 5 has NEVER given me a problem over the past 3 or so years and dozens of OS reinstalls, and I even transplanted the entire array + card to a new PC!

I'd say anything around 100MB/s should be plenty for any type of "home" usage IMO (even over a Gigabit LAN). If you are extremely fanatic like me, or are really using this in a high-I/O, mission-critical environment, then hardware is the only way to fly IMO...
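A quick sanity check on that Gigabit figure, as a one-off Python calculation (line rate only; protocol overhead pushes real transfers lower):

```python
# Gigabit Ethernet line rate converted to MB/s; TCP/SMB overhead
# means real-world transfers come in below this ceiling.
GIGABIT_BITS_PER_SEC = 1_000_000_000
mb_per_sec = GIGABIT_BITS_PER_SEC / 8 / 1_000_000
print(f"Gigabit ceiling: {mb_per_sec:.0f} MB/s")  # -> 125 MB/s
```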

:cool:
 
Also, using onboard software RAID will cause you to lose any data when you decide to upgrade boards. Which kind of defeats the purpose of keeping that data around, doesn't it?

You might get lucky and have the same line of southbridge so the array moves to the new board, but your chances are a crapshoot at best.

I'm getting a 3ware card and creating a RAID 5 array solely for data, while my OS sits on a pair of Raptors in RAID 1 off of the software RAID on my mobo.

You really won't find many people that recommend software raid 5 on this board.
 
Also, using onboard software RAID will cause you to lose any data when you decide to upgrade boards. Which kind of defeats the purpose of keeping that data around, doesn't it?

Is this a Windows thing? RAID software should be hardware-neutral. Tell it the ports and drive order and the array will reassemble.
 
Matrix RAID will move from one board to another, but again, hardware is the right way to go. I have had my Matrix RAID BSOD because of the software, which forces it to rebuild itself; that takes about 12 hours, during which you can't really use the drives. I have not lost any data yet, so it works fine, but there are better ways to do it.
 
Hrmm, right now I'm thinking about the following setup:

3x 500GB drives in RAID 5 off my mobo's main SATA ports
OS and swap file on a 320GB drive on the second controller.

Once I have the money, I'll pick up an Areca ARC-1220 and then move the drives to that. I definitely want hardware RAID, but at the moment I can't quite afford it.

Would that be feasible/recommended?
 
Moving your array from one controller card to another is an iffy proposition at best. Chances are that the new card won't recognize the array.

If you're considering spending the money for the Areca 1220 to do RAID 5, you may just consider doing RAID 10 from the start, as the cost will be similar. From everything I've seen and read RAID 10 has read times equivalent to or better than RAID 5 and vastly better write times across the board.
 
I am running RAID 5 on a LanParty NF4 Ultra-D with 3x 120GB Caviars. I had been running a RAID 0 array with two of the drives, but had a spare and added it in for some data protection. I can't tell the difference in speed, and benchmarks seem to support that.

Are you running that on the nVidia ports or the Silicon Image controller?
 
Moving your array from one controller card to another is an iffy proposition at best. Chances are that the new card won't recognize the array.

If you're considering spending the money for the Areca 1220 to do RAID 5, you may just consider doing RAID 10 from the start, as the cost will be similar. From everything I've seen and read RAID 10 has read times equivalent to or better than RAID 5 and vastly better write times across the board.


Write times with an Areca and RAID 5 will be very good, over 200MB/s (4x 320GB perpendicular-recording Seagates), and RAID 5 will allow more overall storage. The reason to get the Areca is that it is faster and better. If you want RAID 10, stick with the onboard controller: it is just mirroring the RAID 0, so it does not have to compute parity. Now, an 8-drive RAID 10 would be sweet: the performance of 4 drives in RAID 0, but mirrored. To me, RAID 10 is for when you have lots of drives, money is not an issue, and you want an instant backup with no rebuilding (i.e., a server). For home, RAID 5 is great for securing data.


With a 4-drive RAID 10 you get the performance of a 2-drive RAID 0 (so if that performance is enough, just add 2 more drives and you have a mirrored RAID 0). A 4-drive RAID 5 will be faster by a good amount: using my southbridge, a 4-drive RAID 5 is about 50MB/s faster than a 4-drive RAID 10. With the Areca you could get even better RAID 5 performance. Using 3 drives in RAID 5 was a good bit slower for me; the fourth drive added 50MB/s.

4x 500GB drives in RAID 10 = 1TB usable
4x 500GB drives in RAID 5 = 1.5TB usable
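A small Python snippet working out the usable-capacity math behind those two lines, assuming equal-size drives and ignoring formatting overhead:

```python
# Usable-capacity math for RAID 5 vs. RAID 10 (sizes in raw GB;
# real formatted capacity comes out a bit lower).

def raid5_usable(drives: int, size_gb: int) -> int:
    """RAID 5 gives up one drive's worth of space to parity."""
    return (drives - 1) * size_gb

def raid10_usable(drives: int, size_gb: int) -> int:
    """RAID 10 mirrors everything, so half the raw space is usable.

    Needs an even drive count of at least four.
    """
    return (drives // 2) * size_gb

for n in (4, 6, 8):
    print(f"{n}x 500GB -> RAID 5: {raid5_usable(n, 500)}GB, "
          f"RAID 10: {raid10_usable(n, 500)}GB")
# 4x 500GB -> RAID 5: 1500GB, RAID 10: 1000GB (matches the post)
```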
 
Moving your array from one controller card to another is an iffy proposition at best. Chances are that the new card won't recognize the array.

If you're considering spending the money for the Areca 1220 to do RAID 5, you may just consider doing RAID 10 from the start, as the cost will be similar. From everything I've seen and read RAID 10 has read times equivalent to or better than RAID 5 and vastly better write times across the board.

I'm not terribly worried about having to rebuild the array. Hooking the drives directly to the card would be ideal, but I doubt I'll have filled a terabyte by the time I get the card, and even if I have, there's enough space on my other drives to move the important stuff off.

RAID 10 looks interesting, but it doesn't seem cost-efficient at all. I couldn't justify buying 4x 500GB drives and getting only 1TB. I don't mind a comfy balance between performance and space.

It looks like I'm going to go ahead and get the drives. Thanks for all the input folks!
 