ajrettke said:
I highly recommend not even touching RAID 5. If you're performance-oriented, DO NOT use RAID 5. I've run tests with 4 Ultra3 10k SCSI drives in a RAID 5 array (64 MB cache on the controller) and the writes are slower than a single 7200 RPM IDE drive. Now imagine RAID 5 with IDE... I did that, and the writes were 3 MB/s on a RocketRAID 404 controller.
RAID 0+1 and RAID 5 are for critical applications that cannot afford to go down or lose data. If you're just trying to avoid the "hassle" of reinstalling Windows because a drive dies (which is rare), you're going to regret it, because the writes are dismal and will crawl.
You can go ahead and think I'm wrong, spend a hell of a lot of cash, and find out you're left with a lot of HDDs, an expensive controller, and ****ty performance. RAID 0+1 on IDE will read/write at around 90/25 MB/s, which isn't bad... but that's 4 hard drives in your setup, which is annoying to say the least. RAID 5 is utterly worthless IMHO (except when cost and zero downtime are incredibly important... SERVERS).
What stripe sizes did you use?
Have you tried with smaller stripe sizes?
RAID-5 write performance is very sensitive to stripe size.
Let's say you have 4 HDDs in RAID-5 with a 16 KB stripe.
When you write 48 KB to the array, it writes 16 KB to 3 of the drives and then the XOR of all three 16 KB blocks to the 4th drive.
That makes 4 writes of 16 KB.
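That full-stripe case can be sketched in a few lines of Python. This is purely illustrative (byte-level XOR in Python, not how a controller actually works); the function name and 4-drive layout are my own assumptions:

```python
# Minimal sketch of a full-stripe RAID-5 write: with a 16 KB stripe on
# 4 drives, 48 KB of data fills 3 data stripes, and the parity stripe
# is the byte-wise XOR of those three.

STRIPE = 16 * 1024  # 16 KB stripe size

def full_stripe_write(data: bytes):
    """Split 48 KB into 3 data stripes and compute the XOR parity stripe."""
    assert len(data) == 3 * STRIPE
    stripes = [data[i * STRIPE:(i + 1) * STRIPE] for i in range(3)]
    parity = bytes(a ^ b ^ c for a, b, c in zip(*stripes))
    return stripes + [parity]  # 4 writes of 16 KB, one per drive

blocks = full_stripe_write(bytes(range(256)) * 192)  # 48 KB of test data
print(len(blocks), len(blocks[0]))  # 4 16384
```

Note that no reads are needed here: all three data blocks of the stripe group are in hand, so parity can be computed directly.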
Now let's say you write only 16 KB to the array. The controller cannot write it to any disk before it knows the XOR of all 3 data blocks for that stripe group.
So it has 2 choices:
1) Read the other 32 KB from 2 of the 3 remaining disks,
calculate the parity,
and then write the 16 KB of data to 1 disk and the 16 KB of parity to another.
This means 2 reads and 2 writes of 16 KB.
2) It can wait, hoping that the other 2 (or at least 1) of the 16 KB blocks in the same stripe group will get written as well. This is why the RAID cache is so important for a RAID-5 setup.
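Option 1 above can be sketched like this. The disk layout (a list of per-drive block lists, with drive 3 holding parity for the group) and the function names are hypothetical, just to make the 2-read / 2-write cost concrete:

```python
# Sketch of the slow path for a partial-stripe RAID-5 write: to update
# one 16 KB block, the controller reads the other two data blocks in the
# group, recomputes parity, then writes new data plus new parity.

STRIPE = 16 * 1024  # 16 KB stripe size

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(STRIPE)
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

def partial_stripe_write(disks, group, target, new_data):
    """Write one 16 KB block in a 4-drive RAID-5 group (disk 3 = parity)."""
    others = [disks[d][group] for d in range(3) if d != target]  # 2 reads
    parity = xor_blocks(new_data, *others)
    disks[target][group] = new_data   # write 1: the data block
    disks[3][group] = parity          # write 2: the parity block
    return 2, 2  # (reads, writes) -- the 2r + 2w cost described above
```

For example, with all-zero disks, writing a block of 0x01 bytes to drive 0 leaves the parity drive holding the same 0x01 pattern, since the other two data blocks are zero.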
So if you do a single-stripe (16 KB) write, it costs 2w + 2r.
If you do 3x16 KB in the same group, it costs 4w.
But if you do 3x16 KB with a big time difference between them (so the cache can't batch them), it costs 3(2w + 2r) = 6w + 6r. Now that is slow.
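The three cost scenarios work out as follows; this is just the arithmetic from the post, tallied per 16 KB operation on a 4-drive RAID-5 with 16 KB stripes:

```python
# I/O cost tally for a 4-drive RAID-5 with 16 KB stripes (arithmetic only).

# Single-stripe write (slow path): 2 reads + 2 writes.
single = {"reads": 2, "writes": 2}

# Three stripes written together (full group): 3 data + 1 parity write, no reads.
batched = {"reads": 0, "writes": 4}

# Three stripes written far apart (cache can't batch them): 3 x (2r + 2w).
scattered = {"reads": 3 * single["reads"], "writes": 3 * single["writes"]}

print(scattered)  # {'reads': 6, 'writes': 6}
```

So the scattered case costs 12 disk operations for the same 48 KB that the batched case handles with 4, which is exactly why a big controller cache pays off.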
So if you decrease the stripe size:
1) there is a bigger chance that you will write a whole group at once instead of only 1 stripe, but if you increase it,
2) the cache will be able to hold fewer stripe groups, so the controller will be forced to use the slow way of writing more often.
Of course, there are also reasons for making the stripe size bigger, which I haven't even mentioned.
I hope this explains why some hardware RAID cards have 512 MB of cache, and why stripe size is so important for RAID-5.