
2 or 4 perps in matrix raid 0?


v8440 (Member, joined Sep 5, 2001)
Hi all,

In fleshing out the details of my upcoming new machine, I'm now wondering whether I should get four 320 GB perpendicular-recording drives (16 MB cache each) and put them in RAID 0, or just two. My initial understanding was that four drives would be considerably quicker than two, but I've since read some things indicating that random seek time would suffer with four drives. However, an article referenced elsewhere in this forum seemed to indicate that while there would be some random seek penalty, it could be minimized by increasing the data chunk size (I forget the correct term for that).

Meanwhile, the author of the sticky about running these perp drives in Matrix RAID 0 using the fastest part of the drive mentioned that he found performance was best with a chunk size of 128 KB. This would seem to indicate that the hit against random seek time would be minimal. Is this correct?

If not, should I maybe look into setting up two RAID 0 arrays with two drives each instead of one array with four?
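For anyone who wants to sanity-check the stripe-size reasoning, here's a rough back-of-envelope sketch (the request sizes and the 128 KB stripe are just illustrative assumptions, not measurements of any particular setup): with a large stripe, a small random read usually lands entirely on one disk, so the array doesn't pay a multi-disk seek, while big sequential reads still span every drive.

# Rough sketch: how many drives a single request touches for a given stripe size.
# The request sizes and the 128 KB stripe are assumptions for illustration only.
def drives_touched(request_bytes, stripe_bytes, num_drives):
    # A request that fits inside one stripe hits one disk; larger requests
    # span ceil(request / stripe) disks, capped at the number of drives.
    spans = -(-request_bytes // stripe_bytes)  # ceiling division
    return min(spans, num_drives)

stripe = 128 * 1024          # 128 KB stripe, as in the sticky
for size_kb in (4, 64, 128, 512, 4096):
    n = drives_touched(size_kb * 1024, stripe, 4)
    print(f"{size_kb:>5} KB request -> {n} drive(s)")

With those assumptions, small random reads keep hitting a single drive regardless of whether the array has 2 or 4 members, which is the intuition behind the "minimal seek hit with a big stripe" claim.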
 
Hmm... I'm not very knowledgeable about RAID (especially the newer kinds, i.e. Matrix RAID), but I can see two ways of thinking about it: if you put all 4 drives in one RAID, you get a higher chance of a drive failure and a higher seek time, since there are more disks to check, but you get the bonus of less CPU time being used to keep the RAID alive.

I guess you get the exact opposite with the 2x2 config, most noticeably increased CPU usage (I would imagine, having to control two arrays) and lower seek time, because only 2 of the drives need to be checked instead of 4, etc.

data chunk size (I forget the correct term for that).
Think you're referring to cluster size.

This would seem to indicate that the hit against random seek time would be minimal.
I would have thought it's more like "there is minimal performance loss, even with the higher seek time"... but as I say, I'm no whizz with RAID :)
 
It's called stripe size for RAID but is conceptually the same as cluster size I believe.

I'm actually interested in this topic atm. If 4 drives aren't much different than 2, a 4-drive RAID10/RAID5 would sound ideal.
 
My opinion is that 4 HDDs will give you more sequential performance, and you will not see a noticeable hit to access times.
 
4 HDDs will give almost twice the average transfer speed (STR), "but" with "slightly" worse access times compared to 2 HDDs, as Travis said.
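As a rough sanity check of the "almost twice the STR" figure, here's a back-of-envelope estimate (the per-drive numbers below are assumptions, not benchmarks of any particular model):

# Back-of-envelope RAID 0 scaling estimate. Per-drive figures are assumed,
# not measured: ~75 MB/s sustained transfer, ~13 ms average access time.
single_str_mb_s = 75.0
single_access_ms = 13.0

for drives in (2, 4):
    est_str = single_str_mb_s * drives   # sequential rate scales with spindle count
    # Access time does not improve; in practice it tends to get slightly worse,
    # since a striped request waits on the slowest of the disks involved.
    print(f"{drives} drives: ~{est_str:.0f} MB/s STR, access time ~{single_access_ms} ms or a bit worse")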

Will these differences affect overall performance? Only you can answer that, since it depends heavily on your working/usage style on the rig. A high-level benchmark like PCMark will give you a hint as to which arrangement/sweet spot suits you.

A real-life case I've spotted so far: on an equal RAID 0 slice, a heavily fragmented 4-HDD RAID 0 will be beaten by a 2-HDD RAID 0 that is routinely defragmented.
 
A 4-drive RAID 5 will have a higher transfer rate than a 2-drive RAID 0, and you also get redundancy. So when one of your 4 drives fails, you'll still be OK with no data loss because of the parity. You just RMA the dead drive and pop the replacement back into your array when it comes in.

The more spindles you have, the higher your access time will be, but the difference is minimal. I personally will be running a RAID 0 for the OS and a 4-drive RAID 5 for storage.
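If it helps to see the capacity/redundancy trade-off as numbers, here's a quick sketch using the 320 GB drives from the first post (nominal figures only, before formatting overhead):

# Usable capacity for RAID 0 vs RAID 5, assuming 320 GB drives.
drive_gb = 320

def raid0_capacity(n):
    return n * drive_gb            # all space usable, no redundancy

def raid5_capacity(n):
    return (n - 1) * drive_gb      # one drive's worth of space goes to parity

print("2-drive RAID 0:", raid0_capacity(2), "GB usable, no drive failure tolerated")
print("4-drive RAID 5:", raid5_capacity(4), "GB usable, survives one drive failure")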
 
OK, but did I understand correctly that the Matrix RAID controller will allow you to set up a RAID 0 array and a RAID 5 or 10 array on the same 4 disks, AND will automatically copy the contents of the RAID 0 array to the 5 or 10? If so, wouldn't it be faster to have the 4-disk RAID 0 array with its very high STR, while having the protection of being able to restore from the 5 or 10 array?
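For what it's worth, here's how the split could look if you carved a fast RAID 0 slice off the front of the four disks and used the rest for RAID 5. The 40 GB-per-drive slice is just an example figure, and I'm not assuming the controller copies anything between the two volumes automatically; that part of the question still needs an answer.

# Matrix-RAID-style split of 4 x 320 GB drives into two volumes.
# The 40 GB-per-drive RAID 0 slice is a made-up example figure.
drive_gb = 320
drives = 4
raid0_slice_gb = 40                                  # fast outer portion of each drive

raid0_volume = raid0_slice_gb * drives               # striped, no redundancy
raid5_volume = (drive_gb - raid0_slice_gb) * (drives - 1)  # remainder, minus one drive's worth of parity

print("RAID 0 volume:", raid0_volume, "GB")          # 160 GB
print("RAID 5 volume:", raid5_volume, "GB")          # 840 GB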
 