
Advice on new Matrix RAID setup -- 4 x 640GB WD Caviar Blacks -- Good choice?

I think there are some semantics that need to be worked out. When I say RAID10 I'm referring to 4 drives (or 4 slices from 4 different drives) grouped into 2 pairs. Each pair is in RAID0 (for performance), and the 2 pairs are mirrored (for redundancy). RAID10, RAID0+1, and RAID1+0 are the same thing as far as I know... someone correct me if I'm wrong here.

So, if you have 4x640GB Blacks...

Go into the Intel RAID BIOS (Ctrl-I during boot) and first create a 560GB RAID0 array. Then create a RAID10 array with the remainder (2TB raw, which mirrors down to about 1TB usable).

While installing the OS you can create all your partitions on the RAID0 array... make sure your first partition is the OS partition for the best performance.
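For the curious, the arithmetic behind those volume sizes works out like this (a quick Python sketch; I'm assuming the controller slices each drive evenly, which is how Matrix RAID splits volumes as far as I know):

```python
# Sketch of how 4 x 640GB drives get sliced into a RAID0 volume plus
# a RAID10 volume. Sizes are approximate: this ignores the GB-vs-GiB
# difference and any controller overhead.

DRIVES = 4
DRIVE_GB = 640

raid0_volume_gb = 560
raid0_slice_gb = raid0_volume_gb / DRIVES           # 140GB striped from each drive

remainder_per_drive_gb = DRIVE_GB - raid0_slice_gb  # 500GB left on each drive
raid10_raw_gb = remainder_per_drive_gb * DRIVES     # 2000GB of raw space
raid10_usable_gb = raid10_raw_gb / 2                # mirroring halves usable capacity

print(f"RAID0 slice per drive:  {raid0_slice_gb:.0f} GB")
print(f"RAID10 raw remainder:   {raid10_raw_gb:.0f} GB")
print(f"RAID10 usable capacity: {raid10_usable_gb:.0f} GB")
```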
Thank you for the super-clear instructions! . . . I'll probably refer back to that when I'm setting up the arrays. . .

Concerning RAID 0+1 versus RAID 1+0, I believe there is a difference. I put "raid 0+1 vs 1+0" into Google and found this article at the top:

http://www.pcguide.com/ref/hdd/perf/raid/levels/multXY-c.html

Here are the key statements about the topic. (The author uses an example of working with 10 disks):

-----------------------------------------------

"RAID 0+1 = RAID 0, then RAID 1: Divide the ten disks into two sets of five. Turn each set into a RAID 0 array containing five disks, then mirror the two arrays. (Sometimes called a "mirror of stripes".)

RAID 1+0 = RAID 1, then RAID 0: Divide the ten disks into five sets of two. Turn each set into a RAID 1 array, then stripe across the five mirrored sets. (A "stripe of mirrors").

In many respects, there is no difference between them: there is no impact on drive requirements, capacity, storage efficiency, and importantly, not much impact on performance. The big difference comes into play when we look at fault tolerance.


RAID 0+1: We stripe together drives 1, 2, 3, 4 and 5 into RAID 0 stripe set "A", and drives 6, 7, 8, 9 and 10 into RAID 0 stripe set "B". We then mirror A and B using RAID 1. If one drive fails, say drive #2, then the entire stripe set "A" is lost, because RAID 0 has no redundancy; the RAID 0+1 array continues to chug along because the entire stripe set "B" is still functioning. However, at this point you are reduced to running what is in essence a straight RAID 0 array until drive #2 can be fixed. If in the meantime drive #9 goes down, you lose the entire array.

RAID 1+0: We mirror drives 1 and 2 to form RAID 1 mirror set "A"; 3 and 4 become "B"; 5 and 6 become "C"; 7 and 8 become "D"; and 9 and 10 become "E". We then do a RAID 0 stripe across sets A through E. If drive #2 fails now, only mirror set "A" is affected; it still has drive #1 so it is fine, and the RAID 1+0 array continues functioning. If while drive #2 is being replaced drive #9 fails, the array is fine, because drive #9 is in a different mirror pair from #2. Only two failures in the same mirror set will cause the array to fail, so in theory, five drives can fail--as long as they are all in different sets--and the array would still be fine."

-----------------------------------------------


So I'm hoping that the Matrix controller does RAID 1+0 rather than RAID 0+1. I suspect that it does, because RAID 1+0 is what is commonly referred to as "RAID 10," and I believe that some motherboards advertise it as "RAID 10."
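To double-check the article's numbers, here's a quick brute-force Python sketch of the two-failure scenario (10 drives, idealized controller, no rebuild window considered):

```python
import itertools

# Sanity check of the PCGuide fault-tolerance argument: with 10 drives
# and two simultaneous failures, how often does each layout survive?

DRIVES = list(range(10))

# RAID 0+1: two 5-drive stripes mirrored. The array survives a double
# failure only if both failures land in the SAME stripe set, leaving
# the other stripe fully intact.
def raid01_survives(failed):
    stripe_a, stripe_b = set(DRIVES[:5]), set(DRIVES[5:])
    return failed <= stripe_a or failed <= stripe_b

# RAID 1+0: five 2-drive mirrors striped together. The array dies only
# if both halves of the same mirror pair fail.
def raid10_survives(failed):
    return not any({2 * i, 2 * i + 1} <= failed for i in range(5))

pairs = list(itertools.combinations(DRIVES, 2))
r01 = sum(raid01_survives(set(p)) for p in pairs)
r10 = sum(raid10_survives(set(p)) for p in pairs)

print(f"RAID 0+1 survives {r01}/{len(pairs)} two-drive failures")  # 20/45
print(f"RAID 1+0 survives {r10}/{len(pairs)} two-drive failures")  # 40/45
```

So RAID 1+0 rides out twice as many double failures, which matches the article's argument.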
 
I tested a server I just put together. I can't believe the performance of these 160GB RE2 drives in RAID 1. The one on the right is 3 250GB RE3 drives in RAID 5.

The controller is an Areca 1120 8-port. I'm almost thinking that these are burst numbers.
 

Attachments

  • Areca-1120.JPG (30.1 KB)
  • Areca-1120-RAID5.JPG (30 KB)
I tested a server I just put together. I can't believe the performance of these 160GB RE2 drives in RAID 1. The one on the right is 3 250GB RE3 drives in RAID 5.

The controller is an Areca 1120 8-port. I'm almost thinking that these are burst numbers.
Nice! . . . Too bad RE drives aren't as cheap as the Caviars. . . I don't see myself spending the extra for those . . .

Can anyone confirm: if the Caviar goes into a deep recovery cycle and drops out of the array, does the RAID array have to be completely rebuilt (which deletes all data), or will the RAID controller detect the drive when it's available again and recognize that the array is still intact?
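For what it's worth, on drives that support SCT Error Recovery Control (the mechanism behind TLER) you can query the error-recovery timeout with smartmontools; whether a desktop Caviar of this generation exposes it is hit-or-miss. A hypothetical Python sketch, assuming smartctl is installed and the drive in question is /dev/sda:

```python
import subprocess

# Hypothetical sketch: query a drive's SCT Error Recovery Control
# (TLER) timeout via smartmontools. A short timeout makes the drive
# give up on a bad sector quickly instead of entering the deep
# recovery cycle that can get it dropped from a RAID array.
# Assumes smartctl is installed and /dev/sda is the drive to check.

result = subprocess.run(
    ["smartctl", "-l", "scterc", "/dev/sda"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# On drives that allow it, a 7-second read/write timeout can be set
# with (values are in tenths of a second):
#   smartctl -l scterc,70,70 /dev/sda
```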
 
BTW Jason... you might want to try cropping the screenshots down a bit. It's a whole lot easier on the eyes.

Whatcha talking about Fritz? I have no screenshots in this thread.

:beer:

I agree that the blinding white light almost had me thinking I was dead! Don't walk into the light, Jason!

:D



So I'm hoping that the Matrix controller does RAID 1+0 rather than RAID 0+1. I suspect that it does, because RAID 1+0 is what is commonly referred to as "RAID 10," and I believe that some motherboards advertise it as "RAID 10."

Good info there! I was actually thinking along those lines after I posted that, but I've just never seen the distinction made. IIRC, the Intel Matrix BIOS only gives you a few choices: RAID0, RAID1, RAID5, and RAID10. Since RAID1+0 and RAID0+1 have the same performance, it only seems logical that RAID0+1 has been sent to obsolete-ville, as the extra redundancy built into RAID1+0 (RAID10) makes it the far better choice in every situation.
 
There are PLENTY of people using Caviar Blacks in RAID.


However, what he was saying is that if your drive fails within the RAID, it's not covered under warranty, because they're not RE-type drives; they're not designed to be in a RAID setup. I think that's a bunch of crap, though. My old 40GB ATA drives worked great back when I was using nvidia RAID, and they were basic drives. My buddy has 4 x 640GB Blacks, and they work great in RAID 0.

My avatar will be changing... I just got another 3-drive backplane and have two 150GB Raptors in RAID 0, short stroked. It should look cooler. ;)
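On the short-stroking note, the idea is just to partition only the outer (fastest) portion of each drive so the heads never have to travel far. A rough Python sketch of the trade-off, with illustrative numbers rather than anything measured:

```python
# Rough illustration of short stroking: confine data to the first
# (outer, fastest) fraction of the drive so average seek distance
# shrinks. The numbers here are illustrative, not measured.

DRIVE_GB = 150          # one 150GB Raptor
STROKE_FRACTION = 0.3   # hypothetical: use only the outer 30% of the LBA range

usable_gb = DRIVE_GB * STROKE_FRACTION
print(f"Usable per drive: {usable_gb:.0f} GB of {DRIVE_GB} GB")

# With full-stroke head travel normalized to 1.0, confining data to
# 30% of the LBA range caps worst-case travel at ~0.3 of full stroke,
# so average seek times drop roughly in proportion.
print(f"Worst-case head travel: ~{STROKE_FRACTION:.0%} of full stroke")
```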
 
Last edited:
I just run 2 RAID 0 volumes on my 3 Raptors. I couldn't care less about RAID 1, I'm just in it for the performance, and the hardware partition :D

Saves me about 6 hours of installing games lol :thup:
 
I tested a server I just put together. I can't believe the performance of these 160GB RE2 drives in RAID 1. The one on the right is 3 250GB RE3 drives in RAID 5.

The controller is an Areca 1120 8-port. I'm almost thinking that these are burst numbers.

Nice numbers. Those have to be bursts though. No way a SATA drive can hit those numbers on reads or writes with 4K blocks.
 
Nice numbers. Those have to be bursts though. No way a SATA drive can hit those numbers on reads or writes with 4K blocks.

Maybe not... I have a dedicated RAID card with 256MB of RAM, so the numbers could be correct; that would also explain why everyone else is getting much slower results.
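One quick way to sanity-check whether a benchmark is reading out of the controller's cache rather than the disks: compare the test's transfer size against the card's RAM. A rough Python sketch, assuming the 1120's 256MB cache and a made-up benchmark size:

```python
# Rough sanity check: if a benchmark's test file fits inside the RAID
# card's onboard cache, the reported speeds may be cache bursts rather
# than sustained disk throughput. The test size below is hypothetical.

CACHE_MB = 256          # Areca 1120's onboard RAM, per the post above
test_file_mb = 100      # hypothetical benchmark transfer size

if test_file_mb <= CACHE_MB:
    print("Test fits in cache -- results may be burst numbers.")
else:
    print("Test exceeds cache -- results closer to sustained throughput.")
```

Re-running the benchmark with a test file several times the cache size (1GB or more) would settle it.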
 