
Does anyone here have a RAID5 or RAID6


thegreek
Does anyone here run a RAID5 config? If so, how does it perform when one drive goes bad? Have any of you tried pulling a drive out and sticking in another to see whether it rebuilds the array without problems, and how fast? I was thinking about going with RAID6 for its extra protection, but wanted to hear some advice from people who already run RAID5 or RAID6. Below is a quote which explains RAID6.

*Note: I will be using a hardware RAID controller and not software RAID.


A RAID 6 array is essentially an extension of a RAID 5 array with a second independent distributed parity scheme. Data and parity are striped on a block level across multiple array members, just like in RAID 5, and a second set of parity is calculated and written across all the drives.

RAID 6 provides an extremely high fault tolerance, and can sustain two simultaneous drive failures without downtime or data loss. This is a perfect solution when data is mission-critical.

RAID Level 6 requires a minimum of four drives to implement.
 
How fast they run with a failed drive and how fast they rebuild depend on the XOR chip, the size/speed of the drives, and the available onboard memory on the card. RAID 5 chips are becoming common, but I haven't seen too many cheap RAID 6 cards yet. Do you have any in mind?

JT
 
You're most likely better off going with 4 disks in a RAID 10, to be honest. I would personally just do two RAID 1 arrays. Those controllers are pretty expensive, and RAID 5 is not that fast unless you're running something like 6 disks or more. RAID 6 is used when information is critical and it's not backed up (most critical information on a RAID 5 array is backed up anyway... and the probability of losing the array to two simultaneous disk failures is incredibly small; it's more likely something went wrong with the controller).

What is your goal and budget?
 
My goal is to set up a file server with an Areca ARC-1220 8-port SATA II PCI-E controller so I can store/share files with all my computers. It looks like RAID5 is a pretty good config for this, because if one drive fails I can just stick in another one without any data loss. The reason I mentioned RAID6 is because it offers additional protection by using two drives' worth of parity instead of one, which would come in handy if more than one drive fails at the same time.

RAID1 is very good, but it'll cost more. Say I have a 1TB array; I'd need another bunch of drives adding up to 1TB just to mirror it, right?
 
^ With RAID 1, yes. RAID 1 is less efficient in terms of usable storage space compared to RAID 5/6. The upside is that you can use cheaper controllers, and the performance is generally better.
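
To put rough numbers on that (just an illustration; the six 400GB drives are an arbitrary example):
Code:
# usable space with six 400GB drives (illustrative numbers, not a benchmark)
echo "RAID 1/10: $(( 6 / 2 * 400 )) GB"    # half the raw space, everything mirrored
echo "RAID 5:    $(( (6 - 1) * 400 )) GB"  # one drive's worth of parity
echo "RAID 6:    $(( (6 - 2) * 400 )) GB"  # two drives' worth of parity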
 
For your purposes (pure file server) RAID 5 is a fantastic choice. It's what I currently run on my domain controller and file servers. I only run RAID 10 on my terminal server.

RAID 5 can take quite a while to rebuild if a drive fails, but depending on how many drives are in the array, you probably won't notice the impact on performance for a file server. If your RAID 5 is going to be on the order of 3 or 4 drives, it's not pretty if one goes out, but larger arrays of 5 or more drives handle a single failure pretty well.

Depending on your RAID controller, if it will let you dedicate one drive as a hot spare, definitely do it. It takes a lot of the headache out of rebuilding the array and replacing the failed drive.
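
If you end up on Linux software RAID instead of a hardware card, a hot spare is just one extra flag to mdadm. Rough sketch only; the device names below are hypothetical:
Code:
# five-member RAID 5 plus one hot spare (hypothetical device names)
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# the spare sits idle until a member fails, then the rebuild starts automatically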
 
When you say quite a while to rebuild, can you give an estimate? I know it depends on many things, but a rough estimate would be useful: 30 minutes, 1 hour, 5 hours, more?
 
Well, six 400GB HDDs would cost 1200 bucks and provide 1.2 terabytes of RAID 1 or RAID 10 storage.

Four 400GB HDDs in RAID 5 would cost 800 bucks (also 1.2TB usable), plus the 600 dollar controller.

The break-even point is five 400GB HDDs and a controller, versus eight 400GB disks in RAID 1/10.

For a 6 or 8 disk RAID 1 or 10 you can get a relatively inexpensive (40 bucks) controller.
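
A quick sanity check on the break-even point, using the prices above (roughly 200 bucks per 400GB drive and a 600 dollar RAID 5 controller):
Code:
# break-even sketch with the per-drive and controller prices quoted in this thread
echo "RAID 10, 8 drives: $(( 8 * 200 )) dollars for $(( 8 / 2 * 400 )) GB usable"
echo "RAID 5,  5 drives: $(( 5 * 200 + 600 )) dollars for $(( (5 - 1) * 400 )) GB usable"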

The advantage of going straight to RAID 5 is the ability to expand the array. Knowing what you're doing and what it will cost, RAID 5 is the way to go, I believe. Most people don't know, or aren't willing to plop down $1k+ for a storage solution.

If you're going to go with that controller, I'd pair it with five MaXLine III 300GB drives. That would give 1.2TB at a cost of 1250 (600 for the controller) and the ability to expand to 2.1TB once all eight ports are filled. If 2.1TB is more than you think you'll need for a long time (remember... a LONG time; this controller should be around for quite some time), then those drives will work well. If not, consider moving to 400GB or possibly 500GB drives, but the prices for those ramp up quickly right now. As for SATA I vs. II, it really doesn't matter. Your bandwidth is from the drive to the controller, and no SATA drive will even hit 100MB/s sustained. SATA II's NCQ isn't really helpful unless your server gets hit by multiple users at once all the time (in fact it's worse if that scenario isn't happening).
 
Although I'm not using a hardware controller for it, I have a Linux software-controlled RAID 5 with 6x400GB drives and the performance is FAR more than adequate. I get ~102MB/sec sustained reads (according to hdparm). If a drive fails the array just keeps going, and I honestly had no idea until the system emailed me that one had failed :D. Reading at max speed from the array over 10/100 uses ~.03% of the CPU power, and writes use the same. If a drive fails and the array runs in degraded mode, the numbers jump to ~5%. Once a new drive is added and the array rebuilds, I get ~10% CPU usage. This is on an A64 3000+ btw. Hope that helps a bit :)
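
In case it's useful, here's how I keep an eye on whether the array is healthy, degraded, or rebuilding (just a sketch; /dev/md0 is my array device):
Code:
# overall state of all md arrays, including rebuild progress
cat /proc/mdstat
# per-array detail: which member is faulty, spare status, sync percentage
mdadm --detail /dev/md0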
 
The reason people like RAID 6 is that the most likely time for a 2nd drive to fail in a RAID 5 array is when the array is rebuilding itself after a first failure. The rebuilding process is very stressful on drives.

I have a Promise S150 SX4, with a 256MB PC133 ECC CL2 SDRAM memory module on it. Four 200GB Seagate 7200.7 drives in RAID 5 give me ~600GB. I tried pulling a drive out when I first built it a year ago. I'm not sure how long the rebuild took - I started it in the evening, went to bed, and it was done the next morning.
 
kaltag's and jclw's experiences are what you can expect with RAID 5. A rebuild can take up to 24 hours depending on the size of your array and how much CPU is dedicated to it, but the nice thing is you hardly notice that anything is happening, aside from the sound of constant drive access :shrug:
 
kaltag said:
Although I'm not using a hardware controller for it, I have a Linux software-controlled RAID 5 with 6x400GB drives and the performance is FAR more than adequate. ...
Can you give us/me more info on how you did this? :)

Thank you.
JT
 
I run a RAID 5 on six 10,000 RPM SCSI disks and I love it :) I don't know what kind of speeds I'm getting, as it's only an Adaptec 3000S controller, so not the fastest thing in the world; that, and it's only U160 even though the drives are all U320. I can say that the bigger your array, the longer it will take to build. Last time I made a 1.2TB array it took 28 hours to build, and then one drive failed within days, so I spent another 28 hours rebuilding the array :( My roommate is running a 4x320GB RAID 5 as his file server on that same card, I think... Honestly though, for the cost I would skip SATA II and go right for SAS. LSI makes a killer 8-port PCI-E SAS card and it's only 750. That gives you the option of using SATA, SATA II, and SAS drives, so you have the full range of speeds from 5,400 RPM to 15,000 RPM :)

This is the card I'm going to be using as soon as I get my reseller contract from them :)

http://lsilogic.com/products/megaraid_sas/megaraid_sas_8408e.html

The LSI plus 3 or 4 WD 150GB Raptors is a killer price/performance setup; not as good for mass storage, but really good for video and HD content production.

//edit just tested my RAID 5 using hdparm

hdparm -tT /dev/sda

/dev/sda:
Timing buffer-cache reads: 1596 MB in 2.00 seconds = 798.00 MB/sec
Timing buffered disk reads: 40 MB in 3.15 seconds = 12.70 MB/sec
 
At work we typically set up RAID 5 with four disks: three for the array and one hot spare. Two disks can go bad (as long as the spare has time to rebuild after the first failure). Rebuild speed varies. I've had some RAID 1 arrays rebuild in a few hours, and I've had a RAID 5 array take a whole day.

That's a nice card, infinitevalence. I have a Dell 6-channel CERC card, which I believe Adaptec makes. 64-bit PCI.
 
Ehh... even in a PCI-X slot I get crappy reads/writes :( I'm really looking at going SAS; I just don't think it's in the cards for my home array, as I can't quite justify the $$. I really don't need a 10k or 15k SAS array seeing as I already have a 10k SCSI array.

But for work :) as soon as I can throw a SAS card in and some 15k drives, I will, that's for sure.


//edit
My card: Link
 
JTanczos said:
Can you give us/me more info on how you did this? :)

Thank you.
JT
I sure can. Build a standard 2.6 kernel and make sure to include MD support for the RAID levels you want. I then used mdadm to create the array after setting the partition type on each drive to RAID Autodetect. After creating the array the kernel will automatically recognize it and start building it according to the options you passed to mdadm. The specific command I used was
Code:
mdadm -Cv --force /dev/md0 -l5 -n6 /dev/hdc1 /dev/hdd1 /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1
-l is the RAID level (5 in my case), -n is the number of partitions in the array (6 in my case), and -Cv means create with verbose output. --force makes the array sync all six drives at once; otherwise it will sync the first five and THEN add the last one. After that, /dev/md0 is the device to format and mount. Hope that clears things up a bit.
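
The failure email I mentioned earlier comes from mdadm's monitor mode, and swapping out a dead drive is only a couple of commands. Rough sketch; the mail address and member device are just examples:
Code:
# mail me when a drive drops out (MAILADDR in /etc/mdadm.conf works too)
mdadm --monitor --scan --daemonise --mail=you@example.com
# replace a failed member: mark it failed, remove it, then add the new drive
mdadm /dev/md0 --fail /dev/hdd1 --remove /dev/hdd1
mdadm /dev/md0 --add /dev/hdd1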

PS. sorry for digging up an old thread :D
 
I'm also considering running a RAID 5 for my home server, mostly to store movies. The way I understand it is I can add drives as I go. Is this right?
Basically I want to buy a full tower case with about 15 slots, start with 4 or 5 drives, then add as I go. If all the drives are the same size, can I do this? Start with 5 and work up to 15?
Or would I be better off with a set number of drives, fill those, and start another server?
 
chris6104 said:
I'm also considering running a RAID 5 for my home server, mostly to store movies. The way I understand it is I can add drives as I go. Is this right? ...
Yes, just make sure the controller you pick supports online array expansion.

The drives don't have to be the exact same size, as long as they are at least as big as the smallest existing drive.
 
chris6104 said:
I'm also considering running a RAID 5 for my home server, mostly to store movies. The way I understand it is I can add drives as I go. Is this right? ...
You'll need a hardware RAID controller that supports this functionality; Windows/Linux software RAID does not safely support doing this.
 