
What's a better choice?


bluz

For massive storage? Should I build a file server that holds 14 HDDs and uses onboard motherboard RAID plus an 8-port RAID card? (Someone noted that this does work, but I've also been told the hardware RAID kicks off the motherboard RAID system, so who's right?)

Or get one of these:
http://www.newegg.ca/Product/Produc...11141&cm_re=sans_8_bay-_-16-111-141-_-Product

Or possibly two of the last one.

With this setup I could do RAID 5 in both; if the mobo and RAID card will both work at the same time I could easily just use the file server, or get two RAID cards... what should I do? I also don't plan on building this for about three months, but I might start buying parts for it already, like the mobo and OS drive (it will be an SSD running FreeNAS), and setting up the hard drive bays/racks, etc. Thanks for the help.
 
First of all, is there any reason you went FreeNAS over unRAID or WHS? Personally I think unRAID is the most flexible, although it is not free. However, you will save quite a bit in the end by not needing to buy complex RAID cards.

Basically the first issue to tackle is hardware vs. software RAID. Hardware RAID requires all drives to be the same to achieve decent performance, which limits your expansion options down the road; however, hardware RAID can offer faster speeds. Personally I went WHS due to ease of use, although if I were to start over I would go unRAID. It's plenty fast enough to stream uncompressed Blu-ray rips to multiple computers at the same time, which is all the speed that I need. If you need it to be faster, then you have to go the hardware RAID route.

Oh yeah, I would go with building your own, as it offers far more flexibility vs. a prebuilt box or NAS.

Edit: also, going the hardware RAID route, on top of limiting you to buying the same drives, also limits which drives you can safely use, as many of the cheaper drives have issues in a hardware RAID environment that make them impractical and untrustworthy for long-term data retention.
 
Yeah, what's the purpose of this server?

Keep in mind that Gb Ethernet is ~125MB/s max. That's about the speed of a single drive nowadays, so the speed benefits of RAID5ing multiple drives will be lost, and you gain all the problems and expense that go along w/ RAID.
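To put rough numbers on that (theoretical peak only; real-world throughput is a bit lower after protocol overhead, and the ~100 MB/s single-drive figure is just a ballpark):

```python
# Back-of-the-envelope: Gigabit Ethernet vs. a single 2011-era HDD.
gbe_bits_per_s = 1_000_000_000
gbe_max_MBps = gbe_bits_per_s / 8 / 1_000_000   # ~125 MB/s theoretical ceiling
hdd_MBps = 100                                  # ballpark sequential speed of one drive

print(f"GbE ceiling: ~{gbe_max_MBps:.0f} MB/s, single HDD: ~{hdd_MBps} MB/s")
# A striped RAID5 array can read faster than this locally,
# but the network caps what remote clients ever see.
```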

+1 for unRAID! No need for expensive RAID cards; you can use any sized HDDs and add drives slowly over time as they become cheaper (this is also a plus for HDD reliability b/c if you buy all the drives at once you are more likely to have multiple simultaneous HDD failures); more fault tolerant than RAID5 (multiple simultaneous HDD failures do not wipe out all your data); more Green than any RAID since the drives can spin down when not in use; the list goes on and on...

http://www.lime-technology.com/home/87-for-system-builders


If you want more OS functionality try WHS, or maybe look into FlexRAID (I haven't checked this one out in detail yet, but it looks interesting).

For my unRAID server I went w/ the CoolerMaster 590:

http://www.coolermaster-usa.com/product.php?product_id=2709


What's great about this case is the entire front is (9) 5.25" bays. To install 3.5" devices you use adapters, but since this is my file server I use all the bays for HDD cages. 9 bays means I can fit (3) 4-in-3 HDD cages in there and each cage has a cool and quiet 120mm fan (most other solutions use much louder 80mm fans that don't cool nearly as well). I used some Yate Loons, and have them wired for 5v or 7v depending on the season. So, I can fit 12 HDDs in there, and since unRAID runs off a flash drive I don't have to worry about another HDD for the OS.

My mobo (MSI P43 Neo3-F purchased for $60 open-box on NewEgg) has 6 Intel sata ports, and 2 Jmicron sata ports. I added a PCIex1 2-port sata card ($15 on monoprice), and a PCI 4-port sata card ($30 used; only using 2 of these ports) to give me enough ports for 12 HDDs. Close to $100 for 14 ports including the mobo.
 

Hmm, I just picked FreeNAS because it was suggested to me in another forum. So is unRAID free as well? Also, would it be a good idea to turn off WD IDLE on the green drives I plan to use? I heard that was best for a RAID system, as some green drives will drop from the array. I sort of like the RAID cards even if it's just for the JBOD, so I might still end up buying one just for that; or, if you're right and I can just use those 4-port SATA add-on cards to add ports to this system, I probably will. You say this can run off a flash drive as well? If so, that's pretty sweet :) But I still might just grab a small 30GB SSD or something, I don't know. Is a flash drive better for running an OS than an HDD? Thanks again.
 

Also, how do I find those bay adapters?
 

unRAID is free for up to 3 drives; a trial of sorts. Support for up to 6 drives is $69, and support for up to 21 drives (and this number will most likely continue to grow over time) is $119. There's also a $10 off coupon code till the end of this month. The unRAID software pays for itself in the money you save on hardware.

No need to turn off anything w/ unRAID; it works great w/ all kinds of green drives. It's not a hardware RAID so those issues don't affect it. unRAID gives you the option to have your HDDs spin-down after a period of non-use, or to keep them spinning 24/7.

In unRAID data is stored on individual drives, not striped like in RAID5. So, if you want to access a particular file, only 1 HDD needs to spin up for you to access that file (if you have the HDDs set to spin down). It's a much greener system w/ the spin-down feature, but it adds a ~10sec delay when you first access your data when you get home as you wait for the HDD to spin up. I have mine set up to spin down after 3hrs of non-use. Turn it off for instant access 24/7.

Data is stored on individual drives like JBOD, but there is also a dedicated parity drive. In this way you can lose 1 drive w/o losing any data. If you lose another drive before you can rebuild, you only lose the data on that 1 drive (instead of the entire array like in RAID5). Typically, drives don't just go completely belly-up, so since the data is only stored on 1 HDD you can remove that 1 HDD, put it in another PC, and try to recover the files. In RAID5 this is impossible...once the array goes down you'll be spending thousands on data recovery services if you want your data back.

As far as I know unRAID will only run off a flash drive. There is probably a way to run it off a HDD or SSD, but you would have to be pretty versed in Linux to figure it out. The unRAID forums are a great wealth of info, and on-going support.

http://www.lime-technology.com/forum/

The unRAID OS will not be sped up by an SSD as I think the whole thing resides in RAM during use anyway.

You also don't even need a monitor, keyboard, or mouse to run unRAID, but I would use them at first to make sure it's booting up correctly, etc. Once it's set up, all you need is power and LAN. Most of your access to the system will be via a web interface from another PC on your network. From there you can manage everything, and of course you can access your files via Windows Explorer as if the files were local.


Also, how do I find those bay adapters?

The CM 590 comes w/ (1) 4-in-3 adapter. CM makes other 4-in-3 cages that will work the same, but they look a little different and are aluminum instead of steel:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817993002

I managed to buy some of the cages that come w/ the 590 off someone on the unRAID forums, so that I have a matching set of 3.
 

Alright, I've just got one more question. The parity drive is like the main backup drive, right? Sort of like Usenet with .par files? Basically it will rebuild the files that were on the failed drive, right? And I can have 1 parity drive work for, say, 20 other hard drives as well, right? (As long as it's larger than or the same size as the other drives, I read on the unRAID site; I plan on doing all 2TB drives.) Let me know if I'm wrong or right here, thanks!
 
Yes.

Let's say you have 3 data drives and a parity drive. And let's say the first bit on each of the data drives looks like this:

1
0
1

When you add those up you get an even number, and thus the parity bit for the 1st column of bits will be 0.

If you had:

1
0
0

you'd have an odd number when added up, and the parity bit would be 1.

Larger Example:
Data HDD-1 1111010101000101
Data HDD-2 0101001010100101
Data HDD-3 1010101010000010
-----------------------------------------
Parity--HDD 0000110101100010

So, if you lose the parity drive or any other drive you can rebuild by knowing that the sum of each column of bits will always be even.

B/c of the way parity works you can have any number of drives for data (but unRAID has a current limit of 20), and 1 drive for parity.
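That "even sum" rule is exactly XOR parity. Here's a small Python sketch of the same idea, using the two bytes from the larger example above, including rebuilding a "lost" drive (a toy illustration of the math, not unRAID's actual code):

```python
# XOR parity: the parity drive stores the XOR of the corresponding bytes of
# every data drive, so any single missing drive can be rebuilt by XOR-ing
# the surviving drives with the parity drive.
from functools import reduce

# The 16-bit strings from the example above, packed into two bytes each.
data_drives = [b"\xf5\x45", b"\x52\xa5", b"\xaa\x82"]

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data_drives)
print("parity:", format(int.from_bytes(parity, "big"), "016b"))  # 0000110101100010

# Pretend drive 2 (index 1) died: rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
assert rebuilt == data_drives[1]
```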
 
Another option for cases is to get a rackmount case. Norco makes the cheapest of these, with the 450 and 470 holding 10 drives by default (with fans in front of the drives) and expandable to 15 drives, which is what I did for my file server; it currently has 13 data drives. They also make monster cases that fit 20 drives and feature hot-swappability, but those cases cost quite a bit.

I agree with jason4207's recommendation of the Cooler Master cases as well. Basically there are two ways you can go case-wise, conventional or rackmount, and both have advantages as well as disadvantages.
 
Ok, it's slowly starting to make sense. Basically, even if the parity drive fails there will be enough parity on the data drives to recover, if I'm reading correctly? I've also thought about using an Antec Twelve Hundred with four of these:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16817994077&cm_re=5_in_3-_-17-994-077-_-Product

Would this work? In the Antec Twelve Hundred all the bays are 5.25", right?
 
14 drives is too many for RAID 5. I would go RAID 6 and a hot spare. Also, since you are dealing with a big project here, I would suggest using Linux software RAID. Even if you have no idea what you are doing in Linux, it really isn't hard. PM me if you want any instructions or help. But this will give you great control, monitoring, flexibility, and low hardware requirements. Also, if you are looking for good throughput to the server, you may want to get multiple Gigabit Ethernet cards and bond them together, but this can get pricey; then again, you are already buying 14 drives...

RAID 6 uses two parity schemes, so you can lose two drives at once and not lose any data. At the same time you will lose the capacity of two drives. Also, the hot spare is an idle extra drive (a 15th drive) that only gets used when a drive dies. Once a drive drops from the array, the data it held is immediately reconstructed from parity and written to the hot spare. Then, if by chance two more drives die, you still won't lose data, unless those two die before the spare is finished being written to.
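To put numbers on the capacity trade-off, here's a quick sketch (the 2TB drive size and drive counts are just the figures mentioned in this thread):

```python
# Usable capacity for a 14-drive array of 2TB disks under a few layouts.
drive_tb, n = 2, 14

layouts = {
    "RAID 5 (1 parity)":              (n - 1) * drive_tb,
    "RAID 6 (2 parity)":              (n - 2) * drive_tb,
    "RAID 6 + hot spare (15 drives)": (15 - 2 - 1) * drive_tb,
    "unRAID (1 parity, no striping)": (n - 1) * drive_tb,
}
for name, tb in layouts.items():
    print(f"{name}: {tb} TB usable")
```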

Also, I just can't stress enough that 14 drives is a lot of data and a lot of potential for failures. If I were you I would build two of these systems plus a small server. The server can actually RAID 1 (essentially RAID 6+1) the two boxes. If something goes horribly wrong on one, you still have an active copy on the other. That might be a bit of overkill, though, and would require a good investment in additional network hardware. But I definitely would suggest some sort of backup plan. I work with enterprise storage, with systems ranging from 120 to 2400 drives. I know what can and will go wrong.
 

All parity data is on 1 drive.

So, if the parity drive fails then all your data is still there. None of it's missing. Just replace the parity drive, and it'll rebuild parity from scratch to the new drive.

The 1200 should work, and a lot of guys use it to go big.

Those 5-in-3 cages should work as well, but you might have to bend some tabs on the inside of the case for them to fit. The case will have little shelves on each side of each bay, and you only want those on every third bay to support the cages. Notice how the cages are cubical w/o any slots or areas for those little tabs to slide past.

The fans on those cages might not be that quiet, but it will be easy to swap out and add drives as needed. Some people keep their servers in a closet or out of sight/earshot, so they can be a little louder. Mine is in my office, so I like it whisper quiet.



He's looking at unRAID now which isn't RAID at all. It's just JBOD w/ parity protection. Much better for media serving around the house. RAID5/6 is for enterprise imo.
 
It's just JBOD w/ parity protection. Much better for media serving around the house.

I just skimmed the unRAID wiki. They claim it is similar to RAID 4 but without striping. RAID 4 uses a single parity disk for all the parity, which can cause a bottleneck if too many drives are being written to at the same time, since every write also has to hit that one parity drive. In RAID 5/6 parity is distributed across all drives, so there is no single-disk bottleneck; that is why RAID 4 has largely dropped out of use. But this doesn't sound like much of a problem if he is only streaming a few movies.
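For anyone curious what "distributed parity" means concretely, here's a tiny sketch of where parity lives stripe by stripe (the left-symmetric rotation shown is just one common RAID 5 convention; real controllers vary):

```python
# Where does parity live for each stripe? RAID 4 pins it to one disk,
# RAID 5 rotates it so no single disk absorbs every parity write.
def parity_disk_raid4(stripe, n_disks):
    return n_disks - 1                              # always the last disk

def parity_disk_raid5(stripe, n_disks):
    return (n_disks - 1) - (stripe % n_disks)       # rotates each stripe

for stripe in range(6):
    print(f"stripe {stripe}: RAID4 parity on disk {parity_disk_raid4(stripe, 4)}, "
          f"RAID5 parity on disk {parity_disk_raid5(stripe, 4)}")
```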

I did notice that if two drives fail at once, then only the data on the second failed drive will be lost. This is better than losing everything, but data is still lost. I can't find any information on unRAID supporting spare drives. A spare would save your data: even if two drives are going to die, it is very, very unlikely to happen at the same time, or while the spare is replacing the first failed drive. Then again, with a RAID 6 setup, if you were very unlucky and lost three or more drives before the spare could rebuild, you would lose everything. But this is why RAID is not a backup solution.

RAID5/6 is for enterprise imo.

Most enterprise customers use RAID 1 or RAID 10 because they can afford to. RAID 5/6 is normally for customers on a smaller budget who are looking for a better price-per-capacity ratio.

The OP can use what he wants, I'm just trying to help show the pros and cons of each method. And save him $120 on unRAID :p
 

unRAID can make use of an (optional) cache disk to speed up writes, so writes happen at whatever speed the cache disk can manage. The data is then moved to the parity-protected 'array' overnight, or however you want to schedule it (daily, weekly, etc.), so the user doesn't feel the hit. Reads and writes for the end user are at the speed of a single drive, which can basically saturate a good Gb Ethernet connection.
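The cache-then-move idea in a nutshell (a toy sketch only; this is not unRAID's actual mover script, and the mount points are made up):

```python
# Toy sketch: clients write to a fast cache disk, and a scheduled job later
# migrates the files onto the parity-protected array.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")   # fast, unprotected disk (hypothetical mount point)
ARRAY = Path("/mnt/array")   # parity-protected storage (hypothetical mount point)

def nightly_mover():
    for src in CACHE.rglob("*"):
        if src.is_file():
            dest = ARRAY / src.relative_to(CACHE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))  # parity is updated as data lands on the array

# Schedule nightly_mover() from cron (or similar) so users never feel the parity-write hit.
```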

When writes do go to the parity protected 'array' during the overnight event only 1 data drive and the parity drive are written to at any given time. Files are stored on individual drives; not striped.

In this way you can pull a drive at any time and read the data from it in another PC outside of the array. So if you do hit the 2-or-more-drives-die-at-once problem, you can usually still get some data off the drives, since they are not dependent on the array for access. Drives don't usually go completely belly up; they tend to die slowly. There is always the potential for catastrophe, but still, only losing 2TB vs. losing 20TB has merit.

You can do twin unRAID systems at separate locations for the best protection.

unRAID is $120, but the savings in hardware far outweighs that. I've got 14 ports at a cost of $105 including the mobo. You can use cheap sata cards to add ports. No need for expensive RAID cards that cost hundreds by themselves. :p

Plus, it's free to try up to 3 drives. That's what I would do. And if I could do it over I'd also try out FlexRAID which is free. Definitely keep the options open, but RAID5/6 is going to cost a lot more up front and over time, and imo is not as fault tolerant and definitely not as flexible w/ different kinds of drives and drive sizes.


How does enterprise use RAID1 w/ so many drives? 2 drives in RAID1 makes sense. But more than that doesn't? Am I thinking about it wrong? Is it a bunch of 2 drive RAID1s in a JBOD arrangement?

And for RAID10 would they have 2 large RAID0 arrays in a RAID1 mirror? Seems fast, but somewhat volatile w/ a lot of drives. Or maybe a bunch of 2-drive RAID1's would be better since if you had a 2nd drive failure while 1 drive was down it would be unlikely to be in the same RAID1 array as the 1st failure.

I always imagined Enterprise using 2 large RAID6 arrays in a RAID1 mirror. Seems like the best protection for the money in my mind, but I'm not used to dealing w/ arrays on that scale.
 
Awesome, sounds good. So now, to get a massive number of SATA ports, I should just use a few PCI cards to add them? The biggest non-RAID SATA add-on card I've found has 4 ports and is here:
http://www.newegg.ca/Product/Produc...03&cm_re=pci_sata_port-_-15-280-003-_-Product

Also thanks for the info Zerix but I think I might just go with unraid for this.

Another quick question: when a drive fails, will unRAID tell me which drive died (by its serial number or something else unique to the failed drive)? So I'm not going through all these drives trying to find the one that failed. I would sort of like to write the serial # on a piece of white tape and place it on the outside of the hard drive cage.
 
Zerox01 does have a valid point, as software RAID 6 is better at data protection than unRAID. However, unRAID offers drive spin-down, which is a huge plus. If you don't plan on spinning down drives, then Linux RAID 6 is better IMHO.

As for controllers, if you plan on putting a lot of drives in one system, 8-port dumb RAID cards are the way to go, even though they cost more per port than 4-port cards, since add-in slots become a limiting factor and I like keeping drives off of onboard controllers. Here are two good cards for that:

http://www.newegg.com/Product/Produ...me=Controllers / RAID Cards&SpeTabStoreType=0

The PCI-X card works in normal PCI slots as well.
 
unRAID is $120, but the savings in hardware far outweighs that. I've got 14 ports at a cost of $105 including the mobo. You can use cheap sata cards to add ports. No need for expensive RAID cards that cost hundreds by themselves.

Plus, it's free to try up to 3 drives. That's what I would do. And if I could do it over I'd also try out FlexRAID which is free. Definitely keep the options open, but RAID5/6 is going to cost a lot more up front and over time, and imo is not as fault tolerant and definitely not as flexible w/ different kinds of drives and drive sizes.

Linux software RAID would require no expensive RAID cards either, so there are no more hardware expenses than with unRAID.


How does enterprise use RAID1 w/ so many drives? 2 drives in RAID1 makes sense. But more than that doesn't? Am I thinking about it wrong? Is it a bunch of 2 drive RAID1s in a JBOD arrangement?

And for RAID10 would they have 2 large RAID0 arrays in a RAID1 mirror? Seems fast, but somewhat volatile w/ a lot of drives. Or maybe a bunch of 2-drive RAID1's would be better since if you had a 2nd drive failure while 1 drive was down it would be unlikely to be in the same RAID1 array as the 1st failure.

I always imagined Enterprise using 2 large RAID6 arrays in a RAID1 mirror. Seems like the best protection for the money in my mind, but I'm not used to dealing w/ arrays on that scale.

In the product I work with, the most common setup is two drives in RAID 1. But since the system has hundreds of drives, it has a built-in volume manager. So whatever the host needs for storage, a volume (or volumes) can be created at that size and spread across dozens and dozens of RAID 1 arrays at the same time, essentially making a giant RAID 0 array out of the RAID 1s. The arrays can also be presented individually, with the volumes set up at the host level. RAID 5/6 are set up the same way, only the initial arrays can be 3, 7, or 14 drives; the volumes are then spread over them just like with RAID 1. These are connected to the host through multiple, redundant 8Gbit Fibre Channel connections.
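Conceptually, that "giant RAID 0 out of RAID 1 pairs" mapping looks something like this (the chunk size and pair count are arbitrary placeholders, not the real product's parameters):

```python
# Sketch of striping a logical volume across many RAID 1 mirror pairs (RAID 10).
CHUNK = 256 * 1024   # bytes written to one mirror pair before moving to the next
N_PAIRS = 48         # number of 2-disk RAID 1 arrays underneath the volume

def locate(volume_offset):
    """Map a byte offset in the logical volume to (mirror pair, offset on that pair)."""
    chunk_index = volume_offset // CHUNK
    pair = chunk_index % N_PAIRS                                  # round-robin across pairs
    offset_on_pair = (chunk_index // N_PAIRS) * CHUNK + volume_offset % CHUNK
    return pair, offset_on_pair   # the data lands on both disks of that RAID 1 pair

print(locate(10 * 1024 * 1024))   # e.g. 10 MiB into the volume -> (40, 0)
```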

Also thanks for the info Zerix but I think I might just go with unraid for this.

No problem, I'll still be willing to help you out if you change your mind.
 
Use PCIe x1 cards if your mobo has the slots for them. They only come in 2-port versions, though.

http://www.monoprice.com/products/p...=10407&cs_id=1040702&p_id=2530&seq=1&format=2

unRAID works very well w/ onboard ports, so try to get a mobo w/ at least 8. If your mobo ever dies you can swap in a different board w/o any issues.

I'm only running 2 HDDs on the PCI bus, but you could run more. As long as you're not accessing more than 1 or 2 at a time, it won't bottleneck too badly.

It's best to stay on the PCIe bus or on-board if possible.
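The rough bus math behind that advice (theoretical peaks; real throughput is lower, and everything on the legacy PCI bus shares it):

```python
# Why a few drives on legacy PCI are fine, but many are not.
PCI_SHARED_MBPS = 133   # classic 32-bit/33MHz PCI, shared by all devices on the bus
PCIE1_X1_MBPS = 250     # PCIe 1.x x1, per slot and per direction
HDD_MBPS = 100          # ballpark sequential speed of a single drive

for active in (1, 2, 4, 6):
    per_drive = min(HDD_MBPS, PCI_SHARED_MBPS / active)
    print(f"{active} drives busy on PCI: ~{per_drive:.0f} MB/s each")
# A PCIe x1 2-port card gets its own ~250 MB/s, so it doesn't hit this wall.
```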

I haven't looked into the PCIeX2/X4/X8 cards. But there is a wealth of info on the unRAID forums if you want to find the most economical and durable product.

It just depends on how many drives you plan to have, and the mobo you purchase determines a lot.
 
A friend of mine is saying unRAID is a terrible way to go, because when you lose a drive there's a huge chance of losing all my data. Is that right? He mentioned using ZFS instead with RAID-Z, and possibly doing a RAID-Z2 setup. What do you guys think about this?
 
Well, ZFS will still require you to use Linux (or Solaris/Unix/BSD). I have no experience with ZFS, but from what I have read it still has bugs to work out. Given its strong backing by Sun and now Oracle, though, I doubt those issues are anything for you to worry about with your usage. Still, if you were to go this route, IMO it would just be easier to use mdraid (Linux software RAID).

ZFS does have the advantage of using checksums for all data written; it verifies whether anything has been silently corrupted and will then restore a good copy from a mirror, parity, or snapshot. This functionality is also being added to btrfs, which will (hopefully soon) be the next default Linux file system, replacing ext4. Btrfs is also being partly developed by Oracle and will have all of the same features as ZFS. This also brings into question whether Oracle will phase out ZFS at some point; it was baggage IP acquired from Sun, after all. There is also nothing stopping you from running ZFS on top of an mdraid array and only using it for checksumming.
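The checksum-on-read idea, in a conceptual sketch (this illustrates the principle only, not ZFS's or btrfs's actual on-disk format):

```python
# Checksum every block on write; on read, verify it and self-heal from a mirror copy.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def read_block(primary: bytes, mirror: bytes, expected: str) -> bytes:
    """Return a verified copy of the block, falling back to the mirror on silent corruption."""
    if checksum(primary) == expected:
        return primary
    if checksum(mirror) == expected:
        return mirror          # a real filesystem would also rewrite the bad primary copy
    raise IOError("both copies fail their checksum -- restore from backup")

block = b"movie chunk"
stored = checksum(block)                          # recorded at write time
print(read_block(b"movie chXnk", block, stored))  # corrupted primary healed from the mirror
```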

My point is that ZFS's future is uncertain, and the tools to do what you want may be complicated for a Linux noob. mdraid is very easy to set up and monitor.

EDIT:
lol I take back the complicated part.
http://systembash.com/content/howto-installing-zfs-and-setting-up-a-raid-z-array-on-ubuntu/
This guide looks very similar to how mdraid is set up. But they do mention the "array" was only able to get 20MB/s throughput, though that could be caused by the checksumming.
 