
Some Raid Questions


TheGreySpectre

Member
Joined
Sep 6, 2003
I have a couple of questions about adding additional hard drives and RAID setups that I was hoping someone could answer.

First some background:
I have used up all the SATA ports on my motherboard, but most of my hard drives are full, so I am looking into my options. I don't really need the data redundancy of a RAID setup, but I do need additional SATA ports so I can plug in more drives. My understanding is that to get those additional SATA ports I will have to buy a RAID card; however, if I am buying a RAID card, I figure I might as well set up a RAID, because redundancy is never a bad thing.

Here are my questions:
1. Do you have to start with entirely new, empty drives to create a RAID, or could I add my current drives to the array without having to wipe them first? (3x 1.5TB drives, all with 150+GB free)

2. Once an array is created, can I add drives to it to make it bigger? If so, do all the drives have to be the same size?

3. Does anyone have any recommendations for a card? How much do I need to spend? They seem to vary widely in price. From what I have read, I think RAID 6 would be well suited to my needs. I am currently looking at this card. In the near future I am looking to expand to at least 30TB of space, if not more.

4. Do I have to reinstall Windows to use a RAID card? (Windows 7 Professional 64-bit. The Windows drive will be separate from the RAID and run off a mobo SATA port.)

Thanks,
Greyspectre
 
1 & 2 depend on what RAID card you are using. It has to support OCE (online capacity expansion), or simply say it supports RAID expansion, in order to add drives without having to wipe them first. If it doesn't support that, you have to copy all the data off somewhere else, recreate the RAID with the new drives, then copy it all back.

3. That is a decent card, though personally, if I were spending that kind of money on a RAID card, I would spend another $100 for an Areca card like this one. Not to mention the Areca has 8 ports instead of 4, so even more room for growth, especially if you are wanting that much storage (30+ terabytes?). Just don't buy Seagate 7200.11 1.5TB drives.


4. Only if Windows is installed on the RAID array and it doesn't support capacity expansion. So in your case, no, since the OS isn't on the RAID.
 
How do you plug more drives into the card, and how do you tell what the maximum number of physical drives you can attach is? I see one card supports 512TB, but I don't see connectors or plugs for fanout cables to support that many drives.
 
The only real rule of thumb for that is:
1) how many of what type of connector does the card have
2) what capacity drives does the card support

(1) x (2) = total storage on one card

Some cards support port expanders (which are expensive), which allow for even more storage on one card.

However, you really don't want to put too many drives onto one card, or if you do, don't put them all into the same array.
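To make the rule of thumb concrete, here is a back-of-the-envelope sketch (the port count and drive size below are illustrative, not from any specific card's spec sheet):

```python
def max_raw_storage_tb(ports: int, max_drive_tb: float) -> float:
    """Rule of thumb: ports x largest supported drive = max raw storage
    on one card, before any RAID parity overhead."""
    return ports * max_drive_tb

# e.g. a hypothetical 8-port card populated with 1.5 TB drives:
print(max_raw_storage_tb(8, 1.5))  # 12.0 TB raw
```

The big "512TB" figures in card listings usually assume port expanders and the largest drives the firmware can address, not the number of direct connectors.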
 
I did some further research on the manufacturer websites, and I suppose Newegg lists it indirectly, in that you can plug in 4 drives for each SAS connector.

I changed which card I was looking at to the Adaptec 52445

(Newegg link), which allows for 28 drives to be connected without expanders.

I think my main remaining question is about the ability to expand an array. I have heard that some cards allow you to do this, but I am unsure what to check to see if a card supports it. By expanding an array, I mean, for example, that if I have a 10-drive array and I buy another 4 drives, I could expand the array to the new 14-drive size without deleting everything on it.

I would probably start with a 4- or 6-drive RAID 6 array of 1.5TB drives and very shortly expand it to 8/10 drives. My current case will only support 12 drives without adapters, but in the future I would like to move the card to a dedicated file-serving computer in a rackmount case.
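For planning purposes, RAID 6 reserves two drives' worth of capacity for parity, so usable space is (drives - 2) x drive size. A quick sketch of the array sizes mentioned above:

```python
def raid6_usable_tb(n_drives: int, drive_tb: float) -> float:
    """RAID 6 stores two drives' worth of parity,
    so usable capacity = (n - 2) * drive size."""
    if n_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (n_drives - 2) * drive_tb

for n in (4, 6, 8, 10):
    print(n, "drives ->", raid6_usable_tb(n, 1.5), "TB usable")
# 4 -> 3.0, 6 -> 6.0, 8 -> 9.0, 10 -> 12.0
```

So reaching 30TB usable with 1.5TB drives in RAID 6 would take 22 drives in one array (or, more sensibly, several smaller arrays).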
 
As I mentioned above, you do not want to run that many drives in a single array; you are asking for issues, especially if you are not using enterprise-class drives. 8 drives in one array is where you should start to draw the line; max out at 12. So there isn't much reason to buy a massive card like that, IMO.

As I also mentioned earlier, the ability to expand an array without removing data is called OCE, which stands for online capacity expansion. This means you add another drive, use the management software to add it to the array, then you wait... and wait... and wait, and eventually it will be part of the RAID array.

OCE time for a medium-sized array (like 6x 1.5TB drives) is a couple of days on average. And this must be done one drive at a time.
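That "couple of days" figure is easy to sanity-check: an expansion has to restripe roughly the whole array's raw capacity, and effective restripe rates on these cards are far below raw drive throughput. The 30 MB/s rate below is an assumed ballpark, not a measured number:

```python
def oce_hours(array_tb: float, restripe_mb_s: float) -> float:
    """Rough hours to restripe array_tb of data at restripe_mb_s MB/s."""
    seconds = (array_tb * 1e6) / restripe_mb_s  # decimal: 1 TB = 1e6 MB
    return seconds / 3600

# 6 x 1.5 TB drives = 9 TB raw, at an assumed 30 MB/s effective rate:
print(round(oce_hours(9.0, 30.0), 1))  # ~83 hours, i.e. several days
```

And since each added drive triggers its own full restripe, growing an array by several drives can easily take a week or more of continuous running.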

As for cases, the Norco 4020 is an excellent choice for file servers. It can hold 20 drives in hot-swap-capable cages. It's pretty much the computer nerd's server case of choice when needing lots of hard drives.
 
Sorry, your first mention of OCE went in one ear and out the other for some reason >.>

With OCE, is your entire RAID inaccessible while you add a drive, or can you still access stuff on the starting array size?

Why do you not want to go greater than 8 drives on one card? It seems like the chance of having more than 2 drives error out or fail at the same time would be pretty low, even with as many as 20 drives. I suppose I could see a limit of 8 on RAID 5, but that seems kind of low for RAID 6, with its tolerance for 2 drive failures. Not that I am by any means an expert; I was just thinking of the drives I have had fail on me. I am not trying to discredit you or anything, I am just very curious.

Sorry if my questions seem a little dumb; it is only recently that I have encountered a need to do anything beyond plugging hard drives into the motherboard, so I am rather inexperienced with the whole RAID thing.
 
OCE (Online Capacity Expansion) should allow access to your data while the expansion is in progress, although I generally recommend AGAINST using OCE, since your LIVE data is being restriped on the fly. Depending on how the RAID stack is written, a power interruption or drive failure during a critical section of the process can leave ALL of your data inaccessible PERMANENTLY. Considering how "cheap" drives are these days, it is far safer to buy a new set of drives and clone the data over instead.
 
Also, what are my options if I just wanted to plug in a bunch of disks independently, without RAID, but did not have enough SATA ports?

How many independent drives does Windows even support?
 
What about software RAID? I won't be doing any high-I/O applications, and pretty soon I will be moving it to a dedicated separate box, so the CPU cycles aren't a concern.

If I did this, would I be able to get a cheap, crappy RAID card like this one just to get more SATA ports, and then have all the RAID stuff handled by the processor?

I do recognize that it is a terrible RAID controller, but would it suit my purposes if I wanted to do software RAID?
 
OCE (Online Capacity Expansion) should allow access to your data while the expansion is in progress, although I generally recommend AGAINST using OCE, since your LIVE data is being restriped on the fly. Depending on how the RAID stack is written, a power interruption or drive failure during a critical section of the process can leave ALL of your data inaccessible PERMANENTLY. Considering how "cheap" drives are these days, it is far safer to buy a new set of drives and clone the data over instead.

That actually depends on the card. There is a thread on another forum dedicated to media center servers where people are running RAID controllers left and right. Someone running one of the Areca cards had a power failure in the middle of an OCE. The server rebooted and then picked right back up where it left off, with no loss of the RAID or the data. Of course, as has been said many times all over, RAID is not a replacement for backups.

Also, the reason for not wanting a lot of drives in one array is the error-checking limits of consumer drives, which can cause errors during array expansions or repairs (like for a failed drive); too many such errors and the whole array could die. So having lots of drives with poor error limits greatly increases the chance of a total failure during a rebuild. Thus you don't want to use just one massive RAID card and a ton of drives, not to mention it costs a heck of a lot more.
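The math behind this is sobering. Consumer drives are commonly specced at around one unrecoverable read error (URE) per 10^14 bits read; a rebuild or expansion has to read every surviving drive end to end. A rough sketch, assuming independent errors at that spec rate:

```python
import math

def p_ure_during_rebuild(read_tb: float, ure_rate_bits: float = 1e14) -> float:
    """Probability of hitting at least one unrecoverable read error while
    reading read_tb terabytes, assuming independent bit errors at a rate
    of one per ure_rate_bits bits (1e14 is a typical consumer-drive spec)."""
    bits_read = read_tb * 1e12 * 8  # decimal TB -> bits
    # 1 - (1 - p)^n, computed in a numerically stable way
    return -math.expm1(bits_read * math.log1p(-1.0 / ure_rate_bits))

# Rebuilding a 10 x 1.5 TB array means reading ~9 surviving drives, ~13.5 TB:
print(round(p_ure_during_rebuild(13.5), 2))  # ~0.66
```

So even before any second drive fails outright, a big array of consumer drives has a real chance of tripping an error mid-rebuild, which is why RAID 6 (or smaller arrays, or enterprise drives with 1e15 specs) is recommended at these sizes.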

What about software RAID? I won't be doing any high-I/O applications, and pretty soon I will be moving it to a dedicated separate box, so the CPU cycles aren't a concern.

If I did this, would I be able to get a cheap, crappy RAID card like this one just to get more SATA ports, and then have all the RAID stuff handled by the processor?

I do recognize that it is a terrible RAID controller, but would it suit my purposes if I wanted to do software RAID?

Well, that is overly expensive for just 4 ports if you are not wanting a RAID controller (which that sort of is).

This card is another highly used and recommended one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009
You just need a board it fits in. It can still use a standard PCI slot if there is enough clearance for the card itself, and for your application it sounds like you won't need the max bandwidth a PCI-X slot would offer (and cost).
 