
ATA RAID Advice


Restorer

Member
Joined
Aug 11, 2003
Location
Los Angeles, CA
Right now I have a SCSI RAID setup in my server, but for various reasons, one of which is simplicity, I want to replace it with an ATA RAID array. One of the constraints is that I want to do it with zero monetary loss - I want to be able to build the ATA array for the amount I can sell the SCSI drives and cards. So here is what I am thinking about for my server:

1. SATA or PATA?
The price of SATA drives is getting closer to the price for PATA drives. SATA doesn't have any raw speed advantage over PATA, and even so I don't need incredible speed from each drive. PATA takes more CPU power than SATA, but it's a server for only a small group, and CPU usage will not really be a problem. The cheapest SATA cards have only two connectors, so only two drives per card; cheap PATA cards have two channels which support two drives each, so I can support two drives at maximum speed, or four at a cost to speed. I'll be running Linux, so to do SATA I'll have to upgrade to kernel 2.6; then again, I should probably do that anyway. I think I will be going with PATA.

2. RAID 5, or RAID 0 and a backup drive?
I really want to do RAID 5, but software RAID 5 is slower than I would like, and a card that will do hardware RAID 5 is far out of my budget. Software RAID 0 is decently fast, and hardware RAID 0 is standard on any RAID card. So it looks like I will be running my main disks in RAID 0, with one or two giant disks for automated backup (but not RAID 1).

3. Hardware or software RAID?
Support for a lot of the ATA RAID chipsets is lacking in Linux. True hardware RAID PATA cards are more expensive than standard two-channel cards. Linux software RAID is fast enough for what I need, but it eats a big chunk of my CPU: creating a file full of zeroes on my current software RAID 5 array uses about 20% CPU just to generate the data, and up to 30% to write it out to the array. Advice?
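For reference, numbers like those can be reproduced with something like this — write a big file of zeroes and watch CPU usage with top or vmstat from another terminal (the path and size here are arbitrary):

```shell
# Write 100 MB of zeroes; on the real test this would target the
# RAID array's mount point, with top/vmstat open in another terminal.
dd if=/dev/zero of=/tmp/raidtest bs=1M count=100 2>/dev/null
wc -c < /tmp/raidtest   # confirm all 104857600 bytes landed
```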

4. Drive configuration
I currently have 154 GB of usable space in my (too complicated) multi-array SCSI setup, but all disks together total 222 GB. I want to at least match this in my new main array. I'm considering two 120 GB's or three 80 GB's. My backup drive will likely be a single 250 GB drive. What do you think? Three 80's will be more expensive than two 120's, and will increase the chance that one will go bad, but there should be a decent speed increase, right?
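The raw capacity math for the two options (marketing gigabytes, before filesystem overhead):

```shell
# RAID 0 capacity = number of drives x smallest drive size
echo "two 120s:  $((2 * 120)) GB"
echo "three 80s: $((3 * 80)) GB"
echo "target:    154 GB usable (current setup)"
```

Either option clears the 154 GB target with the same 240 GB total; the three-drive version just spreads it over more spindles.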

5. Price
I'm really hoping to get more than a couple hundred for my entire RAID setup, but I don't know if that will happen. I could get as little as $150, or as much as $350 if I get lucky. I'll be posting on various classifieds and such first, and if I don't get the offers I want it will be off to eBay. Also, since I have all my data on these SCSI drives, and not enough free space anywhere else to back it up, I'll have to put in an initial investment and buy my giant backup disk first to move my stuff onto.

I need some advice before I put my plan into action. Also, writing this (very long, sorry) post helped me get my plans in order, so at least now I know what I'm thinking. So, what do you all think?
 
First, a question: why consider RAID-0 for a server? Unless you are running several Gbit connections, streaming audio/video, and/or performing many other IO-intensive tasks, RAID-1 will be fast enough to handle requests for data. Something along the lines of a Promise Fasttrak TX2000 will give you redundancy and should have sufficient speed for your uses. It has support for a couple of *nix flavors as well, though the selection is limited. It would also eliminate the need for a separate backup drive (though I'd recommend one in any event). Paired with a couple of 200 GB disks, it would give you large quantities of available storage.
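If you do end up in software-RAID territory anyway, Linux's md driver will mirror two disks without any special card. A sketch with mdadm — the partition names are placeholders, and the guard makes it a harmless no-op on a machine without those disks:

```shell
#!/bin/sh
# Sketch: two-disk software RAID 1 with mdadm. /dev/hde1 and /dev/hdg1
# are placeholder partitions; skip entirely if they don't exist.
if [ -b /dev/hde1 ] && [ -b /dev/hdg1 ]; then
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
    cat /proc/mdstat    # watch the initial mirror sync
fi
```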

1) Either will be fine. Since it's a server, it will likely be used until end of life, so I wouldn't worry as much about future compatibility. And since the array will be on a controller, I wouldn't be worried about portability either.

2) See above. RAID-5 will typically give abysmal write speed and I'm not a great fan of RAID-0, especially for servers.

3) True hardware RAID would be preferred, but 3ware Escalades and similar cards get expensive fast. Driver-based hardware implementations would be next, with software RAID as a last resort.

4) See above. It's simple, easy to recover from, cost-effective, and will probably satisfy any IO needs on a server. Three drives in RAID-0 would require a four-channel controller, unless you want to lose significant performance to congestion when an IDE channel has to read or write data to both of its drives simultaneously.
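The channel-congestion penalty is easy to see with hdparm: benchmark each disk alone, then both at once. Device names are assumptions (master and slave on one channel), and the guard skips the whole thing if the disks aren't there:

```shell
#!/bin/sh
# Sketch: /dev/hda (master) and /dev/hdb (slave) share one IDE channel.
# Guarded so it does nothing on a machine without those devices.
if [ -b /dev/hda ] && [ -b /dev/hdb ]; then
    hdparm -t /dev/hda          # each disk alone first
    hdparm -t /dev/hdb
    hdparm -t /dev/hda &        # then both simultaneously: the combined
    hdparm -t /dev/hdb &        # rate is typically well below the sum
    wait
fi
```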

5) Price will be lower this way than with more complex arrangements.
 