
Looking for a real RAID card.


For starters, I don't need a RAID card; I mostly want a RAID card. I imagine on these forums there are plenty of people who understand this.

That said, I am looking at keeping this under $500. Currently I am mostly looking at the ARC2124-8i, which is available at Newegg for ~$480.

In addition to price, my other criteria are that it needs to have really good Linux support, that it supports SATA (I have WD RE4s), and that it supports RAID 6. In particular, I use Antergos, which is an offshoot of Arch, so closed-source drivers built only for RHEL, SuSE, and Fedora won't do.

Some additional things I'm looking for would be

  • ECC memory
  • SSD caching
  • Support for 8 drives without an expander (this is actually almost a requirement)
  • Doesn't need a fan
  • If it does need a fan, it has one
  • Write caching can be turned on without a battery backup (I have a UPS).
  • The more RAM the better.

[edit] Oh, one more BIG requirement -- it needs to be compatible with a UEFI motherboard. It's going into the system in my sig (not a NAS or media server).
 
What is your goal for the system? What type of capacities do you need? I'm just trying to understand because there are probably much cheaper ways of accomplishing your goal.

RAID 6 is likely overkill. RAID 1 can be run without performance issues on standard hardware.

Do you need high speed or large capacity? Expandability?

I just think other options will suit you better, i.e. either going with a straight-up dedicated NAS system, or going with large 4TB HDDs in RAID 1... or even RAID 10/01, which can be handled without much issue on your motherboard.
 
That sounds like a lot of fun. I was looking to do a project like that a year or two ago, but I changed my mind. Instead of going for hardware RAID, I decided that software RAID was the winner for me. I now use both: mdraid (standard Linux software RAID) for a RAID1 of two OS disks, and ZFS RAID-Z for my main storage array.
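For anyone curious what that looks like in practice, here is a minimal sketch of that kind of layout -- the device names and disk count are made up for illustration, not taken from my actual box:

    # OS: two partitions mirrored with mdraid
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # Storage: remaining disks pooled as a single-parity RAID-Z vdev
    zpool create tank raidz /dev/sdc /dev/sdd /dev/sde /dev/sdf
    cat /proc/mdstat     # confirm the mirror is building/clean
    zpool status tank    # confirm the pool is ONLINE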

I can tell you, I am glad I went this way. Much, much cheaper, and more flexibility. Nowadays the only big advantage you get with hardware RAID is slightly better detection of hardware failures. Software RAID is limited to what the OS can detect, so there is a slightly higher risk of a failed disk going unnoticed by the OS and causing problems. The rest is all very similar, just using host CPU and memory resources, which are plentiful nowadays.

I actually had an SSD installed as a read cache disk (L2ARC) in my ZFS, but found that it didn't actually make any difference... my server has 32GB of RAM, of which about 23GB is disk cache right now, so adding another 120GB of second-layer cache on top of that really didn't make an impact. I also played around with an SSD write cache (ZIL) on my ZFS; while that made a measurable impact on write speeds, the actual difference was only a few MB/sec, because my disks could already keep up fairly well with gigabit speeds.
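If you want to experiment with the same thing, attaching and detaching those SSD roles is non-destructive -- a rough sketch, with the pool and partition names being assumptions:

    zpool add tank cache /dev/nvme0n1p1   # L2ARC: second-level read cache
    zpool add tank log   /dev/nvme0n1p2   # SLOG: dedicated device for the ZFS intent log (ZIL)
    zpool iostat -v tank 5                # watch how much traffic the cache/log actually see
    zpool remove tank /dev/nvme0n1p1      # cache and log devices can be dropped again at any time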

One thing I wanted (and still really want to do) is to throw money into hotswap bays. When I was looking before, I really liked Addonics gear. They basically take a ground-up approach to NAS devices... you can buy the parts individually or a fully-assembled NAS. The parts include a large variety of IO cards, and hotswap bays (like this one: http://addonics.com/products/aesn5da35-a.php, which fits in your computer). That opens up quite a few options, either store & power disks locally in your computer, or get a storage tower, fill it with disks, and connect it back to your computer using eSATA or iSCSI (more of a NAS approach).
 
Sorry for the late reply. This has been quite the week.

Before I go on, I haven't explored NAS as an alternative to keeping drives inside my computer, nor am I opposed to it, although I still favor the idea of an overkill add-in card.

The goal of my system is to be my multi-function computing system that I use for everything. Most of what I do is coding, compiling (mostly as a requirement for my OS), typing stuff up in LaTeX, gaming when I can, and watching movies.

As I said at the beginning of my post, and as you can see from what I use my computer for, I don't need a dedicated RAID card. My money would be better spent on an SSD for caching, with RAID 5 via some sort of fake RAID. I wouldn't use Intel's (via my motherboard), as my system tends to hang randomly every now and then while it is in use (regardless of OS), but full-on hardware RAID is overkill regardless of the level.

Everything will be run off the RAID.

That said, I am looking for a combination of six things: speed, redundancy, scalability, flexibility, capacity, and mobility. Between the first two, RAID 10 or 5 would be fine. Scalability is easy to accommodate so long as whatever I'm plugging these drives into has lots of ports and allows for RAID expansion; most add-in cards, as well as Linux software RAID (I don't know, or care, about the Windows options), offer this.

By flexibility I mean that I can have multiple, different arrays on the drives. My plan is to have RAID 6 for long-term storage that isn't accessed frequently, RAID 5 for programs, my download folder, and other temporary folders, and then RAID 0 for swap and a paging file (I dual-boot). The add-in cards I've seen, as well as the one I've used in the past, offer this, and I assume Linux software RAID does too.

As for capacity, this sort of rules out RAID 1 (and therefore 10), since it has 50% efficiency (see the quick arithmetic below). My problem is that I'm a pack rat and keep everything on my system. I also have offline backups, of course, but I like to keep things on hand. As for mobility, I like the idea of being able to take my storage system and move it elsewhere if needed. I imagine NAS is king in this category, but I know very little about running an OS and programs from a NAS.
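The quick arithmetic behind that 50% figure, assuming eight 2TB drives (the drive size is just an example, not what I actually have):

    n=8; size_tb=2
    echo "RAID 10: $(( n / 2 * size_tb )) TB usable"    # mirrored pairs: 50% efficiency
    echo "RAID 5 : $(( (n - 1) * size_tb )) TB usable"  # one drive's worth of parity
    echo "RAID 6 : $(( (n - 2) * size_tb )) TB usable"  # two drives' worth of parity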

As for why I'm hung up on RAID 6 over RAID 5, it has to do with the fact that RAID 5 only offers a single symbol (block) of redundancy per data set (stripe). My issue is what happens during reconstruction if there is an unrecoverable read error. This is unlikely on my system, since for the foreseeable future I'll be using drives rated at about 1 URE per 10^15 bits read and I just don't have that much data at the moment, but the chances are non-zero, and the amount of data will only grow. Furthermore, the idea of an undetected write error troubles me: in that case it's not possible to decide which symbol is correct -- the redundant symbol or the original. That's not to mention the (unlikely) case of an incorrectly computed XOR symbol. In general, the chances of a write hole are greater with RAID 5 and 1 due to having only one level of redundancy.
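To put a rough number on the rebuild worry -- assuming the usual 1-URE-per-10^15-bits spec and, purely for illustration, 8TB of data read during a reconstruction:

    awk 'BEGIN {
      tb   = 8                                # data read during a rebuild, in TB (assumed)
      bits = tb * 8e12                        # TB -> bits
      p    = 1 - exp(bits * log(1 - 1e-15))   # P(at least one URE over the whole read)
      printf "P(>=1 URE over %g TB read) = %.1f%%\n", tb, p * 100
    }'

which comes out to roughly 6% -- small, but not something I want to bet the array on.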

I don't know if that answers all of your questions or not.

There is also the fact that I simply want a dedicated card or subsystem independent of the mainboard/CPU.

My problem with pure software RAID is that I dual-boot. This, of course, can be worked around by using a JBOD card and making two separate arrays -- one for Windows and one for Linux -- then using each OS's software RAID. Still, I like being able to test for bad blocks without having to break up the RAID, which all the mid-range add-in cards can do (or so the sales reps tell me). That said, software RAID does have two big advantages over hardware RAID, aside from cost. First, it has file system awareness and so can use a copy-on-write scheme to avoid write holes. Second, it doesn't introduce any additional points of failure, or at least none that I can see.
 
For dual-boot, you can either run everything as NTFS, or install a driver for extfs in Windows.... but both options are terrible. A third option, which I would recommend, is dropping the hard disks in a dedicated server, or NAS, then accessing via NFS or Samba. This would of course be slower than local access, but based on the list of activities you want to do, gigabit speeds should be fine.
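A rough sketch of what that server-side sharing looks like, assuming the array is mounted at /tank/data and the LAN is 192.168.1.0/24 (both made up) -- NFS for the Linux install, Samba for the Windows one:

    # NFS export for the Linux side
    echo '/tank/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra                                   # reload the export table
    # Samba share for the Windows side
    printf '[data]\n   path = /tank/data\n   read only = no\n' >> /etc/samba/smb.conf
    systemctl restart smb nfs-server               # exact service names vary by distro (assumed)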

As far as RAID levels, it sounds like you have put a lot of thought into this already -- however, based on the activities you listed (coding, compiling, typing stuff up in LaTeX, gaming, and watching movies), none of these are particularly IO intensive. So you won't see a noticeable difference in speed between the RAID levels for those activities, with the possible exception of the swap/paging file -- for that, you can just use an SSD. You should be able to use RAID 5 everywhere (though don't go throwing 20 disks into a single RAID 5 -- run separate arrays to limit risk).
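If the swap does end up on its own SSD partition rather than on the array, the setup is trivial (partition name is made up):

    mkswap /dev/sdg2                                       # format the partition as swap
    swapon /dev/sdg2                                       # enable it now
    echo '/dev/sdg2 none swap defaults 0 0' >> /etc/fstab  # and keep it across reboots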

I work in a 5 9's environment; we typically use RAID 1 for the OS and RAID 5 for data, except on database servers or other high-IO situations, where we use RAID 10 instead. Granted, we have no single-server setups -- everything is in redundant pairs or clusters, usually geo-redundant on top of that. I have experienced many, many, many disk failures, but I have never seen two disks fail at the same time. The closest I had was one disk failed and one SMART pre-fail warning in the same array. That was a nail-biter, but it turned out fine. More importantly, none of my friends or colleagues have ever had that situation either.

It sounds like you are worried about corruption and bitrot and similar. RAID is not a solution to these. The purpose of RAID is only to provide fault tolerance -- a system using RAID should be able to tolerate the failure of a hardware component without significant operating impact. This means that a disk will fail, but the server is still on and working.

RAID does not protect against corruption, bitrot, deletion, faulty writes, etc. Backup does. Backup is the only thing protecting you against these, and other edge cases you are talking about. You don't need RAID6, you need a good backup of all that data.
 
I agree. One of my mantras is "RAID is not a backup!" I didn't post that here, however, because the OP said in his first post that he didn't need RAID, he just wanted it. RAID does have a certain amount of Geek appeal in addition to many other perfectly valid reasons for having it. However, RAID (other than 0) is not intended to protect against all causes of data loss—as you said, only separate backups can do that—but, instead, is intended to guarantee continuity of operation should one HDD (or more, depending on the RAID being used) fail.
 
ZFS checksums data as it is written and verifies it on every read, while hardware RAID runs scheduled checks for errors. I'd rather have a file system that checks automatically, as opposed to one that checks sporadically.
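To make that concrete -- errors ZFS catches during normal reads show up in the pool status on their own, and a full sweep is a single command (pool name is an assumption):

    zpool status -v tank   # the CKSUM column counts errors caught during normal I/O
    zpool scrub tank       # full sweep: re-read every block and verify its checksum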

With hardware RAID, you're going to spend money on a good RAID card that could be better spent somewhere else. But it's your money, not mine.
 