Sorry for the late reply. This has been quite the week.
Before I go on, I haven't explored NAS as an alternative to keeping drives inside my computer, nor am I opposed to it, although I still favor the idea of an overkill add-in card.
What is your goal for the system? What type of capacities do you need? I'm just trying to understand because there are probably much cheaper ways of accomplishing your goal.
RAID 6 is likely overkill. RAID 1 can be run without performance issues on standard hardware.
Do you need high speed or large capacity? Expandability?
I just think other options will suit you better, i.e. either going with a straight-up dedicated NAS system, or going with large 4TB HDDs in RAID 1... or even RAID 10/01, which your motherboard can handle without much issue.
The goal of my system is to be my multi-function computing system that I use for everything. Most of what I do is coding, compiling (mostly as a requirement for my OS), typing stuff up in LaTeX, gaming when I can, and watching movies.
As I said at the beginning of my post, and as you can see from what I use my computer for, I don't need a dedicated RAID card. My money would be better spent on an SSD for caching, with RAID 5 via some sort of fake RAID. I wouldn't use Intel's (via my mainboard), as my system tends to hang randomly every now and then while it is in use (regardless of which OS is running), but full-on hardware RAID is overkill regardless of the level.
Everything will be run off the RAID.
That said, I am looking for a combination of six things: speed, redundancy, scalability, flexibility, capacity, and mobility.

For the first two, RAID 10 or 5 would be fine. Scalability is easy to accommodate so long as whatever I'm plugging these drives into has lots of ports and allows for RAID expansion; most add-in cards, as well as Linux software RAID (I don't know, or care, about Windows options), offer this.

By flexibility I mean being able to have multiple, different arrays on the same drives. My plan is RAID 6 for long-term storage that isn't accessed frequently, RAID 5 for programs, my download folder, and other temporary folders, and RAID 0 for swap and a paging file (I dual-boot). The add-in cards I've seen, as well as the one I've used in the past, offered this feature, and I assume Linux software RAID offers it too.

As for capacity, this more or less rules out RAID 1 (and therefore 10), as it has only 50% space efficiency. My problem is that I'm a pack rat and keep everything on my system. I have offline backups, of course, but I also like to keep things on hand.

As for mobility, I like the idea of being able to take my storage system and move it elsewhere if needed. I imagine NAS is king in this category, but I know very little about running an OS and programs from a NAS.
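For the record, the multiple-arrays-on-one-set-of-drives idea maps directly onto partition-based mdraid on Linux. A rough sketch of what I have in mind, assuming four drives with placeholder device names, one partition per array (run as root; this is an outline, not a tested recipe):

```shell
# Long-term storage: RAID 6 across the first partition of each drive
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[bcde]1
# Programs, downloads, temp folders: RAID 5 across the second partitions
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[bcde]2
# Swap / paging file: RAID 0 across the third partitions
mdadm --create /dev/md2 --level=0 --raid-devices=4 /dev/sd[bcde]3

# Later expansion: add a fifth drive and grow the RAID 6 onto it
mdadm --add /dev/md0 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5
```

The `--grow` step is the "RAID expansion" feature mentioned above; it reshapes the array in the background while it stays online.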
As for why I'm hung up on RAID 6 over RAID 5, it has to do with the fact that RAID 5 only offers a single symbol (block) of redundancy per data set (stripe). My worry is what happens during reconstruction in the event of an unrecoverable read error. This is unlikely on my system, since for the foreseeable future I'll be using drives with a URE rate of 1 in 10^15 bits, and I just don't have that much data at the moment, but the chances are non-zero, and the amount of data will only grow. Furthermore, the idea of an unidentified write error troubles me: in that case it's not possible to decide which symbol is correct -- the redundant symbol or the original. That's not to mention the case of an incorrectly computed XOR symbol, though that is unlikely. In general, the chances of a write hole are greater with RAID 5 and RAID 1 because they only have one level of redundancy.
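To put a rough number on the rebuild worry: the probability of hitting at least one URE while reading the surviving drives is approximately 1 - exp(-bits_read * rate) for small per-bit rates. A back-of-the-envelope calculation (the 4-drive/4TB array size is a made-up illustrative example, and the "enterprise" vs. "consumer" rates are the commonly quoted 10^-15 and 10^-14 figures):

```shell
# Rebuilding a 4-drive RAID 5 of 4 TB disks means reading ~12 TB
# (9.6e13 bits) from the three surviving drives -- illustrative numbers.
awk 'BEGIN {
  bits = 3 * 4e12 * 8
  # P(at least one URE) ~= 1 - exp(-bits * rate) for small per-bit rates
  printf "enterprise (1e-15/bit): %.1f%%\n", 100 * (1 - exp(-bits * 1e-15))
  printf "consumer   (1e-14/bit): %.1f%%\n", 100 * (1 - exp(-bits * 1e-14))
}'
```

Even at 1 in 10^15 the chance is a few percent per rebuild, which is exactly why a second parity symbol is appealing.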
I don't know if that answers all of your questions or not.
There is also the fact that I simply want a dedicated card or subsystem independent of the mainboard/CPU.
That sounds like a lot of fun. I was looking to do a project like that a year or two ago, but I changed my mind: instead of going for hardware RAID, I decided that software RAID was the winner for me. I now use both mdraid (standard Linux software RAID), for a RAID 1 of my two OS disks, and ZFS RAID-Z, for my main storage array.
I can tell you, I am glad I went this way. Much, much cheaper, and more flexible. Nowadays the only big advantage you get with hardware RAID is slightly better detection of hardware failures. Software RAID is limited to what the OS can detect, so there is a slightly higher risk of a failed disk causing problems before the OS notices the failure. The rest is all very similar; it just uses host CPU and memory resources, which are plentiful nowadays.
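Roughly, that setup boils down to two commands (device names here are placeholders, not my actual layout):

```shell
# Mirror the two OS disks with mdraid (RAID 1):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Single-parity RAID-Z pool for the main storage array:
zpool create tank raidz /dev/sdc /dev/sdd /dev/sde /dev/sdf
```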
I actually had an SSD installed as a read cache disk (L2ARC) in my ZFS pool, but found that it didn't actually make any difference... my server has 32GB of RAM, of which about 23GB is disk cache right now, so adding another 120GB of second-layer cache on top of that really didn't make an impact. I also played around with an SSD write cache (ZIL); that did make a measurable impact on write speeds, but the actual difference was only a few MB/sec, because my disks could already keep up fairly well with gigabit speeds.
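Both caches are cheap to experiment with, since ZFS lets you attach and detach them on a live pool. A sketch with placeholder device names:

```shell
# Add an SSD as a second-level read cache (L2ARC):
zpool add tank cache /dev/disk/by-id/ata-SOME-SSD
# Add an SSD as a separate intent-log device (ZIL/SLOG) for sync writes:
zpool add tank log /dev/disk/by-id/ata-OTHER-SSD
# If they don't help, both can be removed from the live pool:
zpool remove tank /dev/disk/by-id/ata-SOME-SSD
zpool remove tank /dev/disk/by-id/ata-OTHER-SSD
```

That's how I was able to test and then conclude neither was worth keeping.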
One thing I wanted (and still really want) to do is throw money into hotswap bays. When I was looking before, I really liked Addonics gear. They basically take a ground-up approach to NAS devices... you can buy the parts individually or a fully assembled NAS. The parts include a large variety of IO cards and hotswap bays (like this one:
http://addonics.com/products/aesn5da35-a.php, which fits in your computer). That opens up quite a few options, either store & power disks locally in your computer, or get a storage tower, fill it with disks, and connect it back to your computer using eSATA or iSCSI (more of a NAS approach).
My problem with pure software RAID is that I dual-boot.
This, of course, can be worked around by using a JBOD card, then making two separate arrays -- one for Windows and one for Linux -- and using each OS's software RAID. Still, I like being able to test for bad blocks without having to break up the array, which all the midrange add-in cards can do (or so the sales reps tell me). That said, software RAID does have two big advantages over hardware RAID, aside from cost. First, it can have file system awareness and so can use a copy-on-write scheme to avoid write holes. Second, it doesn't introduce any additional points of failure, or at least none that I can see.
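For what it's worth, mdraid can also run a consistency check without breaking up the array, through its sysfs interface. A sketch, assuming an array at /dev/md0 and root privileges:

```shell
# Kick off a non-destructive background check of the whole array:
echo check > /sys/block/md0/md/sync_action
# Watch progress:
cat /proc/mdstat
# After it finishes, a nonzero mismatch count means trouble:
cat /sys/block/md0/md/mismatch_cnt
# Debian-based distros also ship a helper (run monthly from cron):
/usr/share/mdadm/checkarray /dev/md0
```

So the "can't scrub without breaking the RAID" point really only applies to fake RAID, not to Linux software RAID.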