
Asus A8N-VM CSM nVidia RAID Controller


yorkman - New Member - Joined Jan 10, 2008
I've owned this mobo for a few years now and have been using it with RAID 0 (two 250GB Seagate SATA HDDs) ever since.

Today, I bought two more 250GB Seagate SATA HDDs and created a 2nd array for scheduled backups using Acronis True Image. I made the 2nd array RAID 1, the drives mirroring each other. That gives me 500GB total in the 1st array and 250GB with fault tolerance in the 2nd; the 500GB obviously has no fault tolerance. I figured this way I still get very good read performance (no write performance gain or loss) and have backups of the most important 250GB out of the total 500GB of data in the 1st (striped) array. I also get to use the extra 250GB of capacity instead of losing both new drives to redundancy, as I would if I used them in RAID 5 instead... not to mention RAID 5's major loss of write speed.
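For reference, the capacity trade-offs here can be sanity-checked with the textbook RAID formulas. A minimal Python sketch (drive counts and sizes taken from the setup above; the figures are idealized, not benchmarks):

```python
# Back-of-the-envelope RAID math for the setups discussed above.
# Textbook formulas only; real controllers add overhead, so treat
# the output as idealized capacity/redundancy figures, not benchmarks.

def raid_usable_gb(level: int, drives: int, size_gb: int) -> int:
    """Usable capacity of an array of identical drives."""
    if level == 0:    # striping: full capacity, no redundancy
        return drives * size_gb
    if level == 1:    # mirroring: one drive's worth survives a failure
        return size_gb
    if level == 5:    # striping + parity: one drive's worth lost to parity
        return (drives - 1) * size_gb
    raise ValueError(f"unsupported RAID level: {level}")

# The setup above: 2 x 250GB in RAID 0 plus 2 x 250GB in RAID 1.
print(raid_usable_gb(0, 2, 250))   # 500 GB, fast, no fault tolerance
print(raid_usable_gb(1, 2, 250))   # 250 GB, fault tolerant
# The alternative considered above: all four drives in RAID 5.
print(raid_usable_gb(5, 4, 250))   # 750 GB, but slow parity writes on fakeraid
```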

My question is... if the RAID controller in my motherboard (A8N-VM CSM) fails, or the motherboard dies and I buy a different one, will I lose the data on the 2nd array (the mirror), or will I still be able to access it by connecting one (or both) of its drives directly (without RAID) to the new motherboard, which has a different RAID controller? I realize I'll lose the 1st array, but if the data on the 2nd array survives, I'd still have access to the Acronis backups, which reside on both drives in that array.

In other words, if the RAID controller fails and I take one or both of those hard drives to another desktop PC, RAID or no RAID, will I still be able to read the entire 250GB of data on either drive? If not, how can I protect myself from this without spending more cash, sacrificing read or write performance, or losing 100% of the storage capacity?

When $$ permits, I'd like to buy a separate RAID controller, but for now I'm stuck with what I have. You're welcome to suggest better ways to maximize speed, lose only 50% of storage capacity to redundancy, and still have a backup in case one drive fails or loses its data to a virus. The RAID gives me hardware-failure protection only, but with Acronis True Image's backups on the redundant array, the data on that array is quite safe as long as both drives don't die and a virus doesn't destroy THAT data too.

Thanks.
 
Yeah, RAID 1 is so close to a standard single drive that you can generally take a drive out of a RAID 1 array and have it recognized in a different machine as a standard drive. However, RAID is not a standard, and there's no guarantee that one vendor's implementation matches another's, so there's a small chance that nVIDIA's RAID 1 uses some proprietary, compatibility-breaking encoding.

The best way to learn about these things is to test failure modes at the beginning, while you can still afford to lose data and reconfigure/rebuild.

The second fallback (if not the first) would be to use another nVIDIA RAID-enabled chipset. Each vendor usually provides backward compatibility across its RAID implementations, so an nForce 430 array should be readable by, for example, an nForce 630 chipset. To use this, you enable RAID in the BIOS but do not define the array there -- you let the BIOS auto-detect it. I've done this with nForce 3 and nForce 430, but again, there are no general guarantees that you can do this from any chipset to any other. Note also that RAID levels/features sometimes come into play -- e.g. nForce 3 doesn't support RAID 5, so you couldn't go backwards from an nForce 430 RAID 5 array to nForce 3.

Another fallback is plain external backups. This could take the form of a drive in an external enclosure that is off and disconnected most of the time, active only during backups. This setup also handles a couple of additional failure scenarios -- e.g. malware wiping or corrupting all connected drives, or a massive PSU failure damaging them all at once. The external drive would have a separate power supply and, ideally, be disconnected and powered off most of the time, avoiding such failures.
 
Well, I just bought a new mobo, a P5E-VM. I enabled RAID but couldn't see or access any data, and had to create new arrays, which obviously destroyed the data. It's OK, though, as I still have it on the other two HDDs. Of course, I won't be able to access that data unless I install the old mobo in another system with those drives. So it appears that if the mobo dies and I buy a new one with a different RAID chipset (nVidia -> Intel), my data is lost.

You may still be right. Perhaps if I had bought a mobo with an nVidia chipset of a different model, it might have worked... but I still don't believe it would. I can't see how you could access data on RAID drives if you haven't re-created the array(s).

On another note: I saw someone post benchmarks of his system. He had a Q6600 CPU and four 500GB HDDs set up in RAID 0 and RAID 5, on an Abit mobo with the same RAID controller I have now (ICH9R). I have 4GB of DDR2-800 RAM (2x2GB) and am running XP64. How could he be getting over 3GB/s transfer speeds in HD Tach while I only get 315MB/s?! He posted the screenshots, so it's hard not to believe him. Granted, I'm using only two 250GB HDDs in RAID 0, but still... I was expecting much more than 315MB/s. With the PC I just replaced and its nVidia controller, I was getting 215MB/s, so only a 100MB/s improvement?!

I'm getting the feeling he somehow forged those screenshots. That's just too big a difference, given that I now have hardware similar to his.

 
So it appears that if the mobo dies and I buy a new one with a different RAID chipset (nVidia -> Intel), my data is lost.

RAID is not a standard, and cross-vendor incompatibility is the general rule.

You may still be right. Perhaps if I had bought a mobo with an nVidia chipset of a different model, it might have worked... but I still don't believe it would. I can't see how you could access data on RAID drives if you haven't re-created the array(s).

The RAID configuration is stored on the drives themselves. When you enable RAID in the BIOS but don't define the array there, the controller should read the drives, find the configuration data, and reconstruct the array from it. Of course, when you're dealing with incompatible implementations, this doesn't work. But it does work, as a rule, within a vendor's chipsets under certain conditions. As I said, I've done it between nForce 430 and nForce 3, and there are plenty of reports online of other people doing this successfully, especially between different Intel implementations.

If this didn't work at all, you'd lose your data with any motherboard swap, even an identical one, or even a CMOS reset.

When you define an array in the BIOS, you're typically creating a new array, wiping any old configuration data that might have been present. This is why, as a rule of thumb, you shouldn't re-define the array but should leave the system to auto-detect it once RAID is enabled.
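To make the "stored on the drives themselves" point concrete, here is a minimal Python sketch that scans the tail of a disk image for two well-known fakeraid metadata signatures. The signature strings are the ones Linux's dmraid/mdadm look for; exact offsets and formats vary by vendor and version, so treat this as a demonstration of the idea, not a recovery tool.

```python
# Illustrative only: fakeraid controllers keep the array configuration in a
# metadata block near the end of each member disk, which is what lets a
# compatible BIOS auto-detect an existing array. This scans the tail of a
# disk image for two well-known vendor signatures (the strings Linux's
# dmraid/mdadm look for). Offsets and formats vary by vendor and version.
import sys

SIGNATURES = {
    b"NVIDIA": "nVIDIA (nvraid) metadata",
    b"Intel Raid ISM Cfg Sig. ": "Intel Matrix/isw metadata",
}

TAIL_BYTES = 1024 * 1024  # metadata normally sits in the last sectors

def scan_tail(path: str) -> None:
    with open(path, "rb") as disk:
        disk.seek(0, 2)                       # jump to end of image/device
        size = disk.tell()
        disk.seek(max(0, size - TAIL_BYTES))
        tail = disk.read()
    for sig, name in SIGNATURES.items():
        pos = tail.find(sig)
        if pos != -1:
            print(f"{name} found at byte offset {size - len(tail) + pos}")

if __name__ == "__main__":
    scan_tail(sys.argv[1])  # pass a dd image of one RAID member disk
```

This also explains the failure above: an Intel ICH9R controller has no reason to recognize nVIDIA's metadata format, so it sees the drives as unconfigured.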

On another note: I saw someone post benchmarks of his system. He had a Q6600 CPU and four 500GB HDDs set up in RAID 0 and RAID 5, on an Abit mobo with the same RAID controller I have now (ICH9R). I have 4GB of DDR2-800 RAM (2x2GB) and am running XP64. How could he be getting over 3GB/s transfer speeds in HD Tach while I only get 315MB/s?! He posted the screenshots, so it's hard not to believe him. Granted, I'm using only two 250GB HDDs in RAID 0, but still... I was expecting much more than 315MB/s. With the PC I just replaced and its nVidia controller, I was getting 215MB/s, so only a 100MB/s improvement?!

Those have to be cached or buffer/burst results, which are not very meaningful. Cache/burst numbers depend on the speed of the rest of the system -- its memory access speed, drive cache size, CPU cache size, etc. -- and have very little to do with the speed of the RAID array itself. For drive/array performance, sustained speed is what matters, and sustained figures won't come anywhere near those inflated numbers.
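If you want to see the cache effect for yourself, here is a rough Python sketch (Linux-oriented; the file name and size are arbitrary test values). It reads the same file twice: once after asking the kernel to evict it from the page cache, and once straight from RAM. The second figure behaves like HD Tach's burst number.

```python
# Rough demo of burst/cache vs. sustained reads. The first read is timed
# after asking the kernel to evict the file from the page cache, so it is
# disk-bound; the second read comes straight from RAM and is far faster,
# while saying almost nothing about the drive or array underneath.
import os
import time

PATH = "testfile.bin"
SIZE = 256 * 1024 * 1024  # 256 MiB of test data

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))
    f.flush()
    os.fsync(f.fileno())   # flush to disk so the cache can really be dropped

def drop_cache(path: str) -> None:
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

def timed_read(label: str) -> None:
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(1 << 20):   # read in 1 MiB chunks
            pass
    mb_s = SIZE / (time.perf_counter() - start) / 1e6
    print(f"{label}: {mb_s:.0f} MB/s")

drop_cache(PATH)
timed_read("uncached read (disk-bound, like sustained)")
timed_read("cached read (from RAM, like burst)")
os.remove(PATH)
```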
 