
Article on RAID0 for SSDs

I ran my 2 Agi4 256GB drives in a RAID 0 setup for a month or two, then destroyed the array to run them as two separate 256GB SSDs. I prefer to have my data safe and split my work across two drives rather than having e-peen numbers to show off that give nothing in day-to-day use. I didn't even notice the "loss" in performance when going back to normal mode vs RAID 0. I had ~825MB/s numbers in R0 vs ~425MB/s in single mode with these cheap drives.
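Those ~825 vs ~425 MB/s numbers line up with the naive striping model: sequential throughput scales with the number of drives, while the small random reads that dominate day-to-day use still land on a single drive. A back-of-the-envelope sketch (function names are made up for illustration, not from any tool):

```python
def raid0_seq_mbs(per_drive_mbs, n_drives, efficiency=1.0):
    """Ideal RAID 0 sequential throughput: a large read is striped
    across all member drives, so they stream in parallel."""
    return per_drive_mbs * n_drives * efficiency

def raid0_rand_latency_ms(per_drive_latency_ms):
    """A small random read lands on exactly one stripe, i.e. one drive,
    so latency (what desktop use actually feels) doesn't improve."""
    return per_drive_latency_ms

# ~425 MB/s per drive, two drives: the ideal is ~850 MB/s,
# close to the ~825 MB/s measured above (controller overhead eats the rest).
print(raid0_seq_mbs(425, 2))        # 850.0
print(raid0_rand_latency_ms(0.1))   # 0.1
```

Which is exactly why the benchmark bar doubles but launching apps feels the same.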
 
LOL, Janus! No doubt...! If I had room. The sig is just about obnoxious at this point.

EDIT: Trimmed and done. :)
 
Well now I'm wondering what the performance difference would look like if you went with RAID 1 or 10 (1+0). I'm guessing it would be the same in that benchmarks would look great, but no major difference in real world application (except in the redundancy department).
 
Well now I'm wondering what the performance difference would look like if you went with RAID 1 or 10 (1+0). I'm guessing it would be the same in that benchmarks would look great, but no major difference in real world application (except in the redundancy department).
You forgot the much lighter cow skin monetary retaining device. :beer:
 
I admit I ran my Vertex 3s in RAID 0 initially, but after seeing how many firmware issues were being reported... I quickly split them up. Thankfully they have never failed me, and they house all my non-Steam games. The M4 takes care of the rest.

It's too bad. Wouldn't it be sweet if we could be booting Windows and loading into all of our games near-instantly? SSD sales would skyrocket, but so would failures for those unfortunate enough to have shoddy firmware.
 
Well now I'm wondering what the performance difference would look like if you went with RAID 1 or 10 (1+0). I'm guessing it would be the same in that benchmarks would look great, but no major difference in real world application (except in the redundancy department).
RAID 1 would show about the same performance as a single drive, since the second drive is just a mirror of the first (there's no striping to combine throughput).
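For the RAID 1 / 10 question, the textbook idealization (ignoring controller smarts like read load-balancing across mirrors, which some implementations do) works out like this; the function and numbers are illustrative, not measurements:

```python
def ideal_throughput(level, n_drives, per_drive_mbs):
    """Textbook best-case sequential (read_MBs, write_MBs).
    Real controllers vary; this is the naive model only."""
    if level == "raid0":                 # pure striping: everything scales
        return n_drives * per_drive_mbs, n_drives * per_drive_mbs
    if level == "raid1":                 # mirror: every write goes to all copies
        return per_drive_mbs, per_drive_mbs
    if level == "raid10":                # stripe of mirrors: n/2 stripes
        stripes = n_drives // 2
        return stripes * per_drive_mbs, stripes * per_drive_mbs
    raise ValueError(level)

for level, n in [("raid0", 2), ("raid1", 2), ("raid10", 4)]:
    print(level, ideal_throughput(level, n, 425))
```

So RAID 10 benchmarks like a 2-drive RAID 0 while adding the redundancy, at the cost of doubling the drive count.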
 
It really depends on what performance envelope you'll be pushing, system use, etc. In some cases RAID 0 is applicable, but for most desktop systems it's probably not needed. I think they summed it up pretty well.
 
Great read. I have RAID 0 on my current laptop; it cost me a pretty penny and it was my first venture into SSDs. I was amazed at the performance and thought it was the R0, but in fact it was just HDD vs. SSD.

I was considering going 2x 840s in R0 but... not now. Editing my sig :)
 
RAID 0 on SSDs looks much better on high-end RAID controllers, but then part of that performance is just caching.
I wonder how it would work if you could use additional caching on a single SSD. I have no additional caching device to check that right now, and Intel's technology won't let me set an SSD as a caching drive for another SSD.
I still have to check one day how a hybrid drive will work with an additional SSD for cache.
 
Exactly...

Put it on a proper controller and redo the test. Caching shouldn't be discounted.

While I appreciate the ability to use some onboard HW, it's nowhere near as capable as a dedicated card. I haven't used the onboard stuff for my main PC in years, and now my secondary PC is using my old controller... staying away from the onboard stuff.
 
I posted this in another thread but it was quickly shot down.

Still, it's interesting to note that the Revo PCIe SSDs actually use two SSDs configured in RAID 0 with an onboard controller to get those amazing read and write speeds. Of course, the onboard controller is designed to minimize the inherent latency issues of RAID 0 and virtually eliminates the negative problems associated with it, so you can't really compare it to a standard RAID 0 config.

That being said, I know it doesn't apply to a standard controller on a motherboard, but it still shows the brute-force potential that RAID 0 has when you address all of the problem areas.
 
That's true, although you still have to sit through the controller's boot ROM, which adds a few extra seconds.
 
The Revo still has random transfers like a single SSD, and most Revos use pretty slow Silicon Image controllers without any additional options to set. There's only an option to set RAID 0/1 and maybe 1-2 more.
Intel or even AMD controllers that you can find on the new boards perform better than most cheaper PCIe controllers (counting HBAs and everything without built-in cache). The main reason is that these integrated controllers use only the CPU for calculations, and no matter what CPU you choose, it will be faster than the one on the RAID card. The other thing is that you can set the write-back cache option without any problems, while controllers without cache usually have it locked, which means all random transfers will be much lower.
Integrated controllers are fine for RAID 0, 1 or even 10, but for RAID 5/6/50 PCIe RAID cards are almost always better, and for those RAID levels you need cache or performance will be low. In general you shouldn't set any RAID other than 0/1/10 on SSDs, but that's another matter.
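The "RAID 5/6 needs cache" point comes straight from how parity works: a small write has to read the old data and old parity before it can write both back (the read-modify-write penalty), so without cache to absorb that, random writes crawl. The XOR itself is trivial; here's a sketch, not any controller's actual code:

```python
def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Parity over a 3-data-drive RAID 5 stripe.
d = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = xor_blocks(xor_blocks(d[0], d[1]), d[2])

# Recovery: a lost block is the XOR of parity with the survivors.
rebuilt = xor_blocks(xor_blocks(parity, d[0]), d[2])
print(rebuilt == d[1])                     # True

# Small-write penalty: updating d[1] alone still needs the old data and
# old parity (two reads) to compute the new parity (then two writes).
new_d1 = b"\x77\x88"
new_parity = xor_blocks(xor_blocks(parity, d[1]), new_d1)
full_recompute = xor_blocks(xor_blocks(d[0], new_d1), d[2])
print(new_parity == full_recompute)        # True
```

Four I/Os per small write instead of one is why a cacheless parity array feels so much slower than the drives underneath it.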
 
Revo still has random transfers like single SSD and most Revo are using pretty slow Silicon controllers
True, but a Revo will still kick *** and offers ridiculously fast large-file transfers. It has its downfalls of course, and doesn't like to boot off some motherboards, but they are improving.
 
Holy resurrection posts!
1) As stated in post #3, it is in Joe's sig line.
2) Let's do this with NVMe drives. Come on.

I mean, LTT has that $1 million machine running all NVMe drives. What could I do with 2 x 1TB NVMe drives at $100/ea?
 
Oh it's been done. I think @Woomack did some testing on it.

That said, I wish there were a reason to do so with these PCIe M.2 drives! It's all about showing off, though!
 