
RAID 0 with 4 SSDs


If I RAID 0 four Vertex 2 SSDs, will I get four times the speed, making it over 1 GB/s?
 
Nope, I think it goes:

2 drives = 100% increase from 1 drive
3 drives = 150% increase from 1 drive
4 drives = 175% increase from 1 drive
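Taking those rules of thumb at face value, here's what they'd translate to in absolute numbers - a quick Python sketch where the ~285 MB/s single-drive read is my own ballpark for a Vertex 2, not a benchmark:

# Translate the quoted rule-of-thumb percentages into absolute throughput.
single_speed = 285  # MB/s sequential read for one drive (assumed, not measured)
increase = {1: 0.00, 2: 1.00, 3: 1.50, 4: 1.75}  # the figures quoted above

for n, pct in increase.items():
    print(f"{n} drive(s): ~{single_speed * (1 + pct):.0f} MB/s")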
 
I'm pretty sure you won't get a 100% increase by adding a second drive.

I would guess maybe an 80% increase max. Like SLI, you won't get a 100% performance increase.
Just what I think :)
 
Theoretically, you'll get up to (key words!) 4x the read speed of one drive. However, that depends on nothing else in the system becoming the bottleneck.

Note that RAID 0 only increases read and write throughput, not access times, and it never scales exactly to n times the single-drive speed for n drives.
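For anyone wondering why throughput scales but access times don't: RAID 0 just rotates fixed-size stripes across the drives. A minimal sketch of that mapping (the 128 KiB stripe size and the 4-drive count are illustration values, not anything from this thread):

# Minimal sketch of the RAID 0 logical-to-physical mapping.
stripe_size = 128 * 1024  # 128 KiB stripe, a common default (assumed)
n_drives = 4

def locate(logical_offset):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe = logical_offset // stripe_size
    drive = stripe % n_drives  # stripes rotate across the drives
    local = (stripe // n_drives) * stripe_size + logical_offset % stripe_size
    return drive, local

# A big sequential read touches every drive, so they work in parallel:
for off in range(0, 4 * stripe_size, stripe_size):
    print(f"logical offset {off:>7} -> drive {locate(off)[0]}")

# But any one small read lands on exactly one drive, so access time
# (latency) stays that of a single drive - striping doesn't divide it.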
 
I have:

Rampage III
980X

Does that factor into the equation?
 
I believe RAID add-on cards are also typically faster than the motherboard's built-in chipset RAID.
 
With 2 Vertex 2s, you'll already hit a bottleneck on the Intel ICH10R (on the Rampage III).

So to take advantage of 4 SSDs, you'll definitely need a RAID card, at least until the SSDs support SATA III...
 
All RAID cards I know of will be even slower than the ICH10R because the processor on the RAID card isn't fast enough to keep up with the IOPS of several SSDs.
 
I seriously doubt you will max out the ICH10R's IOPS on an end-user desktop PC, IMNSHO. If you are the odd-ball power user who actually needs enterprise-level IOPS, then you might have some concerns - but that is probably 0.0001% of the users HERE, and 0.00000001% of all PC users (made up on the spot - but likely close to accurate).

What are your typical usage scenarios that require 4x SSDs' worth of bandwidth? 2x X25-Ms is overkill for my desktop - I just needed more capacity :)

My pair of X25-M G1s in RAID-0 scaled almost linearly (x2) on the ICH10R with regard to raw bandwidth (as I'm NOT going to hit an IOPS-bound bottleneck as an end user).

The ICH10R is hard to beat as an SSD RAID-0 controller...

:cool:
 
Well, I read this thread where the guy shows 4 SSDs at only ~500 MB/s, and when he took one away he got ~628 MB/s.

That would seem to confirm the ~650 MB/s max with the ICH10R, unless the ICH10R will do a little better with a faster CPU and/or motherboard. He was using an E760 Classified with an i7 920. What do you guys think? Has anyone seen 4 SSDs go higher on the ICH10R?
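Flat-lining like that is exactly what a saturated controller looks like. A crude min() model, with both numbers assumed rather than measured (and note it doesn't explain why four drives actually came in below three - presumably extra controller overhead):

# Crude "controller ceiling" model: throughput = min(n * drive_speed, cap).
drive_speed = 250     # MB/s per SSD (assumed)
controller_cap = 650  # MB/s, the ICH10R ceiling discussed above (assumed)

for n in (1, 2, 3, 4):
    print(f"{n} drive(s): ~{min(n * drive_speed, controller_cap)} MB/s")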
 
If the ICH10R is maxed out, then obviously the CPU or board would make no difference, and that sounds about right for the ICH10R's SATA transfer ceiling.

Areca and LSI make good RAID controllers. Their better models can handle the data throughput of 4 SSDs in RAID 0, but they will cost you an arm and a leg and then some.
You want to look for a RAID card with an Intel IOP34x processor. The IOP341 is the lowest in that series and probably the most affordable.


*edit*
some cards with IOP348
http://www.newegg.com/Product/Produ...ption=iop348&bop=And&Order=PRICE&PageSize=100

some cards with IOP341
http://www.newegg.com/Product/Produ...ption=iop341&bop=And&Order=PRICE&PageSize=100
 
The LSI cards are good, especially the new 9260, but the 9211 is good too.
 
As PC Per's Allyn Malventano stated in their last episode - even the Intel IOP-based hardware RAID cards (like my ARC1222/ARC1220/ARC1210) have too much overhead to keep up with more than 2 SSDs' worth of IOPS. Lower-overhead controllers like the ICH10R and the Silicon Image stuff will actually get you better IOPS than the $500 Areca cards!

So - if IOPS are your primary concern (mainly tiny files, as in database-server-type applications) - the ICH10R might be your best bet. The Areca cards will allow more bandwidth, as they have a fatter pipe (PCIe x8) - but small files will still be bound by the controller's IOPS performance. Larger files mean fewer IOPS by their very nature (you can saturate the interface bandwidth with one large 100GB file). Smaller files = more IOPS to saturate the available interface bandwidth - and the controller chokes, as the overhead is just insane.
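To put numbers on that: the IOPS needed to saturate a link is just bandwidth divided by transfer size, so small transfers demand enormous IOPS to fill the same pipe. A quick sketch (the ~660 MB/s link figure is an assumed ballpark, not a spec):

# IOPS required to saturate a link at various transfer sizes.
link_mb_per_s = 660  # assumed ballpark for the controller's usable bandwidth

for size_kb in (4, 64, 1024):
    iops = link_mb_per_s * 1024 / size_kb
    print(f"{size_kb:>5} KB transfers: {iops:>9,.0f} IOPS to saturate the link")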

Do you really need enterprise-level IOPS performance? Are you running a heavy-duty database server where small files are accessed by dozens of people simultaneously? Or do you just want to move regular-sized files (~1MB or larger) at blazing speeds and have some silly fast benches to post?

I'd say a pair of X25-Ms in RAID-0 on an ICH10R is all ANYONE will need, save for the few extreme database-server-type applications that actually need that level of IOPS. I'm hoping the X25-M G3s will have un-crippled writes by the end of the year - and then we can expect more symmetrical read/write performance from them! :clap:

:cool:
 
the Rampage III has 4 PCIe lanes

No, it has four x16 PCIe slots; only 32 lanes can be used across those 4 slots, in these configurations:

x16
x16/x16
x16/x8/x8
x8/x8/x8/x8

Now there is an additional x4 PCIe slot that I believe adds another 4 lanes, bringing the total from the PCIe slots to 36. Someone correct me if I'm wrong - or maybe that's what you meant by saying the Rampage III has 4 PCIe lanes, since you already knew I was using up 32 lanes with my two cards. So if I got a RAID card, it would have to be an x4 PCIe card. Are any of the cards you guys have mentioned x4 and able to handle 4 SSDs? Not that that would be the way to go, but I would like to know.
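For what it's worth, the bandwidth math for an x4 card (usable per-lane rates are roughly 250 MB/s for PCIe 1.x and 500 MB/s for PCIe 2.0; the ~285 MB/s per-SSD read figure is my assumption):

# Can a PCIe x4 slot feed four SSDs' worth of sequential reads?
ssd_read = 285  # MB/s per drive (assumed)
needed = 4 * ssd_read

for gen, per_lane in (("PCIe 1.x", 250), ("PCIe 2.0", 500)):
    slot_bw = 4 * per_lane  # x4 slot
    verdict = "enough" if slot_bw >= needed else "a bottleneck"
    print(f"{gen} x4: ~{slot_bw} MB/s vs ~{needed} MB/s needed -> {verdict}")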
 
I initially did motherboard RAID and didn't notice any difference in performance, which matters to me more than throwing numbers around. But hey, if the ICH10R is good... then it's good.
 