
Insane Raid Controller Performance - Areca RAID Controller Input Needed


I.M.O.G.

Glorious Leader
I'm looking to get into a serious overkill RAID0 storage setup for a very specific purpose. The only point is PCMark05; it won't be used for anything else. It's probably stupid, there are cheaper ways to get really fast storage, and it might be a bad idea... But I wanted to start a thread on it to get any useful input you guys might have on how to do what I'm doing the right way. I'm targeting first place in the world in PCMark05 with 8 CPU cores, then other core configurations as well. I have a few questions.

The model RAID controller I'm getting is the Areca 1882IX. It comes in various flavors, but the one I'm getting is upgradeable to 4GB onboard cache - that is key to the performance I need on the HDD general usage and HDD virus scan PCM05 subtests. Other than that, the only variable is the number of internal/external connectors. This will be in an open air benching rig, so internal or external makes no usability difference.

First question, the upgradeable 4GB onboard cache. Per the specs, it takes "One 240-pin DIMM socket for DDR3-1333 ECC single rank registered SDRAM module using x8 or x16 chip organization, upgrade from 1GB (default) to 4GB (ARC-1882ix-12/16/24)". I'm not sure what it means by x8 or x16 chip organization. Looking on Newegg, I don't see any ECC DDR3-1333 that appears to have only 8 or 16 chips on the DIMM - am I being an idiot, and what sort of RAM should I be getting to put this thing at 4GB? It is essential for my needs that I upgrade it from the default 1GB to 4GB.

Second question, these are the configurations I am considering purchasing. The only tangible difference is the number of internal SAS connections. Is there any functional/technical difference between the external and internal connections that I may not be aware of? The drives will be sitting loose on my workbench, no enclosure or anything. I'm starting off with only 3 SSDs, but may move to 6 or more in the future if performance scales:
3 internal, 1 external: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151112
4 internal, 1 external: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151109
6 internal, 1 external: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151114

Third question, which cables should I be getting to connect the SAS ports to the SATA SSDs? I'm targeting one drive per 6Gb/s port, as I expect that would be optimal? However, more drives may be added as necessary, in which case I want to be able to expand the RAID0 array to 6 drives or more - ideally with as many as possible on their own ports. Will doubling up drives per port affect performance? These are the cables I was considering getting, however they are expensive as crap. I need SFF-8087 for internal connections and SFF-8088 for external connections: http://www.newegg.com/Product/Product.aspx?Item=N82E16812200884

Are there less expensive cable options I am not finding?

FYI, I'm starting with three Vertex 3 Max IOPS drives. The initial outlay of cash will be something to the tune of $1500-2000 for the storage.
 
This RAM will work, I think: http://www.newegg.com/Product/Product.aspx?Item=N82E16820139141 It has 9 chips per side, but one of those per side is used for parity on ECC RAM, so it's really a 16-chip module not counting the parity chips.

The other questions I'm not so useful for. :)

*Edit* Turns out the RAID controller requires single rank RAM to take advantage of the full capacity, so this isn't quite right.
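
For what it's worth, here's a rough way to sanity-check the chip math - this assumes "x8" just means each DRAM chip has an 8-bit data interface (my understanding, not from the Areca manual) and that a 4GB single-rank module uses 4Gbit chips:

# ECC DIMMs have a 72-bit bus: 64 data bits + 8 ECC bits.
BUS_BITS = 72
ECC_BITS = 8
CHIP_WIDTH = 8            # "x8 organization": bits per DRAM chip (assumption)
CHIP_DENSITY_GBIT = 4     # assumed 4Gbit chips for a 4GB single-rank module

chips_per_rank = BUS_BITS // CHIP_WIDTH              # 9 chips per rank
data_chips = (BUS_BITS - ECC_BITS) // CHIP_WIDTH     # 8 of them carry data
capacity_gb = data_chips * CHIP_DENSITY_GBIT / 8     # 4.0 GB usable

print(chips_per_rank, capacity_gb)                   # -> 9 4.0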
 
That is a big help, thank you!

Essentially, I think each port is limited to 6Gb/s like a regular SATA III port, but I don't know how heavily a good SSD like the Max IOPS can saturate that - I'd think that surely with 4 drives on 1 SAS port there would be some kind of contention for bandwidth, which is why I'm considering the model that costs a few hundred bucks more to get 7 ports total.
 
Those are yummy RAID controllers. I'll follow this thread :)
There's probably no difference between internal and external SAS ports on the controller other than the connector. Each SAS connector is 4x 6Gb/s. The SFF-8088 connectors (external) only mate with cables that are expensive and connect to external JBODs or backplanes, so you want to connect to the SFF-8087 internal ports. The cable you mention is OK, but these look better: http://www.newegg.com/Product/Product.aspx?Item=N82E16816116098 or http://www.newegg.com/Product/Product.aspx?Item=N82E16816103196. You can connect multiple drives per SAS connector without bandwidth degradation if the controller is any good - the 4 ports are supposed to be completely independent and they are on the 3ware and LSI controllers I've worked with.
 
You can connect multiple drives per SAS connector without bandwidth degradation if the controller is any good - the 4 ports are supposed to be completely independent and they are on the 3ware and LSI controllers I've worked with.

Thank you for all your input. From what I've read in reviews, the controller is very good, as it's an improvement over the previous-generation 1880, which was also very good. Can you clarify the part I quoted? 4 drives on a single port/cable - will it perform just as well as 4 drives on 4 separate cables/ports? I'm a noob at this.

This makes a huge difference in price between the low end and high end model, but I don't want to compromise if it will cost performance. I'm at $1600 currently with the cheapest version of the controller, if I went with the biggest brother that would be an extra $400 on top.
 

Yes, the 4 PHYs on one 4x port don't share bandwidth, so it shouldn't matter if you connect 4 drives to the same 4x SAS port or to 4 different ports.
As long as there's no other difference between the controllers (I haven't read the specs), the smallest should give the same throughput as a bigger one.
 
So I understand what you're trying to get at: when you say 1 disk per 6Gb/s port, what are you thinking?

Taking an SFF-8087, you're not suggesting one per fanout, are you? I.e., only one drive on the four available ports? You're not even going to come close to hitting that 6Gb/s limit that way, and you're just going to degrade your potential.

This review should point you in the right direction if you're just looking for a big number.
 
Thanks, I had read that review as well as some comments by the author on XS. :)

To explain what I was thinking: I thought maybe it's better to put one Max IOPS per SFF-8087 - bandwidth, contention, whatever. If that were so, I may want to get the controller model with 6 internal ports and pick up 3 or 4 breakout cables to start - one for each drive I am starting with.

Basically, I don't know if that matters to peak performance, so I brought it up. A SATA III drive performs much better on a SATA III port than on a SATA II port - I haven't ever used SAS in a PC, so I don't know if a breakout cable is like putting four drives on one SATA III connection.

Hope that helps clarify. I just don't understand some of the particulars exactly, which is why I wanted to start this thread and look stupid - I knew you guys would fix me up. :D
 
Gotcha. No, is the simple answer. Each connection to a drive is a dedicated 6Gb/s connection. That port on the controller has four lanes, so each single port can handle 24Gb/s (about 3GB/s)...technically.

It's an x8 card, so the max it can ever do is 8GB/s bi-directionally, 4GB/s in one direction. You can load up that card, but you're going to be limited to the slot's bandwidth...technically. As you can see in that other article, they only hit 4GB/s with three cards. The big cache will help, but real world, I think you'll end up on par with or a bit better than their single-card tests. But only time will tell.

That said, not having researched this myself, take your disks and try it...1 across 4 ports or all 4 on 1, and see what the differences are. For what you're specifically doing, one may be better than the other.
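
Quick Python scribble of the budget we're talking about, using assumed round numbers (~600MB/s usable per 6Gb/s lane after encoding overhead, ~500MB/s sequential per Max IOPS drive - check your own drives):

LANES_PER_SFF8087 = 4
MBPS_PER_LANE = 600            # assumed usable payload of one 6Gb/s PHY after 8b/10b
DRIVE_MBPS = 500               # assumed sequential throughput of one Vertex 3 Max IOPS

port_mbps = LANES_PER_SFF8087 * MBPS_PER_LANE      # ~2400 MB/s per 4-lane SAS port
pcie2_x8_mbps = 8 * 500                            # PCIe 2.0 x8: ~4000 MB/s per direction
array_mbps = 3 * DRIVE_MBPS                        # 3 drives to start: ~1500 MB/s

print(port_mbps, pcie2_x8_mbps, array_mbps)        # -> 2400 4000 1500

On those numbers, even 4 drives hanging off one 4-lane port should stay under the per-port ceiling, and the x8 slot is the eventual wall - which matches what Railgun is saying.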
 
Cool, that is clear and simple - I didn't exactly get that out of the article. Given your input, I'll probably start with buying only 2 cables. I can get more down the road if needed. :)
 

Bandwidth sharing with multiplexing comes in one of two forms: FIS-based or command-based switching.

Command-based switching is the most common; writes happen to only one drive at a time, so it is slow and not recommended.

FIS-based switching splits the bandwidth up between the drives. Although this will in effect limit you with the Max IOPS, it is still the faster of the two forms; it works VERY well with 4 SATA III HDDs, as any single one will only use 1/4 of the bandwidth anyway.

You do not want to use either of these. Use one SAS port to one SATA drive when using Max IOPS SSDs.
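
A crude model of the difference, just to illustrate (the per-drive and per-lane numbers are assumptions, not measurements):

NUM_DRIVES = 4
DRIVE_MBPS = 500     # assumed per-drive sequential throughput
LINK_MBPS = 600      # assumed usable payload of a single 6Gb/s lane

# Command-based switching: only one drive talks at a time over the shared lane.
command_based = min(DRIVE_MBPS, LINK_MBPS)

# FIS-based switching: drives share the one lane concurrently, capped by the lane.
fis_based = min(NUM_DRIVES * DRIVE_MBPS, LINK_MBPS)

# One dedicated lane per drive (no multiplexing at all).
dedicated = NUM_DRIVES * min(DRIVE_MBPS, LINK_MBPS)

print(command_based, fis_based, dedicated)   # -> 500 600 2000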


Single SAS-to-SATA cables are available; a quick Google search puts them at about $9 a pop, and I am sure you can find them cheaper, especially if buying in bulk.

Also, with a RAID card you have to find out what processor is on it. Most RAID cards are limited in performance, whereas the Intel controller is not limited since it uses the CPU for storage processing. I think it was Dominick32 that did a test on this with early SSDs. I know the processors on RAID cards have improved, and being SATA 6Gbps capable, I would assume your card has a good one - always best to check it out though. The smaller the stripe, the harder the processor has to work. Also consider the minimum cell size on the SSD in question; that minimum size * 4 would be the ideal stripe size.

If you can't find the cell size of the NAND used, just do some experiments with 64K, 128K, and 256K stripes :)
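
Quick sketch of that arithmetic - the 8KB page size here is purely an assumption, check the NAND datasheet for the real number:

nand_page_kb = 8                          # assumed NAND page size
suggested_stripe_kb = nand_page_kb * 4    # "minimum size * 4" rule of thumb -> 32K
fallback_trials_kb = [64, 128, 256]       # stripes to benchmark if the page size is unknown
print(suggested_stripe_kb, fallback_trials_kb)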

I am sorry I do not have the time at the moment to look up the processor used on that card to see what the actual limitations are. Maybe when I get home tonight :)
 

Thanks, that is a big savings. :)




Thanks for the comments, very informative. IOPS are important for the results in PCM05, so this is very relevant to what I'm trying to do. Your comment about one SAS port per SATA drive directly contradicts what Railgun mentioned, though. Do either of you have a source or reference I could check, or can someone else who has done this confirm one way or the other? This is a big deal for me to get right, as it means the difference between getting 3 internal SAS connectors and 6, which means about $400 on the total price with the RAID controller and cables I purchase.

The processor on my RAID controller is a dual-core 800MHz RAID-on-chip beastie. The reviews I've read speak very highly of it. The card itself is the next generation of the 1880IX, which stevero is currently using to stomp all over the PCMark05 rankings - so I am 100% confident that the hardware on this card can deliver what I need it to. Also, it's only RAID0, so the processor won't be churning through parity or anything like that. So my only dilemma in this area is the physical connections/cabling, to ensure I don't limit IOPS/bandwidth/whatever by the way I hook the storage up to it.

If you, or anyone else, have any further comment or insight, I am eagerly listening. :) Thanks guys!
 
What Neuromancer mentions applies if you split the bandwidth of a SAS port with a port expander. We do that in our systems, running 12 or 16 magnetic drives on a 4x SAS connector. Maybe some cheapo RAIDs do that with all ports, but enterprise RAID controllers don't. Each of the 4 SAS PHYs on a port goes to a separate set of pins on the IO ASIC.
The processor doesn't matter much. It doesn't handle the traffic anyway - there's an ASIC for that.
The Tweaktown folks got up to 1.7 GB/s IIRC with the smaller brother of these, so that's a clear indication that Areca is not sharing SAS bandwidth between the ports.
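
Quick sanity check on that figure (assuming ~600MB/s usable per 6Gb/s lane):

single_lane_mbps = 600       # assumed usable payload of one 6Gb/s PHY
measured_mbps = 1700         # the Tweaktown number quoted above
print(measured_mbps / single_lane_mbps)   # ~2.8x one lane, so the PHYs can't be funneled through a single shared link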
 
So rrohbeck, one drive per port, or is 4 per port fine since I'm not using a port expander? From what you said, it sounds like 4 drives on a breakout cable would be fine.
 
So does PCMark05 know if you're running this on a RAMDrive? I imagine there would be some way to fool it... and then your problems would be solved. Get yourself 32GB of RAM and boot Windows off of it.
 
The hardware ramdrive option people use for this is the ACard 9010, run in RAID0 as well. They still use it with a fairly heavyweight controller like the one we are discussing, I believe. I think our setup will work better than that sort of setup.

Essentially, either way you need a good RAID controller - it's just a question of whether you connect SSDs or a ramdrive to it.
 
Software RAMDrives are illegal...

Plus... I think this configuration should stomp a RAMDrive.
Well, DDR3 would have around 11GB/s of bandwidth and likely much lower access times - it's closer to the CPU - but illegal is illegal. That's DDR3 installed as main memory. I'd imagine the RAM drives that emulate hard drives through software would be slower.

*Edit* Right - the controller's going to be very important if you're not using main memory.
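
For reference, where that ~11GB/s ballpark comes from (DDR3-1333, one 64-bit channel, theoretical peak):

transfers_per_s = 1_333_000_000    # DDR3-1333: 1333 MT/s
bytes_per_transfer = 8             # 64-bit channel
peak_gb_per_s = transfers_per_s * bytes_per_transfer / 1e9   # ~10.7 GB/s
print(peak_gb_per_s)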
 