
Insane Raid Controller Performance - Areca RAID Controller Input Needed

Well, DDR3 would have around 11GB/s of bandwidth and likely much lower access times - it's closer to the CPU - but illegal is illegal. That's DDR3 installed as main memory, though. I'd imagine the RAM drives that emulate hard drives through software would be slower.

*Edit* Right - the controller's going to be very important if you're not using main memory.

Yeah... Sorry for the confusion... I had 2 separate ideas running through my head.

You deciphered them though... LOL
 
By the way, there are 2 guys who go back and forth in the rankings on pcmark near the top - mtech and stevero. Mtech does the acard 9010 ramdrive thing, not sure which controller - I think an LSI maybe? Stevero uses the 1880ix, older brother to what we're looking at, with c300 SSDs.
 
No no no.

If you plug 4 SSDs into a single SAS port, you are using FIS-based switching, which the marketing "PR" claims gives each drive full speed. That is NOT true - it splits the bandwidth between the 4 drives. It is still better than command-based switching, which gives full bandwidth to only one drive on the port at a time: if you striped them, it would give 6Gbps to the first drive, then 6Gbps to the second drive, and so on, which increases latency and does not saturate the bus in 99% of cases.

If it was a SAS 24Gbps port, they would advertise it that way. It is advertised as 6Gbps, and it can't magically achieve an 18Gbps+ rating. For a 4-port SAS controller, all 4 ports combined will achieve a maximum 24Gbps rating (assuming it is on a sufficiently wide PCIe bus). But attaching 16 MaxIOPS drives would still only give you 24Gbps of maximum bandwidth, and most likely the controller would choke long before that.

The card contains a multiplexer that lets you attach multiple drives to a single port (SAS stands for Serial Attached SCSI).

You only want one SSD per SAS port. HOWEVER...

If you wanted to use cheaper/slower drives, then attaching a couple of them to each SAS port might be a benefit; however, I think the overhead of using the multiplexer would negate that.

EDIT: At most I would do 2 SSDs per port. You will lose some top speed (since a single MaxIOPS will saturate 6Gbps), but you will make it up at the small file sizes, which generally use about half the bandwidth of a 6Gbps port anyway.

EDIT 2:
[ATTO screenshot: atto-482x6151.jpg]
A single MaxIOPS hits 50% saturation at 8K file-size reads, and beyond that on writes.
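
For a rough sanity check on those percentages, here is a minimal back-of-envelope sketch (Python, my own illustration, not from the thread) converting the advertised 6Gbps link rate into usable MB/s under 8b/10b encoding and showing what 50% saturation works out to:

```python
# Back-of-envelope: what "50% of a 6Gbps port" means in MB/s.
# Assumes a SATA/SAS 6Gbps link with 8b/10b encoding (10 line bits per data byte).

LINE_RATE_GBPS = 6.0          # advertised link rate
BITS_PER_BYTE_ON_WIRE = 10    # 8b/10b encoding overhead

max_mb_s = LINE_RATE_GBPS * 1000 / BITS_PER_BYTE_ON_WIRE   # ~600 MB/s usable ceiling
half_mb_s = max_mb_s / 2                                    # ~300 MB/s at 50% saturation

print(f"6Gbps link ceiling : ~{max_mb_s:.0f} MB/s")
print(f"50% saturation     : ~{half_mb_s:.0f} MB/s")
```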
 
@ I.M.O.G.
Blame ASUS for not giving cache options in the RAID ROM.
This is my old result with 2x M4 on a Gigabyte UD5 - the same setup that gives ~220MB/s in the PCM05 system boot test or 1GB/s+ in virus scan:

[screenshot: raidm4d6sm2.jpg]

You will get similar results in PCMarks on some SAS cards.
There is an option: either get the best overclocking board and pay for a RAID card, or get something that will give lower max clocks but has a good integrated RAID controller...
 
@Railgun: Nice Atto. What configuration and hardware is that you posted results for?

@Neuromancer: I'm going to call Areca and ask them about what you and railgun are talking about, to see if they can give me confirmation on their end. We can save $400 if the 6 internal ports don't benefit us.

@Woomack: I was looking at that when you had mentioned it in PM as well, as you can almost get by on the right board without the discrete RAID controller - however these are the categories we're after:

1x core, 2x core, 3x core, 4x core, 6x core, and 8x core

So there are 3 problems with using good onboard RAID solutions like in your example:

1. We'd need to find the right boards for each one of these categories that will do the CPU clocks we need and play well with cold (clocks are our biggest advantage, as often the CPU clocks on the top scores are not that high)
2. We'd need to find the right boards for each one of these categories that will do the RAID performance we need
3. Hitting ~220 XP startup and Virus scan of 1000 is the goal, however we also need very strong general usage scores of 300-800MB/s (hopefully more towards 800MB/s) - I believe your setup would lag on the general usage scores

So the setup you are showing is great, however to take 1st in every category, or at least the top 3, we need to be strong in every subtest. The RAID setup also has the advantage that once configured correctly, it is just a matter of transferring the card into the next system, and our storage tests are maximized - no tweaking and configuring to rebuild the RAID0 setup on each platform.

By the way, convicted1 and I are going in on this together to cut the costs.
 
I'm just saying that if your board had full RAID with cache options, then you could try to make good results without spending so much money on a RAID card. I have no idea how it acts with 4 or 6 SSDs in RAID0. I just gave an example on the 2 cheapest SSDs, and you are right, general usage is about 120-150MB/s on this config, so it's nothing special.
For sure all Adaptec controllers are a bad idea because of longer access times. The same goes for the cheaper LSI cards (if ~$400 counts as cheap).
Here is one review that can be interesting: http://thessdreview.com/featured/ls...-6805-raid-card-the-great-6gbs-raid-showdown/
... but probably you won't find many better controllers than that Areca.
The CHV is best for high CPU clocks and memory scores in all the AMD categories, so all you need is a good RAID controller.
I was already looking for some cheaper controllers, but a better idea in my case may be a Revo x2+ ;) ...
I will pm you if I find anything good.
 
Railgun is right.

I looked up SAS 6Gbps specs ... A single SAS 6Gbps port provides up to 24 Gbps of bandwidth.

Crazy naming convention they got going on there lol.

I was confusing SATA multiplexing with SAS which is not the same.
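
For anyone following along, here is a minimal sketch of that wide-port math (Python, my own illustration; the 4-lane internal connector and 8b/10b encoding overhead are standard SAS-2 figures, not something stated in the thread):

```python
# A SAS "6Gbps" internal connector is a x4 wide port: four 6Gbps lanes behind
# one plug, so each of 4 drives gets its own full lane.

LANES_PER_WIDE_PORT = 4
LANE_RATE_GBPS = 6.0
BITS_PER_BYTE_ON_WIRE = 10   # 8b/10b encoding

aggregate_gbps = LANES_PER_WIDE_PORT * LANE_RATE_GBPS            # 24 Gbps
aggregate_mb_s = aggregate_gbps * 1000 / BITS_PER_BYTE_ON_WIRE   # ~2400 MB/s usable

print(f"Wide port aggregate : {aggregate_gbps:.0f} Gbps (~{aggregate_mb_s:.0f} MB/s)")
print(f"Per drive (one lane): {LANE_RATE_GBPS:.0f} Gbps (~{aggregate_mb_s / 4:.0f} MB/s)")
```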
 
I appreciate all of the input from both of you, and the time it took to look into things further. Now I'm sure this means the $840 card will serve us just as well as the $1200 card. And use one of the $18 breakout cables for 4 drives instead of 4 of the $9 cables.
 
Just FYI, this is what we're going with (we bumped the drives up to 4 instead of 3 we were targeting earlier):
1x controller: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151112
2x cables: http://www.newegg.com/Product/Product.aspx?Item=N82E16816116098
1x cache ram: http://www.newegg.com/Product/Product.aspx?Item=N82E16820139141
4x max iops: http://www.newegg.com/Product/Product.aspx?Item=N82E16820227714

We only need 1 cable, but with 2 we can see if there is any difference between 2 per cable or 4 per cable.

Total just tips $1800. Then it's just a matter of putting the cold to the chips and GPUs and chasing 1st place. :D
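
Quick budget sanity check, using only the figures already quoted in this thread (the $840 controller and the $18 breakout cables); the drive and cache-RAM prices are left as a single remainder rather than guessed:

```python
# Rough tally of the parts list, using only prices mentioned in the thread.
TOTAL_BUDGET = 1800        # "Total just tips $1800"
CONTROLLER   = 840         # the $840 Areca card
CABLES       = 2 * 18      # two $18 breakout cables

remainder = TOTAL_BUDGET - CONTROLLER - CABLES
print(f"Left for 4x MaxIOPS drives + cache RAM: ~${remainder}")   # ~$924
```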
 
Have you looked at the new Samsung drives? I can't see Newegg from work, but I think for drive issues you may want to go with Samsung or a newer Crucial drive(s) instead. I am not talking about just OCZ drives, but any of them using the SF controller.
 
evilsizer: I haven't looked much, but the vertex 3 maxiops are the best drives I am aware of for this purpose. if you can find something else you think might be better, I will check them out. waiting to receive a 5870 in the mail and will make the purchase in a week or so. In my prep so far, I talked with stevero who used c300's and he mentioned the drives don't matter much for pcmark, so it will be interesting to see how more expensive drives compare to less expensive drives.

I could probably livestream the pcmark05 8 core attempt. :) All the prep will be done ahead of time, so for the record it will probably just be a matter of running the bench a few times.

My FX-8120 that took first in all but wprime ended last night's session not booting, with a "CPU initial" error message. Usually that just means it is too frozen/soggy/cranky. It has a couple more days to dry out before I try it again, but it makes me a little nervous because I had made about a dozen wprime1024 runs at 1.87-1.98V on the core. :eek:
 
You will get similar results in PCMarks on some SAS cards.
Yup, total controller bandwidth was often a limiting factor in previous generations (e.g. 3ware 9690, LSI 8888). Magnetic drives only manage a few MB/s of average throughput in typical applications, so it didn't matter in pre-SSD days.
 
Yep...

And I'll livestream the 4x and 2x... Possibly 6x. Just gotta figure out if I wanna do SB-E or something else.

I'm working on tearing the MIVE-Z and 55X 2600K out of my daily rig and replacing them with something more "Daily" oriented and not so geared at OCing.

As for drives... This is one thing I have researched extensively... And the MaxIOPS are the drives we need at this point in time. Not even the new Octanes can put up the IOP numbers that these drives can.

Also... I don't know what 5870 you got Matt, but the one I got from Ed is supposedly the cream of the crop of 5870's... MSI Lightning. So if you'd like to use it... Just let me know. I won't be needing it till I get my turn with the "Storage Solution".

PM Reply incoming also...
 
Cool, good deal. I will give this 5870 a run first. Mine is from HIS, and the last HIS card I got was very strong, so I am hopeful, but there is nothing special about the model I have coming - except the price: it was only 90 bucks since it has a bad fan controller. :) If I take you up on that offer, I can prep the card in liquid electrical tape if you like. That worked great on my 4890.

By the way, I saw that 1st place in single core was taken back by Intel yesterday. I will have to look closer to see if I can tackle it with a Sempron, or maybe we will have to go Intel for 1 core also.
 
Cool... Just let me know... I wouldn't mind having somebody with more experience prep that Lightning.

Also... One of these nights (probably Monday or Tuesday) I'm going to be doing a Livestream while I set up my bench to go cold for the first time.

I was just going to stream the whole thing, so that I can get input as I go along about the ENTIRE process of insulating.

Hopefully some of you experts will stop in to help.

Then the next day I'll fill the dewar, and later that night start hammering on some 775 stuff.

I've been working on setting up some pretty trick stuff for Livestreaming... First off, a hardwired connection so I avoid the dropouts we were having last time due to a weak WiFi signal; then I've set up several cameras to switch views on the fly... One of which is a full-on HD handheld video camera on FireWire.
 
Sweet livestream goodness. Hopefully I'll be able to watch.

I really wish I had the time/money/patience for this myself, instead of being relegated to the sidelines.
 
@Railgun: Nice Atto. What configuration and hardware is that you posted results for?

4x 120G Vertex 3s on an 1880i in a stripe @ 128k.

Interestingly, this has been my biggest gripe to one of our vendors. With some monitoring hardware we have, we had three 5-disk RAID5 arrays aggregated on, I think, a 1600-series Areca card. It would only give us 2Gb/s of throughput. For a box that had 4-8 Gb interfaces, this didn't cut it, and it was the limitation of the controller.

Now, in the case of today's generation, you're limited to 4GB/s - a big jump - and with today's drives and big cards, it's much easier to actually hit. That said, the most you can ever hit on that card, technically, is 4GB/s. If you're being sponsored, I'd be interested to see what that thing will do with eight drives off that single card.
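
To put that ceiling in perspective, here is a minimal sketch (Python, my own illustration; the ~500MB/s per-drive sequential figure is an assumption for a Vertex 3 class SSD, not a number from the thread) of when a stack of drives runs into a ~4GB/s controller limit:

```python
# When does the drive stack hit the controller's bandwidth ceiling?
# Assumption: ~500 MB/s sequential per SSD (Vertex 3 / MaxIOPS class).

CONTROLLER_CEILING_MB_S = 4000   # ~4GB/s card limit mentioned above
PER_DRIVE_MB_S = 500             # assumed per-drive sequential throughput

for drives in (2, 4, 6, 8):
    demand = drives * PER_DRIVE_MB_S
    usable = min(demand, CONTROLLER_CEILING_MB_S)
    note = "controller-limited" if demand > CONTROLLER_CEILING_MB_S else "drive-limited"
    print(f"{drives} drives: demand {demand} MB/s -> usable ~{usable} MB/s ({note})")
```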
 