
Horrible performance on RAID5 @ SB850


Ueland (New Member, joined Nov 19, 2011)
Hi!

I am running 3x SAMSUNG HD103SJ SATA2 drives in RAID5 on an SB850 chipset RAID controller, but the performance is nothing but horrible:

http://h3x.no/dump/sucky-performance.png

Any tips on how I can fix this? Should I consider going down to RAID0? (Not ideal, since I want some safety for the data.)

I have tried to:
- Upgrade the drivers
- Set the BIOS to SATA2
- Make sure the drives' firmware is OK
 
Hi there,
Did you try any buffering options? I don't know what board you are using. Try to play a bit with the AMD RAIDXpert / logical drive / cache options and the system cache (in Device Manager / drives). Maybe this will help.
RAID5 isn't really fast, but it shouldn't drop to ~1MB/s in this config.
 
There may be a problem with caching, like Woomack mentioned, and you can also see inconsistent results if the drives are full and/or heavily fragmented.

That said, RAID 5, despite what you might have heard, isn't really a high performance option. Specifically, it will have very slow random write performance and potentially inconsistent sequential write performance. Every time a block is written to the array it has to recalculate parity, which involves reading the other blocks and then writing the parity block. Every write becomes a write-read-write, which destroys performance.

This is mitigated somewhat by high-end RAID controllers, but onboard RAID controllers typically don't handle RAID 5 very well. I'd recommend RAID 0 for speed or RAID 1 if you want redundancy. RAID 1+0 will give you both but requires at least four drives. Depending on what you use the drives for, RAID 1 for storage and an SSD for speed might be a better option.
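
To make the parity bookkeeping above concrete, here is a minimal Python sketch of the XOR parity a 3-disk RAID 5 maintains; it illustrates the principle only, not what the SB850 driver actually does internally:

```python
import os

BLOCK = 16  # tiny blocks, just for illustration

def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe on a 3-disk RAID 5: two data blocks plus one parity block.
d0 = os.urandom(BLOCK)
d1 = os.urandom(BLOCK)
parity = xor_blocks(d0, d1)

# Updating d0 means a new parity block must be produced as well, which
# requires reading either the other data block or the old data + old
# parity before anything can be written back.
new_d0 = os.urandom(BLOCK)
new_parity = xor_blocks(new_d0, d1)                      # needs d1 read back
rmw_parity = xor_blocks(xor_blocks(parity, d0), new_d0)  # or old d0 + old parity
assert new_parity == rmw_parity

# The payoff: if a drive dies, its block is recoverable from the rest.
assert xor_blocks(new_parity, d1) == new_d0
print("parity math checks out")
```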
 
Woomack said: "Try to play a bit with the AMD RAIDXpert / logical drive / cache options and the system cache (in Device Manager / drives)."

I have been in there and changed everything; nothing changes performance-wise.

A funny thing is that via the RAIDXpert tools I cannot change any cache settings for the RAID; all the options for the different cache types (read ahead, write ahead, etc.) are disabled :sly: So I have only changed the settings that Windows lets me change.

"That said, RAID 5, despite what you might have heard, isn't really a high performance option. ... I'd recommend RAID 0 for speed or RAID 1 if you want redundancy."
I have multiple RAIDs in other systems, even plain software RAIDs, without any performance issues. I have always been skeptical about the performance of integrated RAID controllers, but this performance is just ridiculous :p

The least I would expect is performance around what one disk would give me, or even better for reads, since RAID5 actually reads data from N disks at the same time.

I am considering changing my setup to RAID0 on the controller, but if the performance won't change at all then I won't even bother trying it.
 
You either have a controller issue or a drive problem. That performance is off-the-charts bad. I tend to use dedicated hardware RAID controllers; I never get great results from the motherboard controllers.
 
"You either have a controller issue or a drive problem. That performance is off-the-charts bad."

Yeah, I am starting to believe that myself. I usually stay away from mobo controllers, but figured that in 2011 they would be good enough for a gaming rig (lol).

I am starting to lean towards trying RAID0 on my rig, and if that fails, getting a cheap HW RAID controller that gives enough performance.

My file server uses mdadm SW RAID with 11 drives spread over 3 controllers to give one big RAID6, and it works like a charm. I would never have done that on a mobo controller ;)
 
Motherboard controllers are usually fine for RAID 0 and RAID 1, but RAID 5 is more complex and has a lot more overhead. mdadm is probably a lot better than motherboard controllers too. :)
 
I nuked the RAID and switched to RAID0, and damn, what a difference!

Performance-wise I went from 1-70MB/s to 110-220MB/s, a "bit" better :)

And since I used Reflect I did not even have to reinstall the machine, thank god for that.
 
Watch out with RAID0: if a drive goes down, you'll lose the whole array. Make sure you keep good backups.
 
Hehe, of course ;)

I do both onsite and offsite backups daily, plus disk cloning from time to time, so it should be safe.
 
Ueland said: "I am running 3x SAMSUNG HD103SJ SATA2 drives in RAID5 on an SB850 chipset RAID controller, but the performance is nothing but horrible. Any tips on how I can fix this?"

I am running RAID 5 on a board with the SB850 and I'm seeing 120MB/s writes on four-year-old Hitachis. I am using md RAID in Linux, but you should see similar results by using soft RAID in Windows 7. Don't bother with the whole BIOS RAID thing: just keep them as single disks as far as the BIOS is concerned and create a new array in Disk Management in Windows. Just be sure to back up anything on them first, as this will kill the existing array and any data on it. Don't go RAID 0. If this doesn't fix it then you either have a bad drive or a bad controller. Have you tested the drives one at a time?

And really guys, the year is 2011; RAID 5 is slower than RAID 0 or RAID 1, but it isn't THAT much slower. The days of pinning a 486 with parity calculations are long over.
 
And really guys, the year is 2011; RAID 5 is slower than RAID 0 or RAID 1, but it isn't THAT much slower. The days of pinning a 486 with parity calculations are long over.
It's not the parity calculation cost. That's trivial. It's the fact that after writing a stripe, the non-parity stripes need to be read, parity calculated (the trivial part) and the parity stripe written.

When you write to a RAID 0 volume, you write the stripe and you're done. When you write to a RAID 5 volume, you write the stripe, read the other stripes, then write the parity block. Reading and writing on a spinning disk takes an eternity relative to those parity calculations, which explains why random writing is so much slower on RAID 5 than RAID 0.
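
For a rough sense of scale, a back-of-the-envelope sketch (assuming the usual read-modify-write path and a ballpark ~100 random IOPS per 7200rpm spindle; these are illustrative figures, not measurements from this array):

```python
# Back-of-the-envelope random-write IOPS for a 3-disk array.
# Assumes the common read-modify-write path for RAID 5 small writes:
# read old data + old parity, write new data + new parity = 4 I/Os.
SPINDLE_IOPS = 100   # rough figure for a 7200 rpm drive doing random I/O
DISKS = 3

raid0_ios_per_write = 1      # just write the block
raid5_ios_per_write = 2 + 2  # 2 reads + 2 writes

raid0_write_iops = DISKS * SPINDLE_IOPS / raid0_ios_per_write
raid5_write_iops = DISKS * SPINDLE_IOPS / raid5_ios_per_write

print(f"RAID 0 random writes: ~{raid0_write_iops:.0f} IOPS")
print(f"RAID 5 random writes: ~{raid5_write_iops:.0f} IOPS")
```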
 
It's not the parity calculation cost. That's trivial. It's the fact that after writing a stripe, the non-parity stripes need to be read, parity calculated (the trivial part) and the parity stripe written.

I know the read-modify-write turns what was a single operation into three. But the drives in a RAID 5 can still do random access independently of the array overall, so in a 3-way setup, 2 members can move on to perform a read while the parity is being modified and written. Since parity is staggered from stripe to stripe, this avoids a single drive being a bottleneck. Still, unless you are just beating the **** out of the array with writes, none of this will be noticeable outside of benchmarks. And for reads RAID 5 acts like RAID 0, so it's sort of the best of both worlds there.
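
If it helps to picture the staggering, a quick sketch of parity rotating from stripe to stripe on a 3-disk set; the rotation order is generic and may not match any particular controller or driver:

```python
# Print which disk holds the parity block for each stripe on a 3-disk
# RAID 5. The parity position rotates so no single drive becomes the
# parity bottleneck; exact rotation order differs between implementations.
DISKS = 3

for stripe in range(6):
    parity_disk = (DISKS - 1 - stripe) % DISKS  # rotate parity each stripe
    layout = ["P" if disk == parity_disk else "D" for disk in range(DISKS)]
    print(f"stripe {stripe}: " + "  ".join(layout))
```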

I was mostly poking fun at the fact that so many people on these forums make it sound like a 50% speed drop when doing RAID 5. :p
 
Personally, I noticed RAID 5 arrays behaving *much* more slowly than a RAID 0 array. I don't know if you've tried it or not, but with motherboard controllers in real-world situations it's dog slow for writing. Felt like at least a 50% speed drop to me. Yours doesn't bug you?
 
Felt like at least a 50% speed drop to me. Yours doesn't bug you?

Not in the slightest. No hanging, no lag; write speeds were well over 100MB/s, but I'm down to the last 100GB of space out of 1TB, so the physical limitations of the disks are becoming noticeable.

I think the problem is with however the BIOS is handling RAID. I've had very good luck letting the OS fully take control of the array. It's the same hardware so I don't really have a good explanation of why it works better. Are some chipsets trying to do parity on their own? I always thought it still used the system CPU via a driver. Either way the OS (Linux or Windows) seems to be more efficient.
 
Oh right, you're using mdadm. I'd expect that to be much smarter, for some reason. Not sure if I'd trust the Windows software RAID... tried it once and it acted sort of funny. It makes everything a dynamic disk, and that makes it hard to pull it out of an array and plug into another computer - for recovery, for example.
 
Oh right, you're using mdadm. I'd expect that to be much smarter, for some reason. Not sure if I'd trust the Windows software RAID... tried it once and it acted sort of funny. It makes everything a dynamic disk, and that makes it hard to pull it out of an array and plug into another computer - for recovery, for example.

I gotta say, my mdadm RAID5 array, 5x2TB, is pretty fast. Reads seem to top out around 450MB/s and hover around 400MB/s. Writes seem to sit between 320MB/s and 400MB/s. The array is always responsive and I never have any issues with it. mdadm has gotten nice over the years.
 
Since I want disk image backups of my gaming rig I will not play around with multiple partitions on my system. And Windows still does *not* support soft RAID on C: (yay!)

And since I do disk image backups of my gaming rig AND have a disk to spare for when one dies, I don't really see the issue. If a disk dies it will take me 2 hours to get back up and gaming ;)

FYI: my mdadm RAID with 11x 1.5TB drives in RAID6 gives me upwards of 700MB/s in read performance. Rather insane. Too bad it is used as a NAS :p

I wrote about my performance tweaking of mdadm here, if anybody would like to have a look at some tips :)

http://h3x.no/2011/07/09/tuning-ubuntu-mdadm-raid56
 
FYI: my mdadm RAID with 11x 1.5TB drives in RAID6 gives me upwards of 700MB/s in read performance. Rather insane. Too bad it is used as a NAS :p

If you were feeling adventurous you could turn that into an iSCSI target. Then Windows would be able to mount it as if it were a local drive. You would still need the network connection to keep up with the array to see the full benefit.
 
Ueland said: "FYI: my mdadm RAID with 11x 1.5TB drives in RAID6 gives me upwards of 700MB/s in read performance. ... I wrote about my performance tweaking of mdadm here: http://h3x.no/2011/07/09/tuning-ubuntu-mdadm-raid56"


Haha, you wrote that? I found it via Google when I was optimizing my RAID5... stripe width helped a lot. I used 8192 prior to going higher. It seems 32k helped stabilize throughput rather than make it higher overall.
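
For anyone else landing on that article, here is a minimal sketch of the knob I assume is meant by "stripe width" here, md's stripe_cache_size, on a hypothetical /dev/md0 (run as root; the setting does not survive a reboot unless you script it):

```python
# Sketch of the md RAID5/6 stripe cache tunable via sysfs, assuming
# "stripe width" in the post above refers to stripe_cache_size and the
# array is /dev/md0. Run as root.
from pathlib import Path

STRIPE_CACHE = Path("/sys/block/md0/md/stripe_cache_size")

print("current:", STRIPE_CACHE.read_text().strip())  # kernel default is 256

# Values tried in this thread: 8192, then 32768 ("32k").
STRIPE_CACHE.write_text("32768\n")

print("now:", STRIPE_CACHE.read_text().strip())
```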
 