  1. #1
    New Member
    Join Date
    Nov 2011

    Horrible performance on RAID5 @ SB850

    Hi!

I am running 3x SAMSUNG HD103SJ SATA2 drives in RAID5 on an SB850 chipset RAID controller, but the performance is nothing short of horrible:

    http://h3x.no/dump/sucky-performance.png

Any tips on how I can fix this? Should I consider going down to RAID0? (Not ideal, since I want some safety for the data.)

I have tried to:
- Upgrade the drivers
- Set the BIOS to SATA2
- Make sure the drives' firmware is OK

  2. #2
    Senior Memory Guru
    Premium Member #19



    Join Date
    Jul 2007
    Location
    Poznan, Poland
Hi there,
Did you try any buffering options? I don't know what board you are using. Try playing a bit with the AMD RAIDXpert logical drive / cache options and the system cache (in Device Manager, under the drives). Maybe that will help.
RAID5 isn't really fast, but it shouldn't drop to ~1MB/s in this configuration.

  3. #3
    Insatiably Malcontent
    Senior Member
    johan851's Avatar
    10 Year Badge
    Join Date
    Jul 2002
    Location
    Seattle, WA
There may be a problem with caching, as Woomack mentioned, and you can also see inconsistent results if the drives are full and/or heavily fragmented.

    That said, RAID 5, despite what you might have heard, isn't really a high performance option. Specifically, it will have very slow random write performance and potentially inconsistent sequential write performance. Every time a block is written to the array it has to recalculate parity, which involves reading the other blocks and then writing the parity block. Every write becomes a write-read-write, which destroys performance.

This is mitigated somewhat by high-end RAID controllers, but onboard RAID controllers typically don't handle RAID 5 very well. I'd recommend RAID 0 for speed or RAID 1 if you want redundancy. RAID 1+0 will give you both, but requires at least four drives. Depending on what you use the drives for, RAID 1 for storage and an SSD for speed might be a better option.
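The write penalty described above can be sketched in a few lines. This is an illustration only — the XOR math and the four-I/O count are the textbook single-block-update case, not anything specific to the SB850:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks -- this is all RAID 5 parity is."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    """Update one data block on a RAID 5 array; return (new_parity, disk_ios).

    new_parity = old_parity XOR old_data XOR new_data, so the controller must
    read the old data and old parity before it can write the new versions.
    """
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    # Cost: read old data + read old parity + write new data + write new parity.
    disk_ios = 4
    return new_parity, disk_ios
```

By contrast, the same small write on RAID 0 is a single disk I/O — which is why random writes suffer most.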
    ASRock Z68 Extreme3 Gen3 | 2500K @ 4.6GHz | 2x4GB Samsung DDR3 | GTX 750 Ti 2GB
    120GB Crucial M4 | 2x 2TB Samsung F4 | Seasonic S12 600w
    y2 DAC --> Custom M^3 --> Custom LM3875 ChipAmp --> Modula MTs
    Dual Dell 2007WFP | Watercooled

  4. #4
    New Member
    Join Date
    Nov 2011
    Quote Originally Posted by Woomack View Post
Hi there,
Did you try any buffering options? I don't know what board you are using. Try playing a bit with the AMD RAIDXpert logical drive / cache options and the system cache (in Device Manager, under the drives). Maybe that will help.
RAID5 isn't really fast, but it shouldn't drop to ~1MB/s in this configuration.
I have been through all of that and changed everything; nothing changes performance-wise.

A funny thing is that via the RAIDXpert tools I cannot change any cache settings for the RAID; all the options for the different cache types (read ahead, write ahead, etc.) are disabled. So I have only changed the settings that Windows lets me change.

    Quote Originally Posted by johan851 View Post
There may be a problem with caching, as Woomack mentioned, and you can also see inconsistent results if the drives are full and/or heavily fragmented.

    That said, RAID 5, despite what you might have heard, isn't really a high performance option. Specifically, it will have very slow random write performance and potentially inconsistent sequential write performance. Every time a block is written to the array it has to recalculate parity, which involves reading the other blocks and then writing the parity block. Every write becomes a write-read-write, which destroys performance.

This is mitigated somewhat by high-end RAID controllers, but onboard RAID controllers typically don't handle RAID 5 very well. I'd recommend RAID 0 for speed or RAID 1 if you want redundancy. RAID 1+0 will give you both, but requires at least four drives. Depending on what you use the drives for, RAID 1 for storage and an SSD for speed might be a better option.
I have multiple RAIDs in other systems, even plain software RAIDs, without any performance issues. I have always been skeptical about the performance of integrated RAID controllers, but this performance is just ridiculous.

The least I would expect is performance around what one disk would give me, or even better for reads, since RAID5 actually reads data from N disks at the same time.

I am considering changing my setup to RAID0 on the controller, but if the performance won't change at all then I won't bother even trying it.

  5. #5
    Member


    Join Date
    May 2010
    Location
    NYC
You either have a controller issue or a drive problem. That performance is off-the-charts bad. I tend to use dedicated hardware RAID controllers; I never get great results from the motherboard controllers.
    CPU: I7 3960X @5000 H2O
    MOBO: ASUS Rampage 4 Extream H20
    RAM: Corsair Dominator 4x2GB 2133 Cl8 (Hypers)
    Video: Evga Titan Tri Sli H20
    Case: Danger Den Custom DoubleWide
    PSU: Silverstone 1500
    LCD: Dell 30" 3007
    SSD: 4x Ocz Vertex 4 120 Raid 0
    H20: Dual D5 Pumps, EK SLi Serial, BP Pump Top, BP Pump Cover, Frozen Q Reservoir, Lamptron FC2, 3x Blackice GTX 480's 2x Blackice GTX 360's

  6. #6
    New Member
    Join Date
    Nov 2011
    Quote Originally Posted by thobel View Post
You either have a controller issue or a drive problem. That performance is off-the-charts bad. I tend to use dedicated hardware RAID controllers; I never get great results from the motherboard controllers.
Yeah, I am starting to believe that myself. I usually stay away from mobo controllers, but guessed that in 2011 they would be good enough for a gaming rig (lol).

I am starting to lean toward trying RAID0 on my rig and, if that fails, getting a cheap HW RAID controller that gives enough performance.

My file server uses mdadm SW RAID with 11 drives spread over 3 controllers to form one big RAID6; works like a charm. I would never have done that on a mobo controller.

  7. #7
    Insatiably Malcontent
    Senior Member
    johan851's Avatar
    10 Year Badge
    Join Date
    Jul 2002
    Location
    Seattle, WA
    Motherboard controllers are usually fine for RAID 0 and RAID 1, but RAID 5 is more complex and has a lot more overhead. mdadm is probably a lot better than motherboard controllers too.
    ASRock Z68 Extreme3 Gen3 | 2500K @ 4.6GHz | 2x4GB Samsung DDR3 | GTX 750 Ti 2GB
    120GB Crucial M4 | 2x 2TB Samsung F4 | Seasonic S12 600w
    y2 DAC --> Custom M^3 --> Custom LM3875 ChipAmp --> Modula MTs
    Dual Dell 2007WFP | Watercooled

  8. #8
    New Member
    Join Date
    Nov 2011
I tried nuking the RAID and switching to RAID0 — damn, what a difference!

Performance-wise I went from 1-70MB/s to 110-220MB/s, a "bit" better.

And since I used Reflect I did not even have to reinstall the machine, thank god for that.

  9. #9
    Member Charr's Avatar
    Join Date
    Oct 2006
    Location
    Raleigh, NC
Watch out with RAID0: if a drive goes down, you'll lose the whole array. Make sure you keep good backups.
    i7 4770k | GTX680
    Maximus VI Hero | 8GB Samsung 30nm
    240GB Seagate 600
    Catleap Q270 LED
    Lenovo W530

  10. #10
    New Member
    Join Date
    Nov 2011
Hehe, of course.

I do both onsite and offsite backups daily, plus disk cloning from time to time, so I should be safe.

  11. #11
    Quote Originally Posted by Ueland View Post
    Hi!

I am running 3x SAMSUNG HD103SJ SATA2 drives in RAID5 on an SB850 chipset RAID controller, but the performance is nothing short of horrible:

    http://h3x.no/dump/sucky-performance.png

Any tips on how I can fix this? Should I consider going down to RAID0? (Not ideal, since I want some safety for the data.)

I have tried to:
- Upgrade the drivers
- Set the BIOS to SATA2
- Make sure the drives' firmware is OK
I am running RAID 5 on a board with the SB850 and I'm seeing 120MB/s writes on four-year-old Hitachis. I am using md RAID in Linux, but you should see similar results using soft RAID in Windows 7. Don't bother with the whole BIOS RAID thing: keep the drives as single disks as far as the BIOS is concerned and create a new array in Disk Management in Windows. Just be sure to back up anything on them first, as this will kill the existing array and any data on it. Don't go RAID 0. If this doesn't fix it, then you either have a bad drive or a bad controller. Have you tested the drives one at a time?
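On the Linux side, a minimal sketch of that md soft-RAID route might look like this (the device names /dev/sdb through /dev/sdd and /dev/md0 are assumptions — adjust to your system):

```shell
# WARNING: this destroys any existing data on the listed drives -- back up first.
# Assumed device names; check yours with `lsblk` or `fdisk -l`.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0                               # format the new array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist across reboots
```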

And really, guys, the year is 2011. RAID 5 is slower than RAID 0 or RAID 1, but it isn't THAT much slower. The days of pinning a 486 with parity calculations are long over.
    Deepthought 2.0:
    AMD FX 8320 4.4GHz Water Cooled
    ASUS Sabertooth 990FX
    8GB Patriot DDR3 2000MHz (clocked@2133Mhz)
    2x Gigabyte 550 Ti 1GB Water Cooled
    2x64GB Crucial m4's in RAID 0 + 3x500GB Hitachi HDD's in RAID 5
    Fedora 16 64-bit KDE Spin



  12. #12
    Insatiably Malcontent
    Senior Member
    johan851's Avatar
    10 Year Badge
    Join Date
    Jul 2002
    Location
    Seattle, WA
    Quote Originally Posted by Zerix01 View Post
And really, guys, the year is 2011. RAID 5 is slower than RAID 0 or RAID 1, but it isn't THAT much slower. The days of pinning a 486 with parity calculations are long over.
    It's not the parity calculation cost. That's trivial. It's the fact that after writing a stripe, the non-parity stripes need to be read, parity calculated (the trivial part) and the parity stripe written.

When you write to a RAID 0 volume, you write the stripe and you're done. When you write to a RAID 5 volume, you write the stripe, read the other stripes, then write the parity block. Reading and writing on a spinning disk take an eternity relative to those parity calculations, which explains why random writes are so much slower on RAID 5 than on RAID 0.
    ASRock Z68 Extreme3 Gen3 | 2500K @ 4.6GHz | 2x4GB Samsung DDR3 | GTX 750 Ti 2GB
    120GB Crucial M4 | 2x 2TB Samsung F4 | Seasonic S12 600w
    y2 DAC --> Custom M^3 --> Custom LM3875 ChipAmp --> Modula MTs
    Dual Dell 2007WFP | Watercooled

  13. #13
    Quote Originally Posted by johan851 View Post
    It's not the parity calculation cost. That's trivial. It's the fact that after writing a stripe, the non-parity stripes need to be read, parity calculated (the trivial part) and the parity stripe written.
I know the read-modify-write turns what was a single operation into three. But the drives in RAID 5 can still do random access independently of the array overall, so in a three-way setup, two members can move on to a read while the parity is being modified and written. Since parity is staggered from stripe to stripe, this avoids a single drive becoming a bottleneck. Still, unless you are just beating the **** out of the array with writes, none of this will be noticeable outside of benchmarks. And for reads RAID 5 acts like RAID 0, so it's sort of the best of both worlds there.
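The staggering mentioned above can be sketched like this; the rotation formula is one common ("left-asymmetric") layout, an assumption for illustration rather than what the SB850 actually uses:

```python
def parity_disk(stripe: int, n_disks: int = 3) -> int:
    """Which disk holds the parity block for a given stripe.

    Left-asymmetric rotation: parity shifts one disk per stripe, so over any
    n_disks consecutive stripes every disk carries parity exactly once and
    no single drive becomes the parity bottleneck.
    """
    return (n_disks - 1 - stripe) % n_disks

# Show the rotation over a few stripes of a 3-disk array.
for s in range(4):
    print(f"stripe {s}: parity on disk {parity_disk(s)}")
```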

    I was mostly poking fun at the fact that so many people on these forums make it sound like a 50% speed drop when doing RAID 5.
    Deepthought 2.0:
    AMD FX 8320 4.4GHz Water Cooled
    ASUS Sabertooth 990FX
    8GB Patriot DDR3 2000MHz (clocked@2133Mhz)
    2x Gigabyte 550 Ti 1GB Water Cooled
    2x64GB Crucial m4's in RAID 0 + 3x500GB Hitachi HDD's in RAID 5
    Fedora 16 64-bit KDE Spin



  14. #14
    Insatiably Malcontent
    Senior Member
    johan851's Avatar
    10 Year Badge
    Join Date
    Jul 2002
    Location
    Seattle, WA
    Personally, I noticed RAID 5 arrays behaving *much* more slowly than a RAID 0 array. I don't know if you've tried it or not, but with motherboard controllers in real-world situations it's dog slow for writing. Felt like at least a 50% speed drop to me. Yours doesn't bug you?
    ASRock Z68 Extreme3 Gen3 | 2500K @ 4.6GHz | 2x4GB Samsung DDR3 | GTX 750 Ti 2GB
    120GB Crucial M4 | 2x 2TB Samsung F4 | Seasonic S12 600w
    y2 DAC --> Custom M^3 --> Custom LM3875 ChipAmp --> Modula MTs
    Dual Dell 2007WFP | Watercooled

  15. #15
    Quote Originally Posted by johan851 View Post
    Felt like at least a 50% speed drop to me. Yours doesn't bug you?
Not in the slightest. No hanging, no lag; write speeds were well over 100MB/s. But I'm down to the last 100GB of space out of 1TB, so the physical limitations of the disks are becoming noticeable.

I think the problem is with however the BIOS handles RAID. I've had very good luck letting the OS take full control of the array. It's the same hardware, so I don't really have a good explanation for why it works better. Are some chipsets trying to do parity on their own? I always thought it still used the system CPU via a driver. Either way, the OS (Linux or Windows) seems to be more efficient.
    Deepthought 2.0:
    AMD FX 8320 4.4GHz Water Cooled
    ASUS Sabertooth 990FX
    8GB Patriot DDR3 2000MHz (clocked@2133Mhz)
    2x Gigabyte 550 Ti 1GB Water Cooled
    2x64GB Crucial m4's in RAID 0 + 3x500GB Hitachi HDD's in RAID 5
    Fedora 16 64-bit KDE Spin



  16. #16
    Insatiably Malcontent
    Senior Member
    johan851's Avatar
    10 Year Badge
    Join Date
    Jul 2002
    Location
    Seattle, WA
Oh right, you're using mdadm. I'd expect that to be much smarter, for some reason. Not sure I'd trust Windows software RAID... I tried it once and it acted sort of funny. It makes everything a dynamic disk, which makes it hard to pull a drive out of an array and plug it into another computer (for recovery, for example).
    ASRock Z68 Extreme3 Gen3 | 2500K @ 4.6GHz | 2x4GB Samsung DDR3 | GTX 750 Ti 2GB
    120GB Crucial M4 | 2x 2TB Samsung F4 | Seasonic S12 600w
    y2 DAC --> Custom M^3 --> Custom LM3875 ChipAmp --> Modula MTs
    Dual Dell 2007WFP | Watercooled

  17. #17
    Member ziggo0's Avatar
    10 Year Badge
    Join Date
    Apr 2004
    Location
    La Porte, Indiana
    Quote Originally Posted by johan851 View Post
Oh right, you're using mdadm. I'd expect that to be much smarter, for some reason. Not sure I'd trust Windows software RAID... I tried it once and it acted sort of funny. It makes everything a dynamic disk, which makes it hard to pull a drive out of an array and plug it into another computer (for recovery, for example).
I gotta say, my mdadm RAID5 array (5x2TB) is pretty fast. Reads seem to top out around 450MB/s and hover around 400MB/s. Writes seem to sit between 320MB/s and 400MB/s. The array is always responsive and I never have any issues with it. mdadm has gotten nice over the years.

  18. #18
    New Member
    Join Date
    Nov 2011
Since I want disk image backups of my gaming rig, I will not play around with multiple partitions on my system. And Windows still does *not* support soft RAID on C: (yay!)

And since I do disk image backups of my gaming rig AND have a spare disk for when one dies, I don't really see the issue. If a disk dies it will take me 2 hours to get back up and gaming.

FYI: My mdadm RAID with 11 1.5TB drives in RAID6 gives me upwards of 700MB/s in read performance. Rather insane. Too bad it is used as a NAS :P

I wrote about my performance tuning of mdadm here, if anybody would like a look at some tips:

    http://h3x.no/2011/07/09/tuning-ubuntu-mdadm-raid56
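For reference, a couple of the commonly tuned md knobs from that sort of write-up, sketched with example values (md0 and the numbers are placeholders — benchmark on your own array rather than copying them):

```shell
# Larger stripe cache often helps RAID 5/6 write throughput (costs more RAM).
echo 8192 > /sys/block/md0/md/stripe_cache_size
# Bigger read-ahead for large sequential reads (value is in 512-byte sectors).
blockdev --setra 65536 /dev/md0
# Raise the resync speed floor so rebuilds aren't throttled (KB/s per device).
echo 100000 > /proc/sys/dev/raid/speed_limit_min
```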

  19. #19
Quote Originally Posted by Ueland View Post
FYI: My mdadm RAID with 11 1.5TB drives in RAID6 gives me upwards of 700MB/s in read performance. Rather insane. Too bad it is used as a NAS :P
If you were feeling adventurous you could turn that into an iSCSI drive. Then Windows would be able to mount it as if it were a local drive. You would still need the network connection to keep up with the array to see the full benefit.
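A rough sketch of what that could look like with the Linux tgt target (the IQN and the backing device are made up for illustration; tgtd must already be running):

```shell
# Export the md array as an iSCSI LUN with tgtadm.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2011-11.no.h3x:nas.md0               # hypothetical IQN
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # allow all initiators
```

Windows can then attach it with the built-in iSCSI Initiator.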

  20. #20
    Member ziggo0's Avatar
    10 Year Badge
    Join Date
    Apr 2004
    Location
    La Porte, Indiana
    Quote Originally Posted by Ueland View Post
Since I want disk image backups of my gaming rig, I will not play around with multiple partitions on my system. And Windows still does *not* support soft RAID on C: (yay!)

And since I do disk image backups of my gaming rig AND have a spare disk for when one dies, I don't really see the issue. If a disk dies it will take me 2 hours to get back up and gaming.

FYI: My mdadm RAID with 11 1.5TB drives in RAID6 gives me upwards of 700MB/s in read performance. Rather insane. Too bad it is used as a NAS :P

I wrote about my performance tuning of mdadm here, if anybody would like a look at some tips:

    http://h3x.no/2011/07/09/tuning-ubuntu-mdadm-raid56

Haha, you wrote that? I found it via Google when I was optimizing my RAID5... the stripe width helped a lot. I used 8192 prior to going higher; it seems 32k helped stabilize throughput rather than raise it overall.

