
Slow RAID 5 on Z77 (ASRock Extreme4)


IAmMoen (Member, joined Apr 9, 2006, Falcon Heights, MN)
So I have 3 WD Black 1.5 TB drives in a RAID 5 hooked up to my ASRock Extreme4, which has a Z77 chipset. They are all SATA 3 drives but plugged into the SATA 2 ports on the board. I have an SSD populating one of the SATA 3 ports (on the Z77).

A couple of issues right now that I can't seem to remedy or find answers to.

1) The RAID 5 seems slow. I have an ATTO benchmark of it attached. Looks really weird to me. Anyone seen something like this before? Anyone rocking the same setup as me but getting way better speeds? I am thinking this must be related to stripe size at this point (which is 64 KB on this array right now).

2) When you set the 6 SATA ports (2x SATA 3, 4x SATA 2) on the board to RAID mode, it apparently doesn't allow any other connected drives to run in AHCI? At least that is how it appears. I have my SSD plugged into one of the SATA 3 ports and it just will not do AHCI. Can anyone else who has this board confirm whether or not this is your experience as well?
 

Attachment: Screen Shot 2013-07-30 at 8.43.57 AM.png (ATTO benchmark of the RAID 5)
Hey man, I don't think it's running slow. I have a RAID 0 and I get 120-130 MB/s. If you are reaching almost 200 in the best scenario, I think you are OK. Remember that this isn't a dedicated RAID controller; the onboard RAID controller is OK for basic things. With 3 drives in RAID 5, you are probably asking as much as these controllers can handle.
 
Does it feel slow during normal use? Maybe it's just the test.

It doesn't feel super fast to me, to be honest. I thought the system would feel faster than my old AMD rig, but it doesn't feel like a massive improvement. I am still moving things back and forth from my SSD trying to find the right config that makes it speedy for me, but so far it seems the same. I use Adobe Lightroom a lot and it still feels like things take a long time. I just figured with an SSD that can perform like the screenshot below and a way better proc it would feel faster. Maybe that is just not in the cards.
 

Attachment: Screen Shot 2013-07-30 at 11.11.07 AM.png (SSD benchmark)
Here is a screenshot from my RAID 0. You may want to consider it as an option for faster results during your work, and maybe use the 3rd drive standalone as backup storage. You'll be 1.5 TB short, but you may not need to back up everything.
 

Attachment: Capture.PNG (RAID 0 benchmark)
Alright, I ran CrystalDiskMark. I think you'll be surprised.

As for RAID 0 vs RAID 5, I really do need to make sure everything on that array is backed up. I have been thinking about going towards a Synology 1513 or a Drobo 5D or 5N. Though looking at some of these benchmarks, I am starting to think I would probably be happy enough just getting an LSI RAID card. I mean, the write speeds below leave something to be desired in my opinion.
 

Attachment: Untitled-1.jpg (CrystalDiskMark results for the RAID 5)
Interesting. From what I can see, the problem is writing; read speeds look good to me, which makes some sense since you have the extra load of writing parity. I wonder why the impact is so high, though.

I'm not familiar with RAID cards, but I'm sure one would work much better for you. You may want to wait a bit more to see if some other folks here have a better explanation of this issue.
 
This isn't a write parity penalty issue; there is something wrong. Read speeds are fine, but writes are bad at 64 KB and below (his stripe size is 64 KB). The issue is probably alignment or stripe size. A RAID card for this is silly.

With RAID enabled, it should switch any non-RAID disks to AHCI by default.
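
Back on the alignment point: a quick sanity check is whether the partition's starting offset divides evenly by the 64 KB stripe size. Here's a minimal Python sketch of that check (the offsets below are just example values; on Windows the real number can be read with "wmic partition get Name, StartingOffset"):

Code:
# Minimal sketch: check whether a partition's starting offset lands on a
# stripe boundary. The example offsets are illustrative only.

STRIPE_SIZE = 64 * 1024   # 64 KB stripe, as configured on this array

def is_aligned(starting_offset_bytes, stripe_size=STRIPE_SIZE):
    """True if the partition starts on a stripe boundary."""
    return starting_offset_bytes % stripe_size == 0

# 1,048,576 bytes (1 MiB) is the usual Vista/7-era default and is aligned;
# 32,256 bytes (63 sectors) is the old XP-era default and is not.
for offset in (1048576, 32256):
    print(offset, "aligned" if is_aligned(offset) else "NOT aligned")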
 

Thideras, there's no doubt something's wrong, but it isn't true that writing parity doesn't impact performance. Just google it real quick and you'll see a lot of results, but I found this article from Oracle, which I believe has more credibility than some random forums.

Scroll down to Selecting the Best RAID Level
 
I think we can all agree that there should be SOME penalty, but nowhere near what we are seeing. I think the next step is to hear from someone with a 3-disk RAID 5 on a Z77 chipset (preferably the same board) and see what their speeds are like. I spent 45 minutes googling last night trying to find something like that but couldn't :(
 
Do you already have data in the RAID 5? If not, I'd try RAID 0 with the 3 of them and check speeds. Maybe one disk has issues? Can you benchmark them individually? Also check the speed of a RAID 0 with just 2 drives.
 
Yeah, I already have 1.2 TB of data on the RAID 5. It isn't that I can't work things out so I can get all the data off of it, it's just a PITA. I would be much more willing to do it knowing there is a light at the end of the tunnel, but it is tough to know. I wish the Intel RAID allowed testing individual drives without breaking up the band, so to speak.
 
Thideras, there's no doubt something's wrong, but it isn't true that writing parity doesn't impact performance. Just google it real quick
You read my post wrong. I said this problem isn't attributable to the write parity penalty, not that there isn't a penalty. His write speeds are horrendous below 128 KB transfer sizes. I wrote the Storage Megathread sticky at the top of this forum and specifically included a section about the write penalty for RAID 5, and even went into detail on parity itself. There is always a penalty somewhere once you start adding levels between the OS and the disk and/or calculations, but that isn't the problem here.
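
To put rough numbers on that penalty: a write smaller than a full stripe forces a read-modify-write (read old data, read old parity, write new data, write new parity), while an aligned full-stripe write skips the reads. A deliberately simplified Python sketch, using the 3-drive / 64 KB layout from this thread:

Code:
# Back-of-the-envelope model of RAID 5 write cost; illustrative numbers only.

N_DRIVES = 3                                # 3-drive RAID 5, as in this thread
STRIPE_UNIT = 64 * 1024                     # 64 KB stripe unit per drive
FULL_STRIPE = STRIPE_UNIT * (N_DRIVES - 1)  # 128 KB of data per full stripe

def ios_per_write(write_size):
    """Rough disk I/O count for one aligned write of the given size."""
    if write_size >= FULL_STRIPE and write_size % FULL_STRIPE == 0:
        # Full-stripe write: write the data chunks plus one parity chunk, no reads
        return (write_size // FULL_STRIPE) * N_DRIVES
    # Partial-stripe write: the classic 4-I/O read-modify-write penalty
    return 4

for size_kb in (4, 32, 64, 128, 256):
    print("%4d KB write -> ~%d disk I/Os" % (size_kb, ios_per_write(size_kb * 1024)))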

Yeah, I already have 1.2 TB of data on the RAID 5. It isn't that I can't work things out so I can get all the data off of it, it's just a PITA. I would be much more willing to do it knowing there is a light at the end of the tunnel, but it is tough to know. I wish the Intel RAID allowed testing individual drives without breaking up the band, so to speak.
With data on the array, there is only so much troubleshooting you can do. I have a Z77 board, but I don't have an easy way to test this for you. You could try enabling write-back cache to see if that helps speed things up, but I don't think it will make much difference. Please note that enabling this feature is only for testing; don't leave it on. Failing that, I think you will need to break the array to try either different stripe sizes or different RAID levels.
 
You could try enabling write-back cache to see if that helps speed things up, but I don't think it will make much difference. Please note that enabling this feature is only for testing; don't leave it on. Failing that, I think you will need to break the array to try either different stripe sizes or different RAID levels.

Why should it only be a test? Is there any risk at all? Not that I wouldn't do a backup first or anything, I am just curious.

I think different stripe sizes will definitely be something I have to try. I wouldn't blame Intel for that, at least. But if this is something wrong with the Z77 and RAID 5, I am going to be a bit miffed.
 
So I went into the IRST control panel and saw that write-back cache is indeed disabled, but it doesn't give me an option to enable it. Some other forums suggest that not having a UPS makes that option unavailable. Not sure. I did, however, have Disk Data Cache enabled on the array. I disabled it just to see what the performance hit would be. In a word? Severe.

Note: I did stop the test before it got to the 4K QD32 section.
 

Attachment: Screen Shot 2013-07-31 at 5.04.39 PM.png (benchmark with Disk Data Cache disabled)
Enabling write-back cache keeps writes in memory until it is efficient for the array to write them to disk. So, if you were writing thousands of 4 KB files to the disk, it isn't going to write each file individually, as that would be very slow. Instead, it keeps them in memory until certain parameters are met (it varies, but usually the cache being full, the disks being idle, no more files being added to the queue, or any mixture of those or more). Then, it just dumps them to the disk all at once. You can see how this would be an incredible performance increase. However, the downside is that if the files are stored in memory and your system crashes or loses power*, those files are gone. If you were modifying a file and that partial change was held in cache, that change is also lost. This is why it is dangerous. If it doesn't allow you to enable the option, then there isn't much we can do there.
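
If it helps to picture the batching, here's a toy Python sketch of the idea (the threshold and sizes are made up; this is not how IRST actually implements its cache):

Code:
class WriteBackCache:
    """Toy model only: small writes pile up in RAM and get flushed in one batch."""
    def __init__(self, flush_threshold=1024 * 1024):  # pretend 1 MB cache
        self.flush_threshold = flush_threshold
        self.pending = []                              # "dirty" data held in RAM

    def write(self, data):
        self.pending.append(data)
        if sum(len(d) for d in self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One big write to disk instead of hundreds of tiny ones
        print("flushing %d writes (%d bytes) in one pass"
              % (len(self.pending), sum(len(d) for d in self.pending)))
        self.pending.clear()

cache = WriteBackCache()
for _ in range(300):      # three hundred 4 KB writes
    cache.write(b"x" * 4096)
cache.flush()             # anything still pending here is what a crash would lose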

I would say the next step is to get data off the array to try different striping/levels.


*RAID cards can have a built-in BBU (battery backup unit) that powers the RAID card's cache. Should the system get powered off or crash, the contents of the cache are preserved for quite some time. Once the RAID card can start the array, it will see the cache is "dirty" and finish writing the changes to disk. Onboard RAID sort of has this by using a UPS, but that only helps if the power goes out. If there is a system crash (software or hardware) or a component fails in the system (say, the power supply), a UPS isn't going to save you and you run the risk of data corruption. I wouldn't suggest leaving this on.
 