OCFreely101
New Member
Joined: Jul 24, 2020
I have a RAID array made up of 6 WD RE4s:
https://hdd.userbenchmark.com/SpeedTest/5792/WDC-WD2503ABYX-01WERA0
https://gzhls.at/blob/ldb/7/6/1/e/ee7a79edc97b885933949eeefdb2d9fbdf1b.pdf
Writes per drive look realistic compared to the UserBenchmark numbers, at around 95 MB/s. But in any RAID configuration with mdadm the array never goes above 120 MB/s. It basically acts like a single drive with redundancy, when it should be getting up to 3x.
This is getting the correct read speed for near=2:
Code:
sudo hdparm -Tt /dev/md/localhost-live.attlocal.net:10(redacted)
Timing cached reads: 5790 MB in 2.00 seconds = 2895.92 MB/sec
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 866 MB in 3.08 seconds = 281.53 MB/sec
<- this is with Gnome Disk Utility
These are my realistic speeds.
Code:
sudo mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Fri Jul 31 02:37:02 2020
Raid Level : raid10
Array Size : 735129600 (701.07 GiB 752.77 GB)
Used Dev Size : 245043200 (233.69 GiB 250.92 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Aug 2 19:45:38 2020
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 2048K
Consistency Policy : bitmap
Name : localhost-live.attlocal.net:10 (local to host localhost-live.attlocal.net)
UUID : 93a69ccd:be91b7b5:2d7dd8ea:e1630957
Events : 6
    Number   Major   Minor   RaidDevice   State
       0       8       32        0        active sync set-A   /dev/sdc
       1       8       48        1        active sync set-B   /dev/sdd
       2       8       64        2        active sync set-A   /dev/sde
       3       8       80        3        active sync set-B   /dev/sdf
       4       8       96        4        active sync set-A   /dev/sdg
       5       8      112        5        active sync set-B   /dev/sdh
I've tested this with 4 and 6 drives, every combination of RAID 10 and RAID 1+0, and all chunk sizes, and I can't get more than 120 MB/s. What is creating this excessive overhead? Or what causes it in the first place?
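For reference, this is roughly how I've been creating and benchmarking the arrays; the device names, chunk size, and test sizes below are just examples, not my exact commands.
Code:
# Example only -- device names and chunk size (KiB) are placeholders
sudo mdadm --create /dev/md10 --level=10 --raid-devices=6 \
    --layout=n2 --chunk=512 /dev/sd[c-h]

# Raw sequential write test straight to the array (destroys any data on it!),
# bypassing the page cache so the numbers aren't inflated by RAM
sudo dd if=/dev/zero of=/dev/md10 bs=1M count=4096 oflag=direct status=progress

# Raw sequential read test
sudo dd if=/dev/md10 of=/dev/null bs=1M count=4096 iflag=direct status=progress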
BTW, if I make a RAID 1 out of these drives, writes drop to 69 MB/s in gnome-disk-utility, so that seems to be where the overhead is introduced. It's then reduced further by the overhead of putting the RAID 1s into a RAID 10. It realistically runs between 90-120 MB/s in all configs. Is there a way to reduce this overhead and run a RAID 10 at its maximum performance?
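One thing I noticed in the mdadm output above is the internal write-intent bitmap ("Intent Bitmap : Internal"). I've read it can cost some write speed, so one experiment I'm considering is removing it for a test run and putting it back afterwards (without the bitmap, a resync after an unclean shutdown has to scan the whole array):
Code:
# Remove the internal write-intent bitmap for a benchmark run
sudo mdadm --grow --bitmap=none /dev/md127

# ...re-run the write tests here...

# Put the bitmap back when done
sudo mdadm --grow --bitmap=internal /dev/md127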
I know they can do better, because when running from a Fedora 25 install USB instead of my current Fedora 32 the array was capable of running much closer to full speed. I'm not familiar enough with Linux to know what is causing what.
My (wild) guesses are around alignment, the file system, or things like asynchronous writes or other odd settings. (Obviously I have no idea.)
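In case it helps with diagnosing this, these are the kinds of settings I was planning to compare between Fedora 25 and Fedora 32. I don't actually know which of them, if any, matter (sdc stands in for each member disk):
Code:
# Kernel version (the two Fedora releases ship very different kernels)
uname -r

# I/O scheduler in use for a member disk
cat /sys/block/sdc/queue/scheduler

# Read-ahead setting on a member disk and on the array itself
sudo blockdev --getra /dev/sdc
sudo blockdev --getra /dev/md127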
I've tried maximising APM and other things and it went up to about 130 MB/s. Still basically acting like one disk.
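"Maximising APM" meant roughly the following, run per drive (again, sdc stands in for each member disk):
Code:
# Set APM to maximum performance and make sure the on-drive write cache is on
sudo hdparm -B 254 -W 1 /dev/sdc

# Check the current APM level and write-cache state
sudo hdparm -B /dev/sdc
sudo hdparm -W /dev/sdc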
Could it also be something like 32-bit limitations or not using enough cores/threads? Does anyone know why it acts this way?
I originally wanted to use the offset layout so I could get double the sustained read and a max read speed near my SSD (an 850 Pro 256 GB drive @ ~540 MB/s).
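By "offset" I mean mdadm's offset layout (o2), i.e. something along these lines (example device names again):
Code:
# Offset layout: like near=2, but the second copy of each chunk is shifted to
# the next device, which should allow RAID-0-like sequential reads
sudo mdadm --create /dev/md10 --level=10 --raid-devices=6 \
    --layout=o2 --chunk=512 /dev/sd[c-h]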
Edit: I can run it in RAID 0 and get closer to some of the numbers I want, but I don't want RAID 0 because of the risk of data loss, and even then it's still not reaching the expected read speeds in practice. I don't understand what makes it act this way. (Literally, I don't know and am curious.)
BTW, what does it mean that my RAID device is located at /dev/md/localhost-live.localhost-live.attlocal.net:10?
Does that mean it's on a network, or is that just part of my system's internal name?
Is there also a way to get it to use asynchronous writes? I read that it's better and lowers CPU overhead. I think I got it to max out 3-4 cores using a different file system manager or something with RAID 0.
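From what I've read, normal buffered writes through the page cache are already asynchronous unless the filesystem is mounted with the sync option, so I was going to check the mount options on the array first (md127 per the output above):
Code:
# Show the mount options for the filesystem on the array; "sync" here would
# force synchronous writes, while the default behaviour is asynchronous
findmnt -no OPTIONS /dev/md127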
I tested with a War Thunder install folder of about 33 GB. Under Fedora 32 the copy takes 3-5 minutes; under the Fedora 25 install USB the same copy takes around 1 minute... That is a serious real-world performance difference. What is different between the two?
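For the copy test, this is roughly how I timed it; the paths are examples, not my real ones:
Code:
# Drop the page cache first so the read side isn't served from RAM
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Time the copy, forcing the writes out to disk before the clock stops
time sh -c 'cp -r /home/me/WarThunder /mnt/raid10/ && sync'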
One other oddity is that it normally seems to max out one CPU core at most. One config (I forget which) was using more.
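Next time I'll watch it like this while a copy is running, to see whether the CPU or the disks themselves are the bottleneck (iostat comes from the sysstat package):
Code:
# Per-disk utilization and throughput, refreshed every second
iostat -x 1

# Per-core CPU usage (press 1 inside top to show individual cores)
top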
Do I need to activate some tri-state setting, either on the HDDs with hdparm or in the BIOS? Edit: I may be thinking of memory tri-stating. Not sure if that is the same thing or not.
Memclock tri-stating
Determines whether to enable memory clock tri-stating in CPU C3 or Alt VID mode. (Default: Disabled)
Could DQS Training help at all? Or would that hinder?
DQS Training Control
Enables or disables memory DQS training each time the system restarts. (Default: Skip DQS)
So far neither of those did anything as far as I can tell. I'll try turning off virtualization next.