
Areca ARC-1170 dog slow


pclausen
I recently picked up an ARC-1170 from the Egg (24-port SATA II RAID controller, PCI-X 64-bit).

http://www.newegg.com/Product/Product.asp?Item=N82E16816151004

I have it set up in a RAID6 config using the following 10 drives:

4 Maxtor 300GB 5400 ATA-100
3 Seagate 300GB 7200.8 ATA-133
2 Seagate 320GB 7200.10 SATA II PRT
1 Seagate 320GB 7200.8 SATA I

I'm connecting the 7 PATA drives using the following PATA-to-SATA-I converters:

http://www.xpcgear.com/ide2sata.html

I have about 1.4TB loaded onto the RAID at the moment (ripped CDs and DVDs), and the performance is so bad that ripped DVDs only play at 1/4 speed or so. Music seems to stream fine, but the bitrates are much lower than they should be (despite the files being lossless WMA).

Is the issue that I'm mixing different drives?

Perhaps the PATA to SATA converters?

My previous RAID used a 3ware 7508 controller and 8 of those 300GB Maxtor 5400 drives. It was screaming fast, even after the Maxtors began failing and were replaced by the 7200.8 Seagates.

I ordered up a 3rd 320GB 7200.10 and plan to set up a separate RAID5 on the same controller using all 3 7200.10 drives to see if it will solve my speed issues.

My stripe size is 64k, which is also the allocation unit size I picked when formatting the RAID from within XP Pro x64.
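
For reference, the command-line equivalent of that format, in case anyone wants to reproduce it (D: is just an example drive letter):

Code:
C:\>format d: /FS:NTFS /A:64K /Q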

Mobo is Asus P5WDG2-WS running a D805 @ 3.9GHz. LAN is GigE (onboard) with jumbo frames enabled.

When I browse files locally, I often have to wait for 10-20 seconds for the directory I just clicked on to open. While I wait, I see the activity light on the 1170 going crazy. If I go back and try to enter the same directory again, the wait is just as long as the first time.

Anyone else using an Areca controller with a mix of PATA and SATA drives? If so, do you have performance issues?

Sorry for rambling, but this is driving me nuts, especially considering what I paid for that Areca controller, which is supposed to be top of the line.
 
We have an ARC-1160 in a near-line backup server here at the office. It's using 14x Maxtor 500GB SATA drives (yeah, I told them Seagates, they bought Maxtors) and it performs phenomenally well. Linear read speeds are in excess of 700MB/s, writes are about 350MB/s, and seeks are about 14ms.

Provided there aren't any obvious problems such as having the card plugged into a PCI 32/33 slot, nappy cables, and so on, the thing that raises my suspicions is the ATA-SATA converters. Using a combination of 5400rpm and 7200rpm drives isn't the best idea, but it certainly wouldn't cause it to choke that badly. Neither would an odd stripe size setting.

Have you tried something as simple as running HDTach? It's a crappy benchmark, but it will at least tell you how the burst speeds and linear transfers are doing. That in turn will show which direction to pursue.
 
Thanks for the advice. Yeah, I too suspect the ATA-SATA converters. I'll grab HDTach and see what my results are.

I should be receiving my third 7200.10 320GB today, so I'll pull the 2 currently in the RAID and quick format them on another box, then add all 3 back as a separate RAID5 and run some tests on it.

If that solves the problem, I can begin copying stuff over from the degraded RAID6 onto the RAID5. The downside, of course, is having to order up additional 7200.10s in order to have the room to copy over all my data. The upside is a much faster RAID (hopefully), plus I'll be using all 320GB of each drive instead of being limited to 300GB, the size of the smallest drives in the current array.
 
I ran HDTach and the results look ok:

[Attached: HDTach1.jpg (HDTach benchmark screenshot)]


Must be some weird driver issue, I suppose? I'm running the latest 64-bit drivers (XP Pro 64-bit OS) and the card is running the latest firmware.

I'm able to copy files to and from my boot drive at really high rates, but accessing the directories locally is very slow.
 
Try monitoring the disk activity during browse and streaming operations. Use Performance Monitor to watch the disk queue lengths, operations per second, and bytes per second (see the typeperf sketch after the output below). Also run fsutil from the command line to check the NTFS statistics; for all I know it could be a fragmented MFT that's causing the problems.

Code:
e.g.:
C:\>fsutil fsinfo ntfsinfo d:
NTFS Volume Serial Number :       0xprettyhexnumbers
Version :                         3.1
Number Sectors :                  0x000000000c77611a
Total Clusters :                  0x00000000018eec23
Free Clusters  :                  0x00000000014c0a75
Total Reserved :                  0x0000000000000000
Bytes Per Sector  :               512
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x0000000002ab4000
Mft Start Lcn  :                  0x00000000000c0000
Mft2 Start Lcn :                  0x0000000000c77611
Mft Zone Start :                  0x00000000000c2aa0
Mft Zone End   :                  0x00000000003ddda0

C:\>fsutil fsinfo statistics d:
File System Type :     NTFS

UserFileReads :        10
UserFileReadBytes :    196608
UserDiskReads :        10
UserFileWrites :       0
UserFileWriteBytes :   0
UserDiskWrites :       0
MetaDataReads :        890
MetaDataReadBytes :    3645440
MetaDataDiskReads :    917
MetaDataWrites :       5
MetaDataWriteBytes :   20480
MetaDataDiskWrites :   10

MftReads :             53
MftReadBytes :         217088
MftWrites :            5
MftWriteBytes :        20480
Mft2Writes :           0
Mft2WriteBytes :       0
RootIndexReads :       0
RootIndexReadBytes :   0
RootIndexWrites :      0
RootIndexWriteBytes :  0
BitmapReads :          798
BitmapReadBytes :      3268608
BitmapWrites :         0
BitmapWriteBytes :     0
MftBitmapReads :       2
MftBitmapReadBytes :   8192
MftBitmapWrites :      0
MftBitmapWriteBytes :  0
UserIndexReads :       27
UserIndexReadBytes :   110592
UserIndexWrites :      5
UserIndexWriteBytes :  20480
LogFileReads :         6
LogFileReadBytes :     24576
LogFileWrites :        30
LogFileWriteBytes :    122880
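
For the Performance Monitor side, typeperf will log the same counters from the command line. A sketch; (_Total) is a placeholder, swap in the PhysicalDisk instance for the array:

Code:
C:\>typeperf -si 1 "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
    "\PhysicalDisk(_Total)\Disk Transfers/sec" ^
    "\PhysicalDisk(_Total)\Disk Bytes/sec"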
 
I ended up picking up 5 more 7200.10 320s and set up a fresh RAID6 on the controller using all 7200.10s. This time I did LBA64 instead of the 'hack' that changes the sector size from 512 bytes to 4k. The hack would have limited me to 16TB anyway, whereas LBA64 will let me go all the way to 512TB (once those 24TB drives come out) :)
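
For reference, the arithmetic behind those ceilings (the 512TB figure is the limit Areca quotes for its 64-bit LBA mode):

Code:
32-bit LBA, 512B sectors:  2^32 x 512 bytes   =  2TB
32-bit LBA, 4k hack:       2^32 x 4,096 bytes = 16TB
LBA64:                     512TB per the Areca spec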

The other issue I had was that some of my drives were 300GB and some were 320GB. There was no way to grow the array to take advantage of the extra 20GB per drive, even after the last 300GB drive had been replaced by a 320GB one.

Anyway, everything is smoking now with 8 identical 7200.10 drives, all running SATA II.

I dug up an old Asus mobo w/ a PIII 933 and 512MB and picked up a HighPoint RocketRAID 2220; I'll be setting up a "spare" file server using my 8 leftover 300GB drives in a 2.1TB RAID5 config.

Btw, it took about 8 hours to copy 1.6TB across GigE from my friend's 3ware RAID5 box onto my new Areca RAID6 box, which is much better throughput than what I got in the past, but not exactly saturating the GigE connection (about 30-35% utilization).
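
Back-of-envelope on that copy, assuming decimal units:

Code:
1.6TB / 8 hours = 1,600,000MB / 28,800s ≈ 55MB/s average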
 
Not bad at all. I've noticed at the office that the backups we run to the server with the ARC-1160 in it only utilize about 20% of the teamed GigE. If the servers had TOEs this would probably go up, as they're becoming CPU saturated trying to perform their normal duties, run the backup software (compression & block-level deltas), and transfer the data over the network at the same time. Our main DB server with about 600GB of data took about 5 hours to do a full backup, though several overnight maintenance jobs were running on the DBs at the time, and the RAID configuration in that server sucks.

Out of curiosity, how did you set up the logical disk in Windows and partition it? On 2k3 we just turn the array into a GPT disk, then put a single partition and NTFS filesystem on it (roughly the diskpart sequence sketched below). IIRC the controller is using LBA64 with a 64k stripe size, and the filesystem is on the default 4k clusters. The MFTs are rather large for a 6TB filesystem.
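
For reference, roughly the sequence we use; the disk number and drive letter are placeholders:

Code:
C:\>diskpart
DISKPART> select disk 1
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> assign letter=e
DISKPART> exit

C:\>format e: /FS:NTFS /Q
(no /A switch, so it takes the default 4k cluster size)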

EDIT: I just remembered that the network topology was different the last time I benched the backup server, so that may have something to do with the lower performance. Don't ask me why, but we had a bunch of GigE switches daisy chained with single links. All the servers are now on a managed GigE switch, so that bottleneck should have been removed.
 