Dell Perc 5/i Throughput Benchmarks

About this card:

This is a PCI Express based RAID card made by LSI for Dell, normally found in Dell PowerEdge servers. The card will fit in any PCI Express slot that is physically and electrically at least x8, including video card slots. Two SFF-8484 connectors at the top of the card support a total of eight SATA or SAS hard drives, and a DDR2 slot holds the memory used for the write-back cache.

Why this card and why benchmark it?:

Good RAID cards normally cost many hundreds, if not thousands, of dollars. This Perc 5/i can be had for anywhere from $75 – $125, which may even include the BBU (Battery Backup Unit) and breakout cables. That makes the card an extremely good choice for anyone who doesn’t want to spend a lot of money on a quality RAID card. To get the best performance, though, the individual settings are crucial; dialing them in takes quite a bit of time and thought, and I’ve done both of those for you.

NOTE: In my tests, I enabled “Force write back” which is EXTREMELY DANGEROUS to data integrity without a BBU installed and working on your system. I did this only because I don’t have the required hardware (BBU) to enable this feature normally and I have absolutely no data to lose on these hard drives. DO NOT use the “Force write back” option when using the system for storage.

Modification needed for desktop systems:

Since this card was originally made for Dell servers, there are some obstacles we must overcome. The first is the most obvious: cooling. The servers these came from were specifically designed to cool this card, and most desktop systems will have nowhere near the airflow these cards need to stay cool. The stock heatsink is extremely small and has little surface area. Many people have modded this card, and there really isn’t a wrong way to do it as long as the card stays cool. Here is how I modded mine.

[Photos of the card with the replacement heatsinks installed]

I ripped the stock retention bar off the stock heatsink (the black heatsink in the pictures above) and used it to secure the Zalman northbridge heatsink. I then used some 3M thermal tape to attach the stock heatsink to the other chip on the card. With this combination, I can run two Perc 5/i cards in a normal SLi/Crossfire based motherboard. For cooling, I just use the normal case fans with no modifications; the heatsink is large enough that it does not need a ton of airflow to stay cool. If you need the space, you can either use a smaller heatsink or “fan” out the pins of the heatsink, but you will want to add extra cooling if you do.

The other important modification is trickier. Most motherboards will have a conflict with the power management of the card, causing either the motherboard not to POST or the card not to be detected. Two pins on the card need to be covered up, and there are multiple ways to do this. For a more permanent solution, use an adhesive (nail polish, etc.); for a temporary one, use electrical tape. The only issue with tape is that it moves easily if you install or remove the card frequently.

There is a very good thread here that shows how to modify the pins; look for the section titled “SMBus Issue with Intel Chipsets”. I had to do this mod on my motherboard even though it is not Intel based, so be prepared to do it on any board.

Testing Methodology:

To properly test this card, we need all variables to stay the same except a single, controlled one. Here is a breakdown of the hardware that I used for these tests:

Motherboard: Asus M2N32-SLi
CPU: Phenom X4 (AM2) 2.6GHz
HDD:
—-320gb Seagate (Operating system drive)
—-7x 1tb Hitachi and 1x Western Digital Green (Perc 5/i)
Raid controller: Perc 5/i PCI-e
PSU: Corsair HX620
Case: Norco 4020
OS: Windows 2003

Link to server pictures (1)
Link to server pictures (2)

I will be using multiple programs to determine which settings are best, as no single program gives definitive results. The first is the ATTO benchmark, which shows how each option changes the performance of each RAID level at each transfer size. The second is HDTune Pro, which tests random access time, I/O operations per second, and throughput at different file sizes.

Program: ATTO Disk Benchmark
Transfer Size: 0.5 to 8192kb
Total length: 256mb
Direct I/O: Enabled – Overlapped I/O
Queue depth: 4

Program: HDTune Professional
File length: 512mb
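The sweep ATTO performs can be approximated with a short Python sketch (a hypothetical stand-in for illustration, not the actual ATTO tool): write a fixed total length using each transfer size and time it.

```python
import os
import tempfile
import time

def sequential_write_mbps(path, transfer_kb, total_mb):
    """Time an unbuffered sequential write of total_mb using transfer_kb blocks."""
    block = b"\0" * (transfer_kb * 1024)
    count = (total_mb * 1024) // transfer_kb   # whole blocks to reach the total
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:   # unbuffered, closer to direct I/O
        for _ in range(count):
            f.write(block)
        os.fsync(f.fileno())                   # force the data out of the OS cache
    return total_mb / (time.perf_counter() - start)

# Small demonstration sweep (ATTO itself goes from 0.5 KB to 8192 KB over 256 MB)
tmp = os.path.join(tempfile.gettempdir(), "throughput_sketch.bin")
for size_kb in (4, 64, 1024):
    print("%4d KB: %.1f MB/s" % (size_kb, sequential_write_mbps(tmp, size_kb, 16)))
os.remove(tmp)
```

This only approximates the sequential write side; ATTO additionally measures reads and uses overlapped direct I/O with a queue depth of 4, as configured above.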

The hard drives were put in each specified RAID level with a size of 200gb, initialized in Windows, converted to a GPT disk and formatted with NTFS (quick). After that, the benchmark was launched and run, with screenshots taken after each one. The array was then deleted from the controller and recreated with the new settings so that no settings carried over between tests. The server was NOT restarted between tests, and I used the Dell OpenManage software suite to configure the Perc 5/i from a web browser.

The RAID levels used are 0, 5, 10 and 50, with a total of 6 tests per RAID level: each “Read ahead” setting tested in both benchmark programs, all with “Write back” enabled. To enable “Write back”, I had to use the “Force write back” option since I don’t have the BBU (Battery Backup Unit); without a BBU, the controller refuses plain write back as a built-in safety mechanism, which this option overrides. Stripe size was kept at a constant 64kb throughout all tests. Here are the settings used for each RAID level:

Read Ahead – Write Back
Adaptive Read Ahead – Write Back
No Read Ahead – Write Back

Definitions:

Adaptive Read Ahead – Adaptive read ahead is a read policy that specifies that the controller begins using read-ahead caching if the two most recent disk accesses occurred in sequential sectors. If all read requests are random, the algorithm reverts to Non read ahead; however, all requests are still evaluated for possible sequential operation.

Read-Ahead – A memory caching capability in some controllers that allows them to read sequentially ahead of requested data and store the additional data in cache memory, anticipating that the additional data will be needed soon. Read-ahead supplies sequential data faster, but is not as effective when accessing random data.

Write-Back – In write-back caching mode, the controller sends a data transfer completion signal to the host when the controller cache has received all the data in a disk write transaction. Data is written to the disk subsystem in accordance with policies set up by the controller. These policies include the amount of dirty/clean cache lines, the number of cache lines available, elapsed time from the last cache flush, and others.

Write-Through – In write-through caching mode, the controller sends a data transfer completion signal to the host when the disk subsystem has received all the data and has completed the write transaction to the disk.

Definitions pulled directly from Dell’s own website here:
http://support.dell.com/support/edoc….htm#wp1044435
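The adaptive policy described above can be modeled as a tiny state machine (a toy illustration only, not Dell’s firmware): prefetching is enabled only while the two most recent accesses land in sequential sectors, yet every request is still evaluated so a sequential pattern can be picked up again.

```python
class AdaptiveReadAhead:
    """Toy model: enable read-ahead only when the two most recent
    disk accesses occurred in sequential sectors (per the definition above)."""

    def __init__(self):
        self.next_seq_lba = None  # sector that would continue the last request

    def on_read(self, lba, length):
        """Return True if the controller should prefetch after this read."""
        prefetch = self.next_seq_lba is not None and lba == self.next_seq_lba
        self.next_seq_lba = lba + length  # keep evaluating even for random requests
        return prefetch

ra = AdaptiveReadAhead()
print(ra.on_read(0, 8))     # False: first access, nothing to compare
print(ra.on_read(8, 8))     # True: sequential with the previous access
print(ra.on_read(500, 8))   # False: random, reverts to no read ahead
print(ra.on_read(508, 8))   # True: sequential pattern resumed
```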

RAID 0 Results:

Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

Adaptive Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

No Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

RAID 5 Results:

Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

Adaptive Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

No Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

RAID 10 Results:

Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

Adaptive Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

No Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

RAID 50 Results:

Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

Adaptive Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

No Read Ahead – Write Back

[ATTO and HDTune Pro screenshots]

Conclusion

The different RAIDs: The goal of this test was not to see what the hard drives themselves were capable of, but rather the throughput capabilities of the controller. The RAID 0 and RAID 5 tests show similar read/write rates, indicating that the controller itself isn’t the bottleneck. We start to see a slight decline in performance when switching to RAID 10 and RAID 50. Also note that when running RAID 10/50, the entire drive must be used; the controller will not let you use just a portion of each drive.

Cons:
There are a few cons to this card:

  1. The cables must be the SFF-8484 style, not normal SATA cables.
  2. The modifications needed to keep it cool.
  3. Initialization time for the array.

Cables run anywhere from $10 to $20 per SFF-8484 port, and you need to know the length you need before purchasing them, which was difficult for me as I hadn’t bought the server case yet. The modifications required to make the card run cooler were not a negative for me, but I know some of you out there just want to plug it in and have it work. The chances of the tape moving on the pins were high when removing and installing the card, which got frustrating at times. The biggest drawback is initialization time: a RAID 5 array took over 8 hours to complete, and RAID 10/50 took around 3-4 days to finalize. While that seems like an extremely long time, the parity calculations for RAID 10/50 averaged only around 40-50mb/sec; do that across 8tb worth of drives and the times skyrocket.
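A quick back-of-the-envelope check on those initialization times (assuming the 40-50mb/sec rate observed and roughly 8tb of raw capacity; the helper function is hypothetical):

```python
def init_hours(total_gb, mb_per_sec):
    """Hours needed to walk total_gb of raw capacity at a given rate."""
    return (total_gb * 1024) / mb_per_sec / 3600

# 8 drives of roughly 1000 GB each at the 45 MB/s midpoint of the observed rate
hours = init_hours(8 * 1000, 45)
print(round(hours, 1))  # about 50 hours for a single full pass
```

A single pass already costs about two days, which is in the same ballpark as the multi-day times observed; any additional passes or a slower effective rate push it further.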

Suggested settings: If you have the battery backup unit, be sure to use “Write back”. Without that feature enabled, I was seeing an average of 25 to 40mb/sec in the RAID 5 array, substantially less than the results above; I highly suggest picking up a BBU if yours didn’t come with one. I also suggest using “Adaptive Read Ahead” or “Read ahead”, as either will speed up transfer rates during real use (not just benchmarks). The only difference is that Adaptive Read Ahead detects when accesses are not sequential and disables itself while still monitoring for sequential patterns.
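The reason write back helps so much can be seen in a toy model (illustrative only; the sleep stands in for disk latency): write-through makes the host wait for the disk, while write-back acknowledges as soon as the cache holds the data, which is also exactly why losing power without a BBU loses that cached data.

```python
import time

def disk_write(data):
    """Stand-in for the disk subsystem; the sleep represents mechanical latency."""
    time.sleep(0.02)

def write_through(data):
    """Ack only after the disk subsystem has completed the write."""
    disk_write(data)
    return "ack"

def write_back(data, cache):
    """Ack as soon as the controller cache holds the data; the real disk
    write is deferred, so a power loss without a BBU loses this data."""
    cache.append(data)
    return "ack"

cache = []
t0 = time.perf_counter(); write_through(b"x" * 512); wt = time.perf_counter() - t0
t0 = time.perf_counter(); write_back(b"x" * 512, cache); wb = time.perf_counter() - t0
print("write-through ack after %.4fs, write-back ack after %.4fs" % (wt, wb))
assert wb < wt  # the host sees only cache latency with write back
```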

Suggested RAID settings: With everything taken into account (performance difference, initialization time, space lost to redundancy, RAID limitations), I would suggest running a large RAID 5 array with a hotspare. That lets the array survive up to two drive failures (as long as they don’t happen at the same time) while keeping the speed and space advantages of RAID 5.
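The space trade-off behind that recommendation can be sketched with the standard capacity formulas (a hypothetical helper; assumes 8 equal drives of 1tb each and, for RAID 50, two RAID 5 spans):

```python
def usable_tb(raid_level, drives, drive_tb=1.0):
    """Usable capacity for the RAID levels tested here."""
    if raid_level == 0:
        return drives * drive_tb           # no redundancy
    if raid_level == 5:
        return (drives - 1) * drive_tb     # one drive's worth of parity
    if raid_level == 10:
        return (drives // 2) * drive_tb    # everything mirrored
    if raid_level == 50:
        return (drives - 2) * drive_tb     # one parity drive per RAID 5 span
    raise ValueError("unsupported RAID level")

# 8x 1 TB, with one drive held back as a hotspare for the RAID 5 array
print(usable_tb(5, 7))   # 6.0 TB usable, plus a spare for automatic rebuild
print(usable_tb(10, 8))  # 4.0 TB usable
print(usable_tb(50, 8))  # 6.0 TB usable
```

RAID 5 with a hotspare gives the same usable space as RAID 50 here while keeping one drive free to rebuild automatically after a failure.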

Bottom line: Any way you put it, you can’t get any other RAID card of this quality for this price. If you see one in the $75 range and it includes the PCI bracket, grab it. If it comes with cables or even a BBU, don’t even think about skipping that deal.

Thideras

Discussion
  1. My speeds using ATTO for my 8TB RAID 5 Perc 5/i setup were as follows:
    0.5KB W:3295 R:35200
    1.0KB W:6622 R:75812
    2.0KB W:13180 R:161462
    4.0KB W:35397 R:320659
    8.0KB W:53816 R:516866
    16.0KB W:118870 R:716144
    32.0KB W:262272 R:1031663
    64.0KB W:379946 R:1079332
    128.0KB W:517196 R:1605027
    256.0KB W:526091 R:2078801
    512.0KB W:619943 R:2610718
    1024.0KB W:627185 R:2314098
    2048.0KB W:576591 R:3087007
    4096.0KB W:557948 R:3221225
    8192KB W:527637 R:2797590
    I am very happy with this card and the performance it gives.