
Trouble with Intel Rapid Storage Technology on Asus P8Z68-V Pro


stuyguy (New Member, joined Jul 1, 2011)

Hi,

I just built a new rig and am having some weird issues running RAID-5 on Intel Rapid Storage Technology (IRST). The specs are:

CPU: Intel Core i7-2600
Motherboard: Asus P8Z68-V Pro
BIOS Version: 0501 (latest, came with the board)
RAM: G.SKILL Ripjaws Series DDR3-1333 F3-10666CL9D-8GBRL (4x4GB)
GPU: EVGA GeForce GTX 560 Ti 1GB (01G-P3-1560-KR)
Power supply: SeaSonic S12II 620 Bronze 620W

I'm running the OS off a random 500 GB hard disk and want to set up RAID-5 with 5x3TB disks (Western Digital WD30EZRS, shows up in IRST precisely as: WDC WD30EZRS-00J99B0).

RAID setup:

OS: Windows 7 Enterprise 64-bit (yes, Enterprise, through my school. Essentially equivalent to Ultimate)

RAID software: Intel Rapid Storage Technology 10.5.0.1027 (from: http://support.asus.com/Download.aspx?SLanguage=en&m=P8Z68-V+PRO&p=1&s=39)
The About dialog says it's 10.5.0.127, even though the Asus website says it's 10.5.0.1026.

Ports 0 and 1 (gray) are SATA 6 Gbps; ports 2-5 (blue) are SATA 3 Gbps.

Port 0: OS installed on Seagate 500 GB disk.
Ports 1-5: 3 TB disk each.
(Yes, the RAID spans the two different port types, which might be part of the problem. I couldn't find any documentation saying it's an issue, and nothing prevented me from setting it up this way...)

My problem is that the machine randomly reboots, both while the array was building and now that it's complete. The time between reboots seems uncorrelated with system load (sometimes it happens while I'm playing Starcraft II, other times while the machine is doing absolutely nothing). I disabled the "automatically restart on system failure" option, but it reboots anyway. After a reboot, when I log back in, I get a little balloon that says "Data on one or more volumes is protected from a disk failure", yet IRST shows the array status as "Normal", so I'm not sure what the deal is.
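
To figure out whether these reboots are real Windows crashes or hard resets below the OS, it helps to confirm that the "automatically restart" setting actually took and whether any minidumps are being written. Here is a minimal Python sketch for that check (it assumes Python 3 is installed on the Windows box; it only reads the documented CrashControl registry values and the default minidump folder):

import glob
import os
import winreg

# Startup and Recovery settings live under CrashControl:
#   AutoReboot = 0 means "Automatically restart" is off
#   CrashDumpEnabled = 0 means Windows writes no dump at all
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\CrashControl",
)
auto_reboot, _ = winreg.QueryValueEx(key, "AutoReboot")
dump_enabled, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")
print("AutoReboot:", auto_reboot)
print("CrashDumpEnabled:", dump_enabled)

# If the box were blue-screening, minidumps would pile up here.
dumps = glob.glob(os.path.expandvars(r"%SystemRoot%\Minidump\*.dmp"))
print("Minidumps found:", len(dumps))

If AutoReboot really is 0 and no dumps ever show up after a reboot, Windows is not crashing at all; the reset is happening at the hardware level.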

A few other potentially useful pieces of information:

0. While the array was building, IRST occasionally complained that the disk on port 4 (one of the 3 Gbps ones) had failed. I used these disks for weeks in a Linux mdadm RAID without any problems, so I'm pretty sure it's not the disks (see the SMART check sketch after this list). I swapped the disks in ports 4 and 5 and the message went away.

1. I thought it might be the RAM. I started running memtest86+ off a USB stick last night and went to bed; when I woke up, the machine had rebooted back into Windows. I tried again this morning, and the same thing had happened by the time I got home from work. I suspect the RAID controller is freaking out, so I'll pull the disks and run memtest86+ again without them.
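
Since the disks already spent weeks in a Linux mdadm array, one quick sanity check is to pull their SMART health with smartctl. A rough Python sketch, assuming smartmontools is installed; the /dev/sdX names are placeholders and will differ on your system (the Windows build of smartmontools accepts the same naming style):

import subprocess

# Placeholder device names; adjust to match the actual drives.
devices = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

for dev in devices:
    print(f"=== {dev} ===")
    # -H prints the overall SMART health verdict; use -A instead to dump
    # the full attribute table (reallocated sectors, pending sectors, etc.)
    subprocess.run(["smartctl", "-H", dev], check=False)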

If anyone has any ideas, I would really appreciate it.

Thanks!
 
First off, you're in pretty deep now :) so how do you go about troubleshooting the reboot problem?
Your post makes it sound like you think the reboots are related to the disk controllers and the array. They could be. But what if they aren't?

The board also has a JMicron controller on it (it drives the red eSATA port), so is it possible its drivers ended up on your system?
Is it enabled in the BIOS?
I had to do an exhaustive analysis of the JMicron driver on my system, and it turned out it had installed itself as a FILTER on the other controller; JMicron had gotten itself into the Intel stack :) I have no idea why they did that. It wasn't really a huge problem until I tried to fix it the stupid way, by just tossing the files.
Plenty of times I and others have tried to use the JMicron controllers on these boards and run into trouble (I still don't forgive them, but the worst of it was my own fault).

If you're not using it and have nothing plugged into it, safely and properly remove any JMicron drivers and disable the controller in the BIOS. Me, I just hosed their driver out of the system and completely lost the ability to boot; that is where the "safely and properly" part comes in :)

It might not have anything to do with your problem, but it's one thing I can think of.

One way to check without messing with anything is to find a utility that shows driver "filters", something I never knew about and really didn't want to. Filter drivers get inserted between other components and into other drivers' stacks; that is how a perfectly fine system can develop strange driver-like problems even when the original drivers are fine.
Filters can get wedged between AV codec components or into other drivers. Usually they serve some purpose, keep things straight, or give hardware information it needs; for licensing they can even be used to block things.
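
If you'd rather not hunt for a third-party utility, you can peek at the storage filter drivers straight from the registry with a few lines of Python. This is just an illustrative sketch; the GUIDs are Microsoft's documented device setup classes, and UpperFilters/LowerFilters are the registry values where filter drivers register themselves:

import winreg

# Documented device setup class GUIDs:
#   Disk drives:               {4d36e967-e325-11ce-bfc1-08002be10318}
#   IDE ATA/ATAPI controllers: {4d36e96a-e325-11ce-bfc1-08002be10318}
#   SCSI and RAID controllers: {4d36e97b-e325-11ce-bfc1-08002be10318}
classes = {
    "Disk drives": "{4d36e967-e325-11ce-bfc1-08002be10318}",
    "IDE ATA/ATAPI controllers": "{4d36e96a-e325-11ce-bfc1-08002be10318}",
    "SCSI and RAID controllers": "{4d36e97b-e325-11ce-bfc1-08002be10318}",
}

for name, guid in classes.items():
    path = r"SYSTEM\CurrentControlSet\Control\Class" + "\\" + guid
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
    for value_name in ("UpperFilters", "LowerFilters"):
        try:
            filters, _ = winreg.QueryValueEx(key, value_name)
            print(name, value_name + ":", filters)
        except FileNotFoundError:
            print(name, value_name + ": (none)")

Anything JMicron-flavored sitting in those lists while the controller isn't even in use would be a red flag.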
 
I just ran Memtest86+ v4.20, with all the hard disks removed, on each of my sticks individually (stick 0 in DRAM slot 0, stick 1 in DRAM slot 1, ...) and also all together. They all passed two rounds of tests, so I declare them "good".

I am not using the JMicron eSATA ports. The drivers are not installed in Windows 7, but I left the controller enabled in the BIOS.

I think the next test is to run with just one known-good hard drive (no RAID) and play 6 hours of Sins of a Solar Empire and Starcraft II. If that's stable, then the RAID setup is almost certainly the culprit.

In that case, I will try running 4x3TB off only the 3 Gbps ports and see what happens.

Thanks everyone! Will keep posting!
 
I think I've narrowed the problem down to my GPU. How, you ask? Science!

Step 0:
I left the BIOS in RAID mode, but deleted all the RAID volumes. 1x500GB + 5x3TB disks. 20 minutes into Sins of a Solar Empire, machine reboots. Hmm, maybe it's the RAID controller.

Step 1:

Leave the BIOS in RAID mode, remove all the 3TB disks altogether. 1x500GB disk for OS. 10 minutes into Starcraft 2, machine reboots. Maybe it's an incompatibility with the RAID mode and the drivers?

Step 2:

Set SATA controller to AHCI mode in BIOS. No 3TB disks, just 1x500GB disk for OS. 5 minutes into 3DMark 11, machine reboots! Definitely not the RAID controller or the RAM!

Step 3:

Install NVIDIA System Tools and turn on temperature logging (every 1 second). Run 3DMark 11 again, reboots in less than 5 minutes again. GPU temperature is at 70 C, CPU cores are at 50, 54, 52, 52 C. Definitely not overheating...
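
For anyone who wants the same logging without installing NVIDIA System Tools, a small script works too. This is a sketch under one big assumption: that the installed driver ships an nvidia-smi new enough to support the --query-gpu option, which is not a given on an older GeForce driver.

import csv
import subprocess
import time

# Log the GPU temperature once per second to a CSV file.
with open("gpu_temp_log.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["timestamp", "gpu_temp_c"])
    while True:  # stop with Ctrl+C
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"],
            universal_newlines=True,
        )
        writer.writerow([time.strftime("%H:%M:%S"), out.strip()])
        fh.flush()  # keep every sample even if the machine hard-resets
        time.sleep(1)

The flush matters here: if the machine resets mid-run, the log still holds every sample up to the moment it died.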

Step 4:

Run BurnInTest 6. Test CPU, RAM, 2D Graphics, Video Playback only (no 3D graphics). Passes after a few hours! CPU temperature reaches 66, 62, 66, 64 C, GPU stays at 35 C. Looks like it's the GPU.

Step 5:

Run FurMark GPU test. GPU temperature reaches 80 C and then machine reboots after 5 minutes. Repeat 3x with the same results. It's very likely the GPU.

Step 6:

Remove the GPU, plug monitor into Intel HD Graphics 2000 iGPU instead and install drivers. Run FurMark again: 4 hours later, still running. Looks like I have a bad GPU.

Next step: borrow a friend's GeForce GTX 480 and see what happens. Though it could still be the power supply?

Btw, nothing is overclocked. Everything in the BIOS is on "Auto" and the RAM timings are at the default "CL9-9-9-24".
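
For the record, here is a quick way to confirm what the DIMMs report to Windows (wmic ships with Windows 7; the Speed column can come back blank on some boards, depending on the SMBIOS data):

import subprocess

# Win32_PhysicalMemory via wmic: part number and reported speed per DIMM.
subprocess.run(["wmic", "memorychip", "get", "PartNumber,Speed"])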
 
Run FurMark GPU test. GPU temperature reaches 80 C and then machine reboots
Ouch.

Can you use some GPU tool to set up a custom fan curve that keeps the GPU temperature pinned at one value?
 
OK, it's not the GPU. I just swapped in a known-working GeForce GTX 480 (from another computer that's been stable for over a year). Same symptoms: under graphics load (3DMark 11 or FurMark), the machine reboots itself.

The only things I can think of are power supply and motherboard.
 
Science FTW! Swapped in a different power supply (an Enermax 625 W this time) and FurMark has been going strong, maxed out, for 38 minutes now. My GPU (the 560 Ti) is holding steady at 80 C.
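
For what it's worth, a back-of-the-envelope power budget says the 620 W unit should never have been anywhere near its limit, so a flaky unit rather than an undersized one fits the symptoms. Every number below is an assumption taken from spec sheets or typical figures, not a measurement, and FurMark can push a GPU past its rated TDP:

# Rough sustained-load estimate for this build.
parts = {
    "i7-2600 (95 W TDP)": 95,
    "GTX 560 Ti (170 W TDP)": 170,
    "5x WD30EZRS (~6 W each, active)": 5 * 6,
    "500 GB OS drive (~8 W)": 8,
    "Motherboard, RAM, fans (estimate)": 50,
}
total = sum(parts.values())
print(f"Estimated sustained load: ~{total} W on a 620 W PSU")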
 