
3 X Raptor 150 in Matrix Raid 0 - Results and Information

Can you show me where regular Raid can achieve 350Mbs on read & write? I was/am unaware that regular raid can do that well.
It can, can't it? You just can't measure it, since benchmarks can't choose a partition or benchmark only a particular section of the drive. If you do a Matrix array, or create a standard RAID0 array on an ICHxR chip and install your OS and programs which take up, say, 10GB, you've only used the outside 5GB of each platter. Won't you then get the same performance from the hard drives? Six of one, half a dozen of the other...
 
The last (best) bench I got with my 2 x 74GB Raptor "regular" RAID 0 at a 16k stripe was 215 MB/s on NVRaid. That was the best I had ever gotten, and until Intel Matrix I had not seen any higher.
 
If some of you still doubt that Intel RAID is just another "software ONLY" gimmick, read my post here.

I've been testing intensively: "plain RAID 0" on the Windows Server version vs. plain RAID 0 on the Intel ICHxR. Forget about that Matrix thingy with mixed types of RAID arrays - plain Intel RAID still beats the hell out of software-based RAID.
 
krag said:
Can you show me where regular Raid can achieve 350Mbs on read & write? I was/am unaware that regular raid can do that well.

Sure. Pile six 7200.10s onto any RAID0 implementation that's not crippled by bandwidth issues (such as being a 32-bit, 33 MHz PCI card) and you'll hit 350 MB/sec and change on the outside (each drive does ~80 MB/sec on the outside IIRC, and RAID0 STR scales more or less linearly).
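Since RAID0 STR scaling comes up repeatedly in this thread, here's a back-of-envelope sketch of the claim. It's a rough model, not a measurement: the ~80 MB/s per-drive figure is the estimate from the post above, and 133 MB/s is the theoretical ceiling of a 32-bit/33 MHz PCI bus (the "crippled" case).

```python
# Rough model: RAID0 sustained transfer rate scales ~linearly with drive count
# until a bus bottleneck caps it. Figures are illustrative, not measured.

def raid0_str(per_drive_mb_s, drives, bus_cap_mb_s=float("inf")):
    """Estimated outer-zone STR of a RAID0 array in MB/s."""
    return min(per_drive_mb_s * drives, bus_cap_mb_s)

print(raid0_str(80, 6))        # six 7200.10s, no bottleneck: 480 MB/s
print(raid0_str(80, 6, 133))   # same array behind 32-bit/33 MHz PCI: 133 MB/s
```

With no bottleneck, six drives clear 350 MB/s comfortably; behind plain PCI the entire array is slower than two bare drives.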

krag said:
The last (best) bench I got with my 2 x 74GB Raptor "regular" RAID 0 at a 16k stripe was 215 MB/s on NVRaid. That was the best I had ever gotten, and until Intel Matrix I had not seen any higher.

If you're reading 215 MB/s you're not benchmarking properly. A Raptor peaks at ~85 MB/sec on the outside of the disk, so it's simply impossible to have a sustained read/write rate faster than this. What you may be seeing is the burst transfer number, which is completely meaningless.

RAID0 implementations are very trivial - there's not much a driver can do to gain performance. From what I've measured (with a fairly decent range of drives and cards from old 7200 RPM ATA drives to 15000 RPM SCSI drives), Windows' RAID0 implementation is as good as or better than other software implementations. The main downside to using Windows' spanned volumes as opposed to driver-based software RAID is that Windows cannot boot off a spanned volume.
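To illustrate how little a RAID0 driver actually has to do, here is a minimal sketch of the address math. This is an illustration only, not Intel's or Microsoft's actual code; the stripe size and disk count are arbitrary.

```python
# Locating a logical block in a RAID0 array is two divisions and a modulo -
# there is essentially nothing for a driver to optimize.

def raid0_map(lba, stripe_sectors, n_disks):
    """Map an array LBA to (member disk, LBA on that disk)."""
    stripe_unit = lba // stripe_sectors        # which stripe unit holds the block
    disk = stripe_unit % n_disks               # units rotate round-robin over disks
    disk_lba = (stripe_unit // n_disks) * stripe_sectors + lba % stripe_sectors
    return disk, disk_lba

# 16 KB stripes (32 sectors of 512 bytes), two disks:
print(raid0_map(0, 32, 2))    # (0, 0)
print(raid0_map(32, 32, 2))   # (1, 0)
print(raid0_map(64, 32, 2))   # (0, 32)
```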

Other RAID levels are a little more complex though.

"Matrix RAID" is nothing more than the name that Intel gives the option to split a physical set of drives into multiple RAID arrays. This is a fairly standard feature on hardware RAID cards, but for whatever reason is not implemented in many software RAID products (such as Silicon Image or chipset RAID). The OP used Matrix RAID to make two RAID0 arrays each containing one partition. This is identical in performance to using a single RAID0 array (ie: not Matrix RAID) and partitioning it. The only downside is that HDTach (AFAIK) does not support benchmarking at the partition or volume levels, only at the device level.
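A quick bit of arithmetic behind the slice/partition equivalence (illustrative only): a volume at the start of a RAID0 array occupies the first volume_size / n_drives of each member drive, whether it was created as a Matrix slice or as a partition on one big array.

```python
def outer_zone_used_per_drive(volume_gb, n_drives):
    """GB of each member drive's fastest (outer) zone used by a volume
    placed at the start of a RAID0 array or Matrix slice."""
    return volume_gb / n_drives

print(outer_zone_used_per_drive(10, 2))            # 5.0 GB of each platter, 10 GB on 2 drives
print(round(outer_zone_used_per_drive(20, 3), 2))  # 6.67 GB each for a 20 GB slice on 3 Raptors
```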
 
6 x 7200 RPM drives is not normal....:-/

An answer to your other reply... I did not say anything about sustained R/W. *edit... it is not impossible for Raptors to put out more than 85 MB/s - that's why we use RAID 0!

If you have used HDtach before you would understand the score.
 
krag said:
6 x 7200 RPM drives is not normal....

OK. Show me any two-drive RAID that can do 350 MB/sec. You pick the drives, the card, system, everything. Solid state drives are allowed.

edit: In hindsight, I should limit the interface to SATA, SCSI, and SAS. I'm sure there's got to be some place out there that makes an SSD with a custom interface card (or 10Gb FC) that can do 1 GB/sec with a single drive ...

krag said:
An answer to your other reply... I did not say anything about sustained R/W.

Well, you quoted a number in MB/sec so I assumed you were talking sustained speeds. If you believe numbers like burst speeds have an impact on performance, then I've got a smokin' fast 4.3 GB SCSI drive that does 1100 MB/sec that I can sell you for the low low price of $100 ...

krag said:
If you have used HDtach before you would understand the score.

I have used HDTach extensively. Are you quoting the burst speed, maximum sustained transfer rate, minimum transfer rate, or average transfer rate? Maximum or average sustained transfer rates are the number usually used from HDTach, and in the case of small RAID volumes at the start of the disk, these are more or less the same. Hence I assumed one of these. If you are going to use nonstandard numbers please say where you are getting them from to help avoid confusion.

Regarding the Intel "is it software" thing - I can say with nearly 100% certainty that there is no RAID0-assisting hardware in the southbridge. How? Simply because there is no way to assist RAID0 in hardware short of adding cache, and even that has very little effect. Additionally, in the ICH7R there appears to be no RAID1-assisting logic (essentially dispatching a single write request to both drives), since IIRC there is a slight performance decrease in write-intensive situations. I'll try and dig up the site this was from (it was a while ago, obviously). Finally, there is no RAID5 XOR engine in there either, as can easily be seen by looking at CPU load for RAID5. The ICH8R may have RAID1 assistance - I haven't seen any benchmarks to suggest either way and haven't had the chance to test it myself.
 
It's pointless to argue with you any further. I have better things to do with my time. Maybe you can find someone else to argue with you? Have a good day...:)
 
Yes, the OP (that would be myself) chose to use a double RAID 0 configuration rather than use the Matrix implementation of a RAID 0 / RAID 1/5 array (which is Intel's great idea) on the same physical drives. Quite frankly, I do not need data mirroring solutions on my system.

I guess you do bring up a valid point, because theoretically a small boot partition on the array (vs. creating a Matrix slice) would read/write and access from the quickest portion of the platter on all raided HDDs. So technically a plain (non-Matrix) 3-drive RAID 0 should be exactly the same as creating a 3-drive slice on the Matrix. But honestly, my experience proves otherwise.

I do agree with the previous posters based on my experience with raided hard drives in various configurations and different controllers. In terms of synthetic performance and sustained transfer rate, the Matrix array produced 30 MB/s more in HDTach and about 60 to 70 MB/s more in the upper portions of ATTO Diskbench than the exact same WD150 Raptors x 2 on NVRaid (plain RAID 0). That would be a comparison of 2 x WD150 Raptors on my old NVRaid setup vs. 2 x WD150 Raptors on my Matrix setup. This was achieved by simply creating slices on the Matrix (rather than partitions) and enabling write-back caching through Intel's control panel.

My own honest opinion is also that, in terms of real-world performance and the "snappy feel" of the XP environment, RAID 0 using the Matrix is noticeably snappier than RAID 0 using NVRaid or another similar (plain RAID 0) controller/software.

So you bring up good questions that definitely need more research:

1. Is a standard "plain raid 0" 20GB boot partition the same as a "Matrix Raid 0" boot slice?

2. Does Intel's Matrix write-back caching produce any real-world performance gains over "plain raid 0", disregarding the proven synthetic benchmarks that I posted in HDTach and ATTO?

Finally, the most important question:

3. Are there any gains or benefits to running the Matrix with a dual-slice Raid0/Raid0 if you do not plan on using the Raid0/Raid1-or-5 implementation that Intel designed the Matrix for?

These are good questions, and I appreciate everyone's feedback. Maybe someone else will chime in with an opinion.
 
dominick32 said:
rather than use the Matrix implementation of a RAID 0 / RAID 1/5 array (which is Intel's great idea) on the same physical drives.

Just a small point ... the idea is far from new. Hardware-based cards (and people using hardware-based cards) have been doing this for years. For some reason it's just never been implemented in chipset software RAID yet.

dominick32 said:
I do agree with the previous posters based on my experience with raided hard drives in various configurations and different controllers. In terms of synthetic performance and sustained transfer rate, the Matrix array produced 30 MB/s more in HDTach and about 60 to 70 MB/s more in the upper portions of ATTO Diskbench than the exact same WD150 Raptors x 2 on NVRaid (plain RAID 0). That would be a comparison of 2 x WD150 Raptors on my old NVRaid setup vs. 2 x WD150 Raptors on my Matrix setup. This was achieved by simply creating slices on the Matrix (rather than partitions) and enabling write-back caching through Intel's control panel.

FWIW, ATTO is almost completely useless for measuring performance as soon as you have cache somewhere along the line.

With regard to the HDTach numbers, the key thing here is that you had the WB cache enabled for the Matrix RAID test. Although WB cache should (in theory) have no effect on read performance, bing showed that it did in one of his other threads. I remember in earlier Matrix RAID implementations there being something about it being for RAID5 only. I'm not sure what it says now (in the main window or help file), but obviously it changes something for the RAID0 arrays.

One thing that it could be doing is full-stripe reads when it has WB caching enabled, and non-full-stripe reads with it disabled, which would make some sense. Doing this should hurt its performance in an IOMeter run, so it'd be interesting to see. Alternatively, HDTach could just be tickling the Matrix RAID driver slightly wrong when it's not in WB caching mode. Particularly with caching hardware cards, I've found that HDTach's measurement methods do not work well. h2bench generally provided a more consistent/logical number, though in some cases I had to use my own STR benchmark and tweak it for the specific card/RAID type to get it working well. I would do some more investigation, but the computer I was using has ascended to the Linux plane of existence, which kinda stuffs up the ability to use the Windows drivers ...
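For what it's worth, the full-stripe-read guess can be made concrete. This is purely hypothetical behaviour, nothing documented about the Intel driver, but it shows why such a policy would flatter sequential benchmarks like HDTach while hurting small random I/O in IOMeter:

```python
def full_stripe_read_span(offset, length, stripe):
    """Expand a requested byte range [offset, offset+length) outward to
    whole-stripe boundaries, as a hypothetical full-stripe-read policy would."""
    start = (offset // stripe) * stripe              # round down to stripe start
    end = -(-(offset + length) // stripe) * stripe   # round up to stripe end
    return start, end

# A 4 KB random read inside a 16 KB stripe costs a full 16 KB transfer:
print(full_stripe_read_span(4096, 4096, 16384))   # (0, 16384)
```

Sequential reads pay no penalty (consecutive requests reuse the stripes already fetched), which is exactly the HDTach-friendly, IOMeter-hostile pattern described above.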

dominick32 said:
1. Is a standard "plain raid 0" 20GB boot partition the same as a "Matrix Raid 0" boot slice?

You probably won't be able to measure this using vanilla HDTach or most other disk benchmarking software, since they tend to operate at the device level. I've written a thing that makes partitions appear as devices (required for testing Windows' RAID) that I can clean up for public consumption if you want. It'd also be good to get a comparison with Windows' stripe volumes as well.

dominick32 said:
2. Does Intel's Matrix write-back caching produce any real-world performance gains over "plain raid 0", disregarding the proven synthetic benchmarks that I posted in HDTach and ATTO?

I think some trials also need to be done using other STR benchmarking tools - Winbench and h2bench use different methods to HDTach and often get different results.

dominick32 said:
Finally, the most important question:

3. Are there any gains or benefits to running the Matrix with a dual-slice Raid0/Raid0 if you do not plan on using the Raid0/Raid1-or-5 implementation that Intel designed the Matrix for?

Indeed.





krag said:
It's pointless to argue with you any further. I have better things to do with my time.

Like making other vague posts, with benchmark numbers coming from nonstandard places, resulting in impossibly high results? OK, go have fun ... :p

Seriously though, if you do make a post where you use a nonstandard method (or if the "standard" method is ambiguous), then say exactly what you did and what number you are reporting. The "standard" method for getting a MB/sec number from HDTach is to use the average sustained read speed. Since a 215 MB/sec average (or even maximum) sustained read speed is impossible to get with two Raptors, you obviously did not report these numbers.

krag said:
edit... it is not impossible for Raptors to put out more than 85 MB/s - that's why we use RAID 0!

Sorry, I forgot to expand the start of that paragraph. It is impossible for a *single* Raptor drive to have a STR of more than 85 MB/sec (give or take a bit for the pedants who precisely measure it to be 86.41532 MB/sec or whatever). Using RAID0 does not increase the performance of the component drives. Since the STR of two drives cannot be more than the sum of the STRs of the two individual drives, the maximum STR for two Raptors (no matter how they are hooked together) is 170 MB/sec.
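The arithmetic of that ceiling, spelled out using the ~85 MB/s outer-zone figure quoted above:

```python
SINGLE_RAPTOR_STR = 85   # MB/s on the outer zone, per the post above
claimed = 215            # MB/s reported earlier in the thread

ceiling = 2 * SINGLE_RAPTOR_STR   # two-drive RAID0 cannot exceed the sum of member STRs
print(ceiling)                    # 170
print(claimed <= ceiling)         # False - 215 MB/s cannot be a sustained figure
```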
 
emboss said:
Particularly with caching hardware cards, I've found that HDTach's measurement methods do not work well.


Then does HDTach give higher or lower values with cards that have cache?

I have an Adaptec 2200S/64 controller that seems to hit a ceiling with 2 or more (3-4) Fujitsu MAS3184 SCSI drives (138 MB/s burst and 70 MB/s sustained), and that is not too far off from the single-drive benchmark. I have been very ticked off about the benches, and I am considering going to Matrix RAID with some WD1600JSs.
 
hcforde said:
Then does HDTach give higher or lower values with cards that have cache?

For sustained speeds, it reads low. For burst speeds, it reads high (measures cache<->system bandwidth instead of drive interface bandwidth). But then burst speed never really means much anyhow ...

hcforde said:
I have an Adaptec 2200S/64 controller that seems to hit a ceiling with 2 or more (3-4) Fujitsu MAS3184 SCSI drives (138 MB/s burst and 70 MB/s sustained), and that is not too far off from the single-drive benchmark.

Your problem is the RAID card. It uses an Intel 80303 (aka IOP303) which tops out, as you have experienced, at around 70-80 MB/sec. And it will completely die if you throw RAID 5 at it. If you're looking for STR speeds with the drives you've got, you'll really need an IOP321 or IOP33x based card, which will set you back a bit ...

On the other hand, unless you have an application that needs to be able to stream to/from disk very fast without seeking (basically limited to A/V work), you'll find your SCSI array will easily beat a 7.2K-based Matrix RAID array. Or probably will, anyhow ... Adaptec cards were never particularly great performers.
 
I did not read this entire thread yet, but I find this to be very interesting, since I have also run 3 Raptors on Matrix RAID (older 74GB Raptors). Dom, your access times don't make any sense to me. You should be getting faster access times. I managed to dig up one of my old screenshots.



Now, that screenshot is of a 10GB slice. But keep in mind that was done with the old original 74GB Raptors (8MB cache). I think something is wrong here. You should be in the 4.4 ~ 5.5ms range with your setup.
 
hyperasus said:
I did not read this entire thread yet, but I find this to be very interesting, since I have also run 3 Raptors on Matrix RAID (older 74GB Raptors). Dom, your access times don't make any sense to me. You should be getting faster access times. I managed to dig up one of my old screenshots.

Now, that screenshot is of a 10GB slice. But keep in mind that was done with the old original 74GB Raptors (8MB cache). I think something is wrong here. You should be in the 4.4 ~ 5.5ms range with your setup.

Please use HDTACH before we jump to any conclusions. But if true, this is great info bud!!! :beer: However, I did not measure my access times with HDTUNE - perhaps HDTUNE produces faster synthetic access times?

I need to see an HDTACH shot first of the exact HDTUNE drive setup otherwise we do not have a valid comparison.

Regards,
 
Sorry, but even though I remember running HDTACH, I can't find the dang screenshot. If you have the drives on a Windows XP machine then maybe you could run HDTUNE and see what it does. HDTUNE doesn't work with Vista, last I checked. It's highly possible you are right, though, and it is just a matter of the software.

I just ordered 5 of those 150gb raptors last night that I will be running on ICH9R so I will be doing some more testing of my own.
 
Pssttt... new HDTune 2.54 (3 September 2007) HERE

Changes log:
Added option to change block size
Improved compatibility with Vista
Fixed problems with Fahrenheit temperature display
 
hyperasus said:
Sorry, but even though I remember running HDTACH, I can't find the dang screenshot. If you have the drives on a Windows XP machine then maybe you could run HDTUNE and see what it does. HDTUNE doesn't work with Vista, last I checked. It's highly possible you are right, though, and it is just a matter of the software.

I just ordered 5 of those 150gb raptors last night that I will be running on ICH9R so I will be doing some more testing of my own.

I would love to, but I am currently on a single Raptor 150 setup. I sold a lot of my rig earlier this year. Don't know if you remember. But I would love it if you could dig deeper on this issue for me, because it definitely made me raise my eyebrows a bit. lol
 
My results (3 x 36GB Raptor setup using Vista Ultimate 64-bit) are mirroring (pun intended lol) yours, Dom, so I hope we can work out what Hyper did that was different.

http://www.ocforums.com/showpost.php?p=5209683&postcount=918

The couple of posts after it show HDTach & Atto fwiw.

edit... Thanks bing for the new version alert... unchanged pretty much for me though...
[attached screenshot: HDTune2.543raptor.jpg]

Hyperasus... I wonder if the difference is that you were running a 10Gb slice, Dom a 20Gb slice and me a 49Gb slice?
 
It would make sense that Raptors are more sensitive to how large a slice you use. They are much smaller drives, so even though they are faster all around, the difference between the fastest part of the drive and the slowest part isn't nearly as dramatic as on a large 7200rpm drive. As soon as my drives get here I will do a 10GB slice over 5 drives and see what kind of numbers it pulls off.
 
hyperasus said:
It would make sense that Raptors are more sensitive to how large a slice you use. They are much smaller drives, so even though they are faster all around, the difference between the fastest part of the drive and the slowest part isn't nearly as dramatic as on a large 7200rpm drive. As soon as my drives get here I will do a 10GB slice over 5 drives and see what kind of numbers it pulls off.

Sounds logical. Your 4.5ms seek... was that with 3 x 150Gb drives? edit... I see it was the 74Gb ones and 3 of them - can you run a 10Gb slice on 3 x 150's before you set up your 5 x 150's maybe?
 