RAID Cards – Lies, Damned Lies and Benchmarks

This started off as a comparison between the IWill SideRAID66 and the hacked version of its Promise competition.

However, it quickly dawned on me that the real story was not what was being measured but the measurements. I’ll compare the cards in Part II, but the big lesson to be learned is not which card is better,
but rather the danger of using a benchmark or two to determine what you buy.

Imposing Simplicity When There Isn’t

We like to keep things as simple as possible. We’d much rather consider just one or two factors rather than ten or twenty.

Unfortunately, this desire often leads us to oversimplify: crunch everything down to a number, then assume that that single number tells us everything we need to know. Some things just can’t be crunched down to a single number and retain much, or even any, validity for a specific situation. This is one of them.

For example, you’ve probably had an IQ test at one time or another. Imagine you take a test for 5 different IQs – for example, one for math ability, one for verbal etc. Now Steve Numbers does tremendously in math and terribly in the rest. Ann Average does OK in all of them. When you clump the numbers together and compare the two, Steve rules.

If the only thing you go by is that benchmark, are you going to be happy with Steve if you want to hire him as an editor? No matter how much better Steve’s overall score was than Ann’s?

That’s just what you’re doing if you judge based on a benchmark like High End Disk Winmark. If you look at High End Winmark numbers for Win2000, find them much higher than those for Win98, and decide that this is the OS for you to load up big audio files for sound editing, you just hired Steve, who can’t write a coherent sentence, as your Chief Editor.

That’s the major problem with benchmarks as they are often used. Using just one or two numbers to judge an entire situation carries with it the assumption that if one thing is generally better, it will be better at everything. Sometimes that is true. In my testing and review of other benchmarks, it almost never was.

Clumping everything together is like clumping together those IQ tests. Sure, you get a single average number, but does it really help you find your Chief Editor?

Unspoken Assumptions

A benchmarker often takes a range of activities, and places relative values on those activities. Those may well not be the values you would place on those activities, based on what you do. The guy making up that IQ benchmark might give math a weight of 10%, and everything else 90%. You may well give your favorite thing a weight of 90%, and everything else 10%. If that’s the case, it’ll almost be pure accident if that benchmark points you the right way.
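To make the weighting problem concrete, here's a minimal sketch (the candidates, sub-scores, and weights are all made up for illustration) showing how the same two sets of sub-scores rank in opposite order depending on whose weights you use:

```python
# Hypothetical sub-scores for two "candidates" on five tests (0-100 scale).
steve = {"math": 98, "verbal": 40, "logic": 45, "memory": 42, "spatial": 44}
ann   = {"math": 70, "verbal": 72, "logic": 68, "memory": 71, "spatial": 69}

def composite(scores, weights):
    """Weighted average of the sub-scores; weights should sum to 1.0."""
    return sum(scores[test] * weight for test, weight in weights.items())

# The benchmark author's weighting: math counts for most of the score.
author_weights = {"math": 0.60, "verbal": 0.10, "logic": 0.10, "memory": 0.10, "spatial": 0.10}
# Your weighting if you're hiring an editor: verbal ability is what matters.
editor_weights = {"math": 0.05, "verbal": 0.75, "logic": 0.10, "memory": 0.05, "spatial": 0.05}

for label, weights in (("author's weights", author_weights), ("editor's weights", editor_weights)):
    print(f"{label}: Steve={composite(steve, weights):.1f}  Ann={composite(ann, weights):.1f}")
# With the author's weights Steve "wins"; with the editor's weights Ann does.
```

Same raw numbers, opposite conclusions; the only thing that changed was who picked the weights.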

For instance, the IDE RAID cards certainly move large blocks of data faster than single drives. No doubt about that. However, you then have to answer two questions before you can judge how important that fact is to you:

  1. Just how often do you move large blocks of data around?
  2. If you do, how much improvement can you expect when you move your large blocks of data around in your specific activities?

Benchmarks can’t answer the first question, and as we’ll see, they don’t answer the second one too well, either. For one thing, in some tests, I found out that size matters, and just because the equipment can handle the little ones doesn’t mean it’s as good at handling the big ones. 🙂

Random And Not-So-Random Variations

If a comparison between two items is within a couple percentage points, odds are the difference falls within the margin of error. You can’t say one is better than the other. As you’ll see, much of the Promise/IWill testing is like that, a 1-2% difference. The results I got from each card varied more than that; the difference is meaningless.
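One rough sanity check is to compare the gap between two items against the spread of your own repeated runs. A minimal sketch, with made-up run numbers purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical Business Disk Winmark results from four runs on each card.
card_a_runs = [4940, 4890, 5010, 4960]
card_b_runs = [4980, 5020, 4930, 4990]

mean_a, sd_a = mean(card_a_runs), stdev(card_a_runs)
mean_b, sd_b = mean(card_b_runs), stdev(card_b_runs)

gap = abs(mean_a - mean_b)
noise = sd_a + sd_b  # crude yardstick: the combined run-to-run spread

print(f"A: {mean_a:.0f} +/- {sd_a:.0f}   B: {mean_b:.0f} +/- {sd_b:.0f}   gap: {gap:.0f}")
if gap <= noise:
    print("The gap is inside the run-to-run variation; call it a tie.")
else:
    print("The gap is outside the run-to-run variation; the difference may be real.")
```

If the gap doesn't clearly beat the spread of your own runs, don't declare a winner.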

Even when differences do exceed the margin of error, that doesn’t necessarily mean that one piece of equipment with better general numbers is better for you and what you do. I saw reviews using different, poorer drives than my IBM 22GXPs, but sometimes they’d do better in a particular test anyway. Sometimes I would do better than IBM 34GXPs. Compare my results to those from a different brand of hard drives in roughly the same class, and I would do significantly better (often 10% or more, occasionally 20% or even 30%) in some tests, and they’d do significantly better in others. I even saw a couple benchmarks from users where I did better than a couple SCSI RAID setups with 10K rpm hard drives. (Not usually, though.:))

Even if you are lucky enough to have a benchmark test your particular application and show that one piece of equipment is better than another, don’t assume that will be the case for what you do with the application. As you’ll see, just because one card does better than another generally doesn’t mean it will always do better.

Tampering With The Evidence

I’m not talking about deliberate tampering. Rather, I’m talking about unknowingly doing something that serves to rev things up.

ZDNet makes up a series of benchmarks, and I used a number of them in this testing. I used Winbench to test disk performance, and Winstone and Content Creation to test how much faster hard drive speeds affected a system benchmark.

I got some big differences in some of the testing, and it took me a while to figure out why. What I found was the Viagra for Winbench. Both Winstone and Content Creation pumped up Disk Winmarks a lot, and that effect only went away if I doused it with the cold water of some other Winbench tests. If I didn’t douse, the effects lingered.

Single IBM 22GXP Drive | Winbench before running Winstone | Winbench after running Winstone | Winbench after running Winstone and running Disk Transfer/Access
Business Disk Winmark | 4140 | 5430 | 4030
High End Disk Winmark | 13800 | 17000 | 13900
AVS/Express | 9100 | 16500 | 9000
Front Page 98 | 88900 | 98700 | 84400
Microstation | 13600 | 22100 | 13200
Photoshop | 6920 | 7370 | 7290
Premiere | 13700 | 15500 | 13400
Sound Forge | 27900 | 27500 | 27800
Visual C++ | 17000 | 17100 | 17000

Let’s look at a few of these more carefully:

Benchmark | Pre-Winstone | Post-Winstone | Difference
Business Disk Winmark | 4140 | 5430 | 31.2%
High End Disk Winmark | 13800 | 17000 | 23.2%
AVS/Express | 9100 | 16500 | 81.3%
Microstation | 13600 | 22100 | 62.5%

Quite a difference after you run a benchmark that claims to “restore your system after testing,” wouldn’t you say?

If you keep running Winbench, the effect erodes, but even after running it four straight times, you still have some scores significantly higher than pre-Winstone. The AVS/Express score remained well over 30% higher.

Even worse, look at this:

Benchmark | Pre-Winstone | Post-Winstone | Score after five Winbench runs
Photoshop | 6920 | 7370 | 7290

The improvement on this test never went away, even after repeated reruns.

Nor is the Winstone Effect just a single-drive or Win98 phenomenon; the effect is somewhat smaller with RAID or Windows 2000.

The point to all this is not to say that other reviewers regularly screw up (this did not seem to have happened in the other RAID reviews I’ve seen) or to automatically assume that an unusually high score is due to this. It’s to point out that benchmarking isn’t as bulletproof as the benchmarkers would have you believe.

For that matter, after some of the strange things I’ve witnessed, I wouldn’t be surprised if some of you tried the same thing (please do and send me the results) on different brands of hard drives and didn’t find it, or found something even worse. There were so many other tests where drives you would have thought were similar showed much different results.

When I compared my results with those from other drives that were fairly close to the 22GXPs in performance (going by the benchmarks at Storage Review), I sometimes got much different results in all kinds of tests. To my amazement, I even saw completely different results in certain synthetic tests for MY OWN DRIVES.

Granted, I can see some possible reasons why somebody’s newer 34GXPs might have 30% slower access times than my 22GXPs, or why my throughput scores were much more stable than many other results I saw out there. But I can’t really explain why someone using the same drives and RAID controller (and a slower processor/bus speed) started off much faster than I did, or why my throughput looked almost flatline while some of theirs looked like one continuous heart seizure.

It wouldn’t be so bad if I were the only one with wacky numbers (maybe I did something wrong), but others got results like mine, too.

Then you have changes I can’t think of any reason for.

For instance, on Saturday, after seeing some outrageously high Winbench scores (higher than I was getting with RAID!) in a review of ATA66 PCI controllers, I decided to try both cards in that mode to see if running in non-RAID ATA66 mode really made a difference. I ran a quick Winbench on both, saw little difference, and thought that was that.

I was doing some web browsing, concentrating on reading some RAID-related posts in a forum, when I saw there were beta drivers and BIOS for the Promise card. I decided to go the extra yard, and loaded both.

Ran a quick Business Winmark: 40% improvement, even more than the Winstone Viagra. I said, “Uhh?” then graduated to four-letter thoughts.:)

Seeing an additional day of benchmarking before my eyes, I immediately ran the Winmarks with the cold-water tests again; the numbers jumped right back to where they had been.

Now unless the Winstone Effect survives FDISK and FORMAT, it wasn’t to blame; none of those bad boys had ever even been on the hard drive after formatting. Then what? Damned if I know.

The Fog Of War

Benchmarking usually takes place under tightly controlled circumstances, in order to be able to exactly replicate conditions for comparison. This is all well and good, but do you restart after every action you take? Of course not. You do a variety of things, very untidy from a benchmarking perspective, but nonetheless typical. I wondered just how well these systems would work if you added a tiny little bit of chaos to the system, and the results are interesting.
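If you want to add a pinch of that chaos to your own testing, one simple approach is to stop running the tests in the same tidy order every time. A rough sketch (the test names here are placeholders; in practice each entry would launch a real benchmark):

```python
import random

def run_pass(tests, seed):
    """Run every test once, in an order that changes from pass to pass."""
    order = list(tests)                 # copy so the master list stays put
    random.Random(seed).shuffle(order)  # seeded so a pass can be reproduced
    for name, launch in order:
        print(f"running {name}")
        launch()

# Placeholder entries; swap the lambdas for whatever actually starts each test.
tests = [
    ("business_disk_winmark", lambda: None),
    ("high_end_disk_winmark", lambda: None),
    ("photoshop_open_save", lambda: None),
]

for pass_number in range(3):
    run_pass(tests, seed=pass_number)
```

If the scores move around a lot depending on what ran before them, that tells you something no single tidy run will.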

Part II will illustrate all this by comparing the two cards, using both my benchmarks, and referring to numbers generated for different setups elsewhere in other reviews.

Email Ed


Background

  1. RAID has long been in use in SCSI systems to provide increased disk throughput and/or automated data backup. These controllers were (and are) intended for high-end workstations or servers performing certain specific tasks. Both the RAID controllers and the SCSI drives fueling them were (and are) very expensive.
  2. Last year, Promise introduced the Fasttrak and later the Fasttrak66 IDE RAID card. While nowhere near as elaborate as the SCSI setups (a limited number of drives, no memory cache on the controller), they were also nowhere near as expensive, and do remarkably well against the SCSI setups from a cost/benefit perspective.
  3. When the Fasttrak66 card came out, people commented how similar it looked to the Promise Ultra66 card. Some enterprising soul discovered that they were essentially the same thing outside of a resistor and solder or two, which was easily remedied. Soon, many enterprising souls were buying $25 Ultra66 cards and turning them into functional equivalents of the now $85 Fasttrak 66s.
  4. A couple months ago, IWill came up with its version of a RAID card, and now it looks like everyone and his brother is coming up with one, too.
  5. RAID doesn’t help everything, or even most things, when it comes to performance. It won’t make you boot faster (matter of fact, you’ll boot much slower, figure at least another thirty seconds for either card), it won’t make most applications or games load any faster, either (though in the case of games, it may help load new levels quicker). For performance, RAID is really good with really big single files. Video and audio files are the kinds of files that RAID will help you with.

Also see my initial IWill RAID review HERE for more discussion on the types of RAID and how it works.

Comments on installation

Since I don’t want to reinvent the wheel, there are plenty of reviews out there that will tell you what comes in the box and general installation instructions. All I have to say is both cards
can be a little quirky to install and upgrade.

The IWill has a BIOS upgrade that demands that you identify one of five flash BIOSes (like you memorize these things when you glance at the card). What was even more exciting for me was to find out
that my particular BIOS didn’t exactly match any of the options. I went with the closest one, and went 50-50; it failed for version 1.04 (but still worked) and succeeded in version 1.05. No matter what the version, it really liked calling my first hard drive
UDMA2 rather than UDMA4.

The Promise card is pretty finicky as to whether an array it creates is bootable or not. In theory, you’re supposed to hit the space bar to designate an array as bootable (that’s in the manual, not on the screen), but it seemed to decide that when it felt like it.

The Promise card really is a jealous mistress though; it really doesn’t want to see any other hard drives around it can’t control. If you don’t have a BIOS setting like “boot first from SCSI,” I’d think long and hard about trying one of the other boot options, but if you do, just remember to use it, because it won’t work if you don’t.

If you have another SCSI controller, save yourself time and make sure either the Promise or IWill controller is in a PCI slot ahead of the SCSI controller. They want to go first.

If you have the Promise card, read the next section on what you may need to do to properly format your striped hard drive.

Maybe this was just my system, but initially, installing both cards paralyzed my ability to boot off a floppy. The floppy would start to boot, go for about five seconds and just die. In both cases, eventually it stopped doing that, but don’t ask me why. If it happens to you, you didn’t kill your computer.

Comments on features

The Promise card gives you some flexibility in implementing a RAID solution. The IWill is more a “one size fits all” type of card that makes the decision for you.

The Promise card lets you choose your stripe size; the IWill card does not. For a full discussion and graphs of the effect of stripe size on RAID performance, go HERE for a lot more detail on what you might expect from bigger stripes and clusters.

If you went over there and looked, you noticed that cluster size was also discussed. No, the Promise card does not adjust the size of clusters. That means you either need a copy of Partition Magic handy, or, when you format, you’ll have to force the cluster size to the one you want. The DOS command for that is format x:/Z:n. What are you supposed to put in for “n”? The number of 512-byte sectors you want each cluster to take up.

In English:

  1. If you want 4K clusters, you type in: format x:/Z:8
  2. If you want 8K clusters, you type in: format x:/Z:16
  3. If you want 16K clusters, you type in: format x:/Z:32
  4. If you want 32K clusters, you type in: format x:/Z:64

Of course, the bigger the cluster, the more space it wastes. If you don’t know why, you don’t need to; all you need to know is that your files may suddenly take up 15-25% more room if you go to a performance-maxing 64K stripe/32K cluster setup. Whether that’s worth it for the extra few percentage points of speed you get is up to you.
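For the curious, the arithmetic behind those /Z values, plus a rough guess at the slack-space penalty, looks like this (a sketch: the half-a-cluster-per-file rule of thumb and the file count are assumptions, not measurements):

```python
SECTOR = 512  # bytes; FORMAT's /Z switch takes the number of 512-byte sectors per cluster

def z_value(cluster_kb):
    """The value to pass to /Z for a given cluster size in kilobytes."""
    return cluster_kb * 1024 // SECTOR

for kb in (4, 8, 16, 32):
    print(f"{kb}K clusters -> format x:/Z:{z_value(kb)}")

def slack_estimate_mb(file_count, cluster_kb):
    """Rough wasted space, assuming each file leaves about half a cluster empty."""
    return file_count * (cluster_kb * 1024 / 2) / 2**20

files = 20000  # hypothetical number of files on the partition
for kb in (4, 32):
    print(f"{kb}K clusters, {files} files: roughly {slack_estimate_mb(files, kb):.0f}MB of slack")
```

With lots of small files, the jump from 4K to 32K clusters costs real space, which is the kind of thing behind that 15-25% figure.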

The Promise card also lets you stripe more than one drive on each IDE channel; the IWill doesn’t. While I did not try to put more than two hard drives in a RAID configuration, those who did reported diminishing gains from it. Those of you who need to do such a thing should be buying the Promise card anyway, but this is a minority of a minority. Most people reading this will only use two drives, and for them, this particular feature is effectively useless.

The IWill does let you plug in an external IDE drive, but there’s not exactly a ton of those around.

I didn’t test the backup (RAID 1) abilities of these cards, but those who did found both reliable. However, the more I think about this, the less useful it seems to the average member of our audience. Please correct me if I’m wrong, but most desktop data disasters don’t occur because drives flat-out die; they occur because the hard drive gets trashed for some reason. If mirroring copies everything that happens, the mirrored disk will be just as trashed as the original.

How I tested

The cards were tested in a computer, the relevant components of which were:

  • PIII 550 overclocked to 733MHz (thanks to Neil at Proton Computers).
  • AOpen AX63Pro Via Apollo Pro 133 board
  • 256Mb of Crucial Micron PC133 memory
  • 2 13.5Gb IBM 22GXP hard drives.

Four configurations were extensively tested:

  • Single drives
  • “Modified” Promise Ultra66 card using an 8Kb stripe and 4K clusters (“Promise Little Stripe”) and BIOS/drivers version 1.14
  • The same using a 64Kb stripe and 32Kb clusters (“Promise Big Stripe”) and BIOS/drivers version 1.14.
  • IWill SideRAID66 card (asked IWill for stripe data, but they didn’t get back to me) and 4K clusters, BIOS/drivers version 1.05

Since the AX63Pro’s ATA66 support is dubious at best, I spot-checked the single drives using the Promise and IWill cards as ATA66 devices, and saw only a tiny (1-2%) improvement over the scores on the motherboard IDE channels.

All tests were run under both Win98SE and Win2000 using FAT32. NTFS was spot-checked, but did not make any appreciable difference.

In single drive configuration, a 3Gb partition was used for all Win98 testing on IDE 1; a 5Gb partition was used for Win2000 testing on IDE 2.

For RAID configurations, the 3Gb Win98 partition came first, followed by the 5Gb Win2000 partition.

Tests used:

  • Winbench 1.1: Business and High-End Disk Winmark, Disk Access Time, Disk CPU Utilization, Disk Transfer Rate
  • Winstone 1.3: Business Winstone and (for Win2000) High-End Winstone
  • Ed’s Recently Invented Photoshop Disk Test (see below for details)
  • Ed’s Even More Recently Invented Sound Forge Disk Test (also, see below)

I ran these tests until I was satisfied I had gotten the full range of variation. Since Disk Access, CPU Utilization and Transfer Rate were very consistent, I usually only ran them twice.

The other tests I ran at least three times each, with additional testing to explore the Winstone Effect.

Winbench, Winstone, and Content Creation: Win98

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Business Disk Winmark | 4130 | 4930 | 4950 | 5590
High End Disk Winmark | 13800 | 17300 | 17300 | 19100
AVS/Express | 9100 | 10100 | 10400 | 11300
FrontPage 98 | 88700 | 95500 | 99200 | 92800
Microstation | 13600 | 16000 | 15800 | 19200
Photoshop | 6920 | 9570 | 9340 | 10200
Premiere | 13600 | 17300 | 15700 | 20900
Sound Forge | 28400 | 36500 | 38900 | 40700
Visual C++ | 16900 | 23300 | 22500 | 23700
Business Winstone | 30.0 | 30.4 | 30.3 | 30.7
Content Creation | 31.1 | 31.0 | 31.2 | 31.9

Some observations:

  1. Any form of RAID is an improvement over a single drive, at least for the benchmark. (We will see that’s not necessarily so when you actually try to do something.)
  2. Big stripes and clusters do usually increase performance, but not always. Nor is the level of improvement consistent; Premiere and Microstation got much more of a boost than other apps.
  3. The Promise Small Stripe might have been at a slight disadvantage against the IWill because I suspect that the IWill has a bigger stripe size. Nonetheless, though the IWill usually came in second, the differences were usually insignificant. We will further test performance in Photoshop and Sound Forge later on.
  4. Faster hard drive throughput barely affects general system benchmarks.

The other Winbench numbers? Big Stripe had the fastest access times (about 9.8ms) and lowest CPU utilization (about 3.5%). Little Stripe had access times of about 10.5ms and CPU utilization of almost 6%. The IWill trailed both with access times of about 11ms, and CPU utilization of about 9.5%. Disk transfer rates were all about the same, generally a little less than 36Mb/sec.

The Winstone Effect

Do we get it when we go to RAID?

IWill SideRAID66

Benchmark | Pre-Winstone | Post-Winstone | Difference
Business Disk Winmark | 4940 | 5670 | 14.8%
High End Disk Winmark | 17300 | 19700 | 13.9%
AVS/Express | 10300 | 14200 | 37.9%
Microstation | 16100 | 19200 | 19.3%

Promise Little Stripe

Benchmark | Pre-Winstone | Post-Winstone | Difference
Business Disk Winmark | 4940 | 6260 | 26.7%
High End Disk Winmark | 17100 | 22000 | 28.7%
AVS/Express | 10200 | 20700 | 102.9%
Microstation | 15600 | 24900 | 59.6%

Promise Big Stripe

Benchmark | Pre-Winstone | Post-Winstone | Difference
Business Disk Winmark | 5570 | 6880 | 23.5%
High End Disk Winmark | 19300 | 20900 | 8.3%
AVS/Express | 11600 | 17600 | 51.7%
Microstation | 19200 | 23500 | 22.4%

Obviously, Benchmark Viagra works here, too, though much more so with Promise Little Stripe than with the IWill or its big brother.

Is every test affected as much? No. Sometimes, they even go down. The numbers for the first few HE tests tend to go up, and the numbers for the last few go down, but that’s not consistent. Nothing in this testing is broadly consistent.

Sound Forge, for instance, goes down a little bit on a single drive, drops 20% on Promise Small Stripe, but goes up a bit with IWill and Big Stripe.

Ed’s Homebrewed RAID benchmarks

I wasn’t too thrilled with the results I was getting from the commercial benchmarks after finding things like the Winstone Effect and comparing my results to others, so I decided that if RAID was supposed to be so good at moving big files around, I’d better do something that would measure big files being moved around.

Ed’s Photoshop Test

The test is pretty simple. Use PS5bench 1.11 (go HERE for more information) to generate a random 10Mb image, then repeatedly open and save it, using Photoshop’s timing mechanism to see how long it takes. Take that 10Mb image, increase the image size to 40Mb, 60Mb and finally 160Mb, and do the same thing. Finally, after you’re done with the 160Mb image, reopen and save the 10, 40, 60 and 160Mb files, and see if the open/save times are any different than when you were repeatedly opening and closing them.

I wasn’t concerned about anything other than opening and saving files because if you are seriously using Photoshop, you have enough memory to hold the image in memory without having to wait for virtual memory. In any case, the 160Mb file was meant to push the hard drive into using VM. Repeatedly opening and closing files (with one noted exception) did not make the process go faster; however, if you opened up a file after opening and closing others, the load time was significantly greater.
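If you want to reproduce the flavor of this test without Photoshop, a rough file-level stand-in looks like the sketch below. It just times raw writes and reads of similarly sized files, which is not the same thing as Photoshop's own open/save path, and the operating system's file cache will flatter the repeat numbers (which, as discussed above, is exactly the sort of effect you have to watch for):

```python
import os, time

def time_write_read(path, size_mb, passes=3):
    """Write a file of size_mb megabytes, then time repeated reads and rewrites of it."""
    data = os.urandom(1024 * 1024) * size_mb   # size_mb of data built from a random 1Mb block
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
    initial_save = time.perf_counter() - start

    opens, saves = [], []
    for _ in range(passes):
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read()
        opens.append(time.perf_counter() - start)

        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(data)
        saves.append(time.perf_counter() - start)
    return initial_save, opens, saves

for size in (10, 40, 60, 160):
    initial, opens, saves = time_write_read(f"testfile_{size}mb.bin", size)
    print(f"{size}Mb  initial save {initial:.2f}s  open {min(opens):.2f}s  save {min(saves):.2f}s")
```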

10 Mb file (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 0.4 | 0.4 | 0.4 | 0.3
Opens | 0.4 | 0.5 | 0.5 | 0.5
Saves | 0.3 | 0.3 | 0.3 | 0.3
Opens after full cycle | 1.2 | 0.8 | 0.9 | 0.7
Saves after full cycle | 0.3 | 0.3 | 0.3 | 0.3

No real difference here in repeated opens and closes; if anything, the RAID is a little slower in opening. To be fair, the Photoshop timing tool only measures tenths of a second, and when the action only takes a split second, a speed increase would only be hundredths of a second. However, the RAID arrays do bring the file back a bit faster than a single drive after it hasn’t been opened in a while.

40 Mb file (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 2.0 | 1.3 | 1.4 | 1.4
Opens | 1.8 | 1.8 | 1.8 | 1.8
Saves | 1.6 | 1.3 | 1.3 | 1.3
Opens after full cycle | 4.4 | 3.3 | 3.3 | 3.1
Saves after full cycle | 2.0 | 1.5 | 1.5 | 1.4

The RAID cards start to show some improvement, with Big Stripe doing a little better than the others.

60 Mb file (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 10.0 | 7.1 | 19.1 | 7.2
Opens | 6.3/2.8 | 5.6/2.8 | 5.3/2.8 | 5.0/2.8
Saves | 5.2 | 2.5 | 2.5 | 2.6
Opens after full cycle | 6.7 | 6.5 | 6.1 | 5.6
Saves after full cycle | 5.3 | 2.5 | 2.9 | 2.7

The “Opens” column needs some explanation. For some unknown reason, at 60Mb, the initial open took considerably longer than subsequent opens. This is the only time this took place; the Open numbers are extremely consistent across all situations at the other sizes.

Otherwise, much the same pattern as at 40Mb, except the IWill starts to beat the Promise card, even in Big Stripe mode.

160 Mb file (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 57.1 | 54.0 | 46.3 | 38.4
Opens | 27.5 | 19.3 | 20.3 | 18.4
Saves | 51.6 | 31.8 | 35.7 | 31.0
Opens after full cycle | 29.5 | 17.4 | 20.7 | 18.6
Saves after full cycle | 49.9 | 30.6 | 33.5 | 29.9

This is the one that separates the men from the boys. The IWill card does very well, even against Big Stripe, and usually beats Little Stripe.

In this case, the Winbench numbers are, in a sense, predictive, since the IWill card fared pretty well in the Photoshop portion of the test, and Little Stripe struggled.

But note a couple other things:

  • It takes some awfully big files for the RAID cards to show significant superiority. If you’re a professional graphics artist, you do use files this big. If you’re not, you probably don’t.
  • As the files get bigger, the loading and save times increase dramatically. Instead of seeing 35Mb-a-second transfers, or even the 10Mb/sec predicted by the benchmark, you see loads at more like 5Mb/sec, and saves at more like 3Mb/sec (see the quick arithmetic below). Even considering swaps in and out of VM, that’s quite a slowdown. Still a lot better than a single drive, but no miracles.
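The "effective throughput" arithmetic behind that last point is nothing fancier than file size divided by elapsed time. Using the 160Mb numbers from the table above:

```python
def effective_mb_per_sec(size_mb, seconds):
    return size_mb / seconds

# 160Mb Photoshop file, Win98 times from the table above
print(f"single drive open: {effective_mb_per_sec(160, 27.5):.1f} Mb/sec")   # ~5.8
print(f"single drive save: {effective_mb_per_sec(160, 51.6):.1f} Mb/sec")   # ~3.1
print(f"Big Stripe open:   {effective_mb_per_sec(160, 18.4):.1f} Mb/sec")   # ~8.7
print(f"Big Stripe save:   {effective_mb_per_sec(160, 31.0):.1f} Mb/sec")   # ~5.2
# Nowhere near the roughly 36Mb/sec the Disk Transfer Rate test reports.
```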

Ed’s Sound Forge Test

You probably don’t tinker around with 150Mb pictures too often. However, even if you just tinker with sound editing, you can’t help but deal with big files. So I decided after the IWill review that I needed either an audio or video benchmark.

I decided to use Sound Forge as my other real-life benchmark because:

  • It was part of the Winbench tests
  • I didn’t have videos around, but I did have CDs with music tracks I could use to test.
  • It was easy to find a demo of the program (Sound Forge 4.5 XP). 🙂

This benchmark has two parts. I took three album tracks from a CD and converted them into WAV files of about 50, 125, and 210 Mb respectively. The first part of the test simply opens the files.

Since the Sound Forge demo doesn’t allow you to save files, I had to come up with a rough equivalent that the demo would allow me to do. Fortunately, the demo has an Undo feature, which is enabled. So what I did was process a Reverb cathedral effect for the longest track with Undo enabled, then did an Undo and timed how long that took to process.

I tried opening files repeatedly and found that Sound Forge did something that the other benchmarks and my Photoshop homebrew did not. Either Sound Forge doesn’t really get rid of files, or it retains some great pointers, because reopening a file usually took only a small fraction of the time it took to initially open, as little as 10%. Load something else in the meantime, though, and that goes away. It’s not something I’m going to include in these benchmarks, but if you ever hear of complete songs loading into a sound program in a couple seconds, that’s probably what’s happening.
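If you want to see whether a program (or the OS) is quietly caching like that, the pattern is easy to check for: time the first open, an immediate reopen, and a reopen after touching something else. A minimal sketch (the file names are placeholders, and this measures the operating system's cache rather than Sound Forge's own behavior):

```python
import time

def timed_read(path):
    """Time how long it takes to read the whole file."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - start

track, other = "track_210mb.wav", "other_big_file.bin"  # any two large files will do

first  = timed_read(track)   # cold(ish) open
repeat = timed_read(track)   # immediate reopen: caching makes this much faster
timed_read(other)            # load something else in between
after  = timed_read(track)   # reopen after the cache has been disturbed

print(f"first {first:.2f}s, immediate reopen {repeat:.2f}s, after other I/O {after:.2f}s")
```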

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
50Mb Open | 14.108 | 5.512 | 5.329 | 6.277
125Mb Open | 35.635 | 14.674 | 22.275 | 19.325
210Mb Open | 59.981 | 26.094 | 26.996 | 25.327
210Mb Undo | 124.6 | 64.1 | 71.8 | 64.78

RAID does very well here, though Winbench predicts comparative performance badly. The IWill card trails in Winbench but wins overall here. No 35 or 40Mb/sec transfer speeds here. Not even transfers three times faster than in Photoshop, as the benchmark would have you believe.

Tomorrow, I’ll do the same thing over again for Win2000. Some more surprises.

Email Ed


Might as well get right to it:

Winbench: Win2000

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Business Disk Winmark | 6050 | 7830 | 8030 | 8380
High End Disk Winmark | 13800 | 19000 | 19100 | 19000
AVS/Express | 18300 | 19700 | 21600 | 19900
FrontPage 98 | 117000 | 134000 | 135000 | 113000
Microstation | 24200 | 30600 | 32200 | 32300
Photoshop | 6470 | 9390 | 9090 | 8900
Premiere | 11000 | 18400 | 17300 | 17900
Sound Forge | 11700 | 16200 | 16300 | 16800
Visual C++ | 14100 | 18000 | 18700 | 19300

Yes, the IWill card did much better here than it did in Win98, and Promise Big Stripe did much worse. There was also a lot more variation across the board than I found in Win98. But what you need to get from this, if you don’t know it already, is how these scores compare to what they were for Win98. Let’s take the IWill numbers in Win98 and compare them to its numbers for Win2000 (the same trend holds true for the others):

Benchmark | Win98 score | Win2000 score | W2000 score compared to W98 score
Business Disk Winmark | 4930 | 7830 | 59% Better
High End Disk Winmark | 17300 | 19000 | 10% Better
AVS/Express | 10100 | 19700 | 95% Better
Front Page 98 | 95500 | 134000 | 40% Better
Microstation | 16000 | 30600 | 91% Better
Photoshop | 9570 | 9390 | 2% Worse
Premiere | 17300 | 18400 | 6% Better
Sound Forge | 36500 | 16200 | 56% Worse
Visual C++ | 23300 | 18000 | 23% Worse

Not too good if you’re using Sound Forge in Win2000. Do you see how relying on a grouping of benchmarks isn’t such a good idea if you primarily use just one of them?

My little Homebrewed Sound Forge benchmark confirms that things do indeed move a lot more slowly in Win2000 than they do in Win98. However, Winbench indicates things are about the same in Photoshop, and my little Photoshop test doesn’t quite agree.

More on this later.

Winstone Effect Here, Too?

Yes, but not as much as with Win98. Since I’m beating you to death with charts, let’s just say that the usual suspects are still most susceptible, but the effect is 15-30% at worst, and usually about 5-10%.

Winstone and Content Creation Scores:

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Content Creation 2000 | 39.2 | 38.0 | 38.0 | 38.2
Business Winstone 99 | 42.4 | 43.5 | 43.4 | 43.6
High End 99 | 47.1 | 47.5 | 47.4 | 47.4
AVS/Express | 4.73 | 4.12 | 4.15 | 4.15
FrontPage 98 | 4.20 | 4.15 | 4.16 | 4.16
Microstation | 5.36 | 5.55 | 5.53 | 5.53
Photoshop | 7.12 | 6.89 | 6.87 | 6.96
Premiere | 3.96 | 4.01 | 4.01 | 4.03
Sound Forge | 3.66 | 4.25 | 4.13 | 4.13
Visual C++ | 5.41 | 5.50 | 5.55 | 5.54

Hmmmm. Content Creation scores higher without RAID? That seems odd, but all the other tests tell us not to expect more than a couple of percentage points improvement at best from RAID.

But look at the scores; only two get affected a lot. AVS/Express goes down a lot when RAID is around, Sound Forge goes up a lot. Both are programs that use very big files (AVS Video, SF audio). Yet they went in different directions.

You may say, “But Ed, the Winstone and Content Creation scores are much higher than in Win98.” This is true. From the High-End Winstone, though, you can see a few sets of numbers jump up more than the others, and, more importantly, jump down too. At least we get a breakdown for High End; we don’t for the others.

Ed’s Photoshop Test

Same test as in Win98, but much different results.

10 Mb File (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 0.9 | 0.4 | 0.4 | 0.4
Opens | 0.5 | 0.4 | 0.4 | 0.4
Saves | 1.0 | 0.5 | 0.4 | 0.4
Opens after full cycle | 1.1 | 0.7 | 0.7 | 0.7
Saves after full cycle | 0.9 | 0.4 | 0.4 | 0.4

40 Mb File (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 3.4 | 1.8 | 1.8 | 1.8
Opens | 2.0 | 1.8 | 1.7 | 1.7
Saves | 3.5 | 1.8 | 1.9 | 1.8
Opens after full cycle | 3.8 | 2.7 | 2.7 | 2.4
Saves after full cycle | 4.1 | 1.8 | 1.8 | 1.9

60 Mb File (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 23.1 | 11.0 | 13.7 | 11.3
Opens | 3.1 | 3.0 | 2.6 | 2.5
Saves | 5.4 | 3.1 | 2.8 | 2.7
Opens after full cycle | 4.8 | 3.9 | 3.9 | 3.6
Saves after full cycle | 5.4 | 2.7 | 2.8 | 2.7

160 Mb File (all times in seconds)

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
Initial Save | 87.3 | 44.1 | 49.5 | 45.0
Opens | 46.9 | 24.6 | 28.3 | 28.2
Saves | 95.8 | 46.8 | 55.9 | 49.7
Opens after full cycle | 52.7 | 27.4 | 30.6 | 30.9
Saves after full cycle | 93.1 | 45.9 | 56.0 | 47.3

Obviously, any version of RAID improves matters dramatically. The Winbench Photoshop numbers said that the IWill card was best, but that didn’t really prove to be the case until we got to 160Mb. But let’s compare the times a single drive took to accomplish these tasks in Win98 as opposed to Win2000:

Single Drive–Win98/2000 Comparison

10 Mb File

Test | Time in Win98 | Time in Win2000 | Difference
Initial Save | 0.4 | 0.9 | 125% More Time
Opens | 0.4 | 0.5 | 25% More Time
Saves | 0.3 | 1.0 | 233% More Time
Opens after full cycle | 1.2 | 1.1 | 8% Less Time
Saves after full cycle | 0.3 | 0.9 | 200% More Time

40 Mb File

Test | Time in Win98 | Time in Win2000 | Difference
Initial Save | 2.0 | 3.4 | 70% More Time
Opens | 1.8 | 2.0 | 11% More Time
Saves | 1.6 | 3.5 | 118% More Time
Opens after full cycle | 4.4 | 3.8 | 15% Less Time
Saves after full cycle | 2.0 | 4.1 | 105% More Time

60 Mb File

Test | Time in Win98 | Time in Win2000 | Difference
Initial Save | 10.0 | 23.1 | 131% More Time
Opens | 6.3/2.8 | 3.1 | 50% Less/10% More Time***
Saves | 5.2 | 5.4 | 4% More Time
Opens after full cycle | 6.7 | 4.8 | 28% Less Time
Saves after full cycle | 5.3 | 5.4 | 2% More Time

***See Part II for explanation of split times for Win98.

160 Mb File

Test | Time in Win98 | Time in Win2000 | Difference
Initial Save | 57.1 | 87.3 | 53% More Time
Opens | 27.5 | 46.9 | 70% More Time
Saves | 51.6 | 95.8 | 86% More Time
Opens after full cycle | 29.5 | 52.7 | 79% More Time
Saves after full cycle | 49.9 | 93.1 | 86% More Time

All RAID essentially does is get you back roughly to where you would have been with a single drive in Win98.

Ed’s Sound Forge Test

Test | Single Drive | IWill SideRAID | Promise Little Stripe | Promise Big Stripe
50Mb Open | 26.5 | 19.5 | 22.6 | 13.9
125Mb Open | 69.9 | 49.2 | 57.4 | 33.3
210Mb Open | 116.4 | 81.9 | 95.3 | 55.6
210Mb Undo | 194.4 | 125.4 | 148.3 | 145.5

Promise Big Stripe usually rules, but IWill pretty easily disposes of Little Stripe for second and wins the Undo. Again, though, this is really an OS story.

Single Drive–Win98/2000 Comparison

Test | Time in Win98 | Time in Win2000 | Difference
50Mb Open | 14.1 | 26.5 | 88% More Time
125Mb Open | 35.6 | 69.9 | 96% More Time
210Mb Open | 60.0 | 116.4 | 94% More Time
210Mb Undo | 124.6 | 194.4 | 56% More Time

Tomorrow, I’ll take a look at some of the other benchmarking done on these products, and see how differences in hard drives change the results.

Email Ed

