Battleship Mtron - Solid State Raid 0 Performance Explored using the Mtron 16GB Pro
12/13/2007
Author: Dominick V. Strippoli

Guys, the original article can be viewed on my domain here: http://www.nextlevelhardware.com
As usual, you and XS are the first people to get a copy of the review.
As a prerequisite to this article, reading last week's Mtron Professional 16GB Solid State Drive review will give you great insight into current high end solid state technology. Thanks to our good friend Shawn at NeoStore.com, we had an additional 7 drives shipped overnight for this review. So, as you can see in the "Battleship Mtron" picture above, our test bench had a total of 9 x Mtron 16GB SSDs (solid state drives) in Raid 0. As you can imagine, a drive setup like this can cost upwards of $7000 at the present time. But, for all intents and purposes, we jumped onto this technology fast at NLH. We decided, heck, why not try and test the theoretical maximum throughput of these SSDs? Yes, I know exactly what you're thinking: who in their right mind would have a drive setup like this? This is what we do here at NextLevelHardware.com. We take hardware to the absolute maximum. When something is already fast enough, we try and make it faster.

Before we begin with the review at hand, here are a few short takeaways from last week's Mtron 16GB single drive review:

- The Mtron 16GB Pro SSD became the fastest SATA drive in the world in read operations and general usage, as compared to the WD Raptor 150. The drive produced a staggering 111 MB/s sustained read and a 0.1 ms access time. Read performance in real world scenarios improved by incredible multiples compared to the Raptor, while NAND flash based sustained write and short write operations still trailed the WD Raptor 150 by up to 23% and were the only negative of the article.
- Mtron highly recommends using the NVIDIA 680i chipset or a pure hardware Raid controller for maximum performance with the drive. There is an apparent Intel ICH9R throttling issue, and the drive's sustained transfer is capped at 81 MB/s when using Intel motherboards.
- Current pricing on the Mtron Pro line is still stuck at $50 per gigabyte, or $799 per drive. Ouch!

Today we are going to be scaling the Mtron 16GB Pro under Raid 0 on a pure hardware raid controller. Throughout the review we will have an assortment of drive types and controllers at our disposal, and we will come across bandwidth limitations that we never knew existed until today. I must say, this drive setup has definitely made my PC end-user experience incredible. One word currently popping into my head to describe it is: ludicrous!
Our test bed consists of:
Intel QX9650 Processor, Corsair Dominator DDR3-1800, Gigabyte X38T-DQ6, Sapphire HD 2900 XT, Silverstone OP1000
Raid Controllers Used: Areca ARC-1220, Areca ARC-1231ML
Alternate HDDs Used: Maxtor DiamondMax 300GB IDE, WD Raptor 150 SATA
For our first boot we obviously chose to start off with the lightest combo: a 2 drive Raid 0 setup on the Areca 1220 hardware Raid controller. Our HDTach benchmark results displayed almost perfect 2x scaling across the drives. As you can see, our sustained read was a hair under 240 MB/s (two drives x 120 MB/s), which would be absolutely perfect Raid 0 scaling.
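Before going further, the expected numbers are worth spelling out. Here is a minimal sketch of ideal Raid 0 read scaling, using nothing but the drive's 120 MB/s rated sustained read:

```python
# Ideal Raid 0 sequential read: stripes are read from all member drives
# in parallel, so sustained throughput multiplies with drive count.
PER_DRIVE_MBPS = 120  # Mtron 16GB Pro rated sustained read

def ideal_raid0_read(drives: int) -> int:
    """Best-case sustained read in MB/s, ignoring controller limits."""
    return drives * PER_DRIVE_MBPS

for n in (1, 2, 5, 9):
    print(f"{n} drive(s): {ideal_raid0_read(n)} MB/s")
# 2 drives -> 240 MB/s, which is exactly what HDTach shows above.
```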
Now comes the interesting part, where I promised to tell you about bandwidth limitations with these drives. The Mtron Pro SSDs are so powerful, speaking in terms of read performance and IOPS (input/output operations per second), that current generation hardware Raid controllers are being stressed to their absolute maximum by them. Since these hardware Raid cards sit in PCIe x8 slots and run with either full x4 or x8 lane connectivity, the cards have plenty of theoretical bus bandwidth. However, the onboard processor differs between the midrange ARC-1220 and the high end ARC-1231ML. The card we initially used for the review was an Areca ARC-1220 with the 400 MHz Intel IOP333 processor. Take a quick look at our next HDTach shot of 5 drives in Raid 0 and then we can do more explaining:
Since we already know that these Mtron SSDs can scale in almost perfect multiples under Raid 0, something is definitely wrong with the picture above. Five drives put out only 386 MB/s sustained read when we should easily be anywhere from 550 to 600 MB/s. After countless hours of research on the Areca 1220, I finally stumbled across a gentleman's very informative post on one of the hardware review forums explaining the theoretical maximum throughput of the ARC-1220 controller. The limitation happens to be right around 400 to 450 MB/s on the 1220. I had one of my suppliers overnight me an Areca 1231ML and junked the 1220 immediately. The Areca 1231ML is an otherwise identical raid controller; the differences are upgradeable cache, 12 SATA ports, and, the #1 reason for upgrading, the high end 800 MHz Intel IOP341 processor onboard. I simply plugged my existing array into the new controller and BOOM. Look what magically appeared:
Right off the bat, using the Areca 1231ML and the same 5 drives, sustained read went up to 608 MB/s and burst jumped a couple hundred points to 1200 MB/s. This means our old controller had us capped at 400 MB/s of throughput and took away an extra 200 MB/s of untapped power from the Mtron units. However, since I fully intended to achieve close to 1 GB/s of throughput with 9 or 10 drives, I did some research on the Areca 1231ML as well. It turns out not too many people run into this kind of problem using current generation mechanical HDDs; single consumer level raid controllers are not usually expected to scale to 800 or 1000 MB/s of sustained throughput. But the only information I could find on the 1231ML led me to believe it runs out of steam right around 800 MB/s. So, again, this was more bad news for me, but I decided to continue with my testing anyway. Our final drive setup was the extremely expensive, yet impressive, 9 x Mtron 16GB Professionals in Raid 0. Here is our bandwidth shot using HDTach:
Again you can see that we have hit another limitation, this time on the expensive and high end Areca 1231ML. But this time I am not mad, nor curious, nor doing any more research on the subject! Our limitation is once again a maxed-out controller processor, in this case the enterprise praised IOP341. With an uncapped controller we should theoretically be at 1100 MB/s sustained read right now, which would uncork an additional 300 MB/s out of our current setup. The article will have to make do with only 830 MB/s sustained read. Later in this article you will see how the 9 drive cap affects high level server performance in IOMeter, but for now let us move into synthetic and real world performance testing. We will follow a very similar outline to our single drive review while adding the rest of the drives into the test mix. Our synthetic testing is self explanatory, but as usual for my real world testing I always like to include a small disclaimer: Next Level Hardware uses the traditional stopwatch timing technique for real world performance measurement. So, although our results will be as accurate as humanly possible, you always have to factor in a slim margin of error.
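To tie the three HDTach runs together, the whole controller saga reduces to a few lines of arithmetic. Below is a rough model of my own, using cap figures implied by our measurements rather than official Areca specs; note the PCIe bus itself was never the bottleneck, since even a first generation x8 link carries roughly 2 GB/s per direction:

```python
# Controller-limited Raid 0 read model: throughput scales linearly with
# drive count until the raid card's IOP processor runs out of steam.
PER_DRIVE_MBPS = 121.6   # per-drive multiple measured on the ARC-1231ML
IOP333_CAP = 400         # approximate ceiling observed on the ARC-1220
IOP341_CAP = 830         # approximate ceiling observed on the ARC-1231ML

def capped_read(drives: int, cap_mbps: float) -> float:
    """Sustained Raid 0 read in MB/s behind a throughput-capped controller."""
    return min(drives * PER_DRIVE_MBPS, cap_mbps)

for n in (2, 5, 9):
    print(f"{n} drives: ARC-1220 ~{capped_read(n, IOP333_CAP):.0f} MB/s, "
          f"ARC-1231ML ~{capped_read(n, IOP341_CAP):.0f} MB/s")
# 5 drives: ~400 MB/s on the 1220 (we measured 386) vs 608 MB/s on the 1231ML.
# 9 drives: ~830 MB/s on the 1231ML, well short of the uncapped ~1094 MB/s.
```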
Our first test will be booting Windows Vista Ultimate Edition 32-bit. The timed reading you see in the screenshot is the average of 5 startups and shutdowns on each drive setup. Vista boot time is measured from the moment the first bar moves on the Vista loading screen, and timing is stopped when the hourglass on the mouse cursor finishes loading services and resident programs on the desktop.
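For anyone curious how the stopwatch readings become a single reported figure, it is nothing fancier than the sketch below (the sample values are hypothetical, purely for illustration):

```python
# Averaging repeated stopwatch readings; the half-spread between the
# fastest and slowest runs serves as a rough margin of error.
readings_sec = [22.4, 23.1, 22.8, 22.6, 23.0]  # hypothetical 5-run sample

mean = sum(readings_sec) / len(readings_sec)
margin = (max(readings_sec) - min(readings_sec)) / 2

print(f"Boot time: {mean:.1f} s +/- {margin:.1f} s over {len(readings_sec)} runs")
```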
Again, as stated in the single drive review, you can clearly see that sheer random file reads, access time and drive latency are what drive operating system boot speed. We are seeing identical boot loading times when scaling under Raid 0 as compared to the single Mtron 16GB drive.

Single Drive Vista Boot Up Video (Incredible Video)

I have always liked analogies relating cars to computers, because drag racing happens to be a huge interest of mine outside of the technology and hardware world. Application loading and boot performance for the most part received the greatest benefit from the single Mtron, just as I predicted. You will notice the largest benefit comes from swapping out your single mechanical rotating HDD for an SSD when initiating random file reads and app loading. But when raiding SSDs you are only adding horsepower, not torque, if you can follow the logic. Here it comes: horsepower is the speed at which you hit the wall; torque is how far you blast through the wall and how much damage you inflict on it. In drag racing, torque is what gets you off the starting line and through the first 60 feet of the race; horsepower is what wins the race up top. Now place that analogy into computers: imagine torque = access time/latency and horsepower = sustained read. A single SSD will give you a below 0.1 ms access time, so your torque is going to create the snappy feeling and instantaneous file loads. Now, even though you are adding additional horsepower (more drives in Raid 0), your torque remains the same. So unless you are loading apps/games with heavier duty file loads, i.e. larger chunks of files being read rather than lots of small ones, the extra horsepower will not help the load operation. For the most part the Raid 0 array scaled incredibly and Windows feels much nicer as a total package, but specific game/app load times are not something you should be jumping for joy about in a Raid 0 array, and definitely not why you should be adding that second Mtron to your arsenal. However, as discussed previously, a single Mtron 16GB SSD can improve OS boot speed by up to 130% in some cases when using Vista.
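If you prefer numbers to analogies, the torque/horsepower point can be written down directly: load time is roughly (file requests x access latency) + (total data / sustained read), and only the second term shrinks as you add drives to the stripe. Here is a quick sketch; the workload figures are invented for illustration, and the Raptor numbers are typical values rather than ones measured in this article:

```python
# Torque vs horsepower, quantified: the seek/latency term is fixed by the
# drive technology and does not improve in Raid 0; only streaming does.
def load_time(file_hits: int, total_mb: float, access_ms: float,
              read_mbps: float) -> float:
    """Approximate load time in seconds for a mixed read workload."""
    return file_hits * access_ms / 1000 + total_mb / read_mbps

# Hypothetical app load: 2000 small file hits totalling 300 MB.
print(load_time(2000, 300, 8.9, 84))    # WD Raptor 150: ~21.4 s
print(load_time(2000, 300, 0.1, 120))   # 1 x Mtron:     ~2.7 s
print(load_time(2000, 300, 0.1, 830))   # 9 x Mtron:     ~0.56 s
# The single SSD kills the 17.8 s of seeking outright; adding eight more
# drives only shaves the streaming term, which is why app loads alone do
# not justify the second (or ninth) Mtron.
```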
Our next series of tests are all synthetic performance tests. The first measurement is recorded using HDTach by Simpli Software, a tool that measures raw hard drive sustained read and access time.

As you can see, the raw power of the Mtron Solid State Drive is apparent in HDTach. It shows an average read increase of almost 30 MB/s over the mechanical WD Raptor 150 in a single drive configuration, and the scaling under Raid 0 is just astounding. Scaling 5 drives to 608 MB/s means that, given a proper pure hardware raid controller, these drives scale in 121.6 MB/s multiples, actually outperforming the Mtron rated spec of 120 MB/s sustained read. On the other hand, while the 9 drive Raid 0 setup's 830 MB/s sustained read is a pretty number that will pounce on anything else currently on the market, it is just not what we were expecting. Theoretically these drives have an additional 300 MB/s untapped and waiting to be ripped out, but unlocking it would take an ultra premium enterprise level controller capable of more than 800 MB/s of throughput. Once again, the most important number you should be looking at is random access time: NAND solid state improves the snappiness of random file reads by over 80 times compared to the WD Raptor 150.
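The per-drive arithmetic above is easy to verify from the measured totals alone; the Raptor access time below is a typical figure for that drive rather than one quoted in this article:

```python
# Per-drive effective throughput, derived from the measured array totals.
print(608 / 5)   # 121.6 MB/s per drive on 5 drives: above the 120 MB/s spec
print(830 / 9)   # ~92.2 MB/s per drive on 9 drives: the fingerprint of a cap

# Access time ratio vs the WD Raptor 150 (~8-9 ms seek vs ~0.1 ms):
print(8.9 / 0.1) # ~89x, consistent with "over 80 times" snappier random reads
```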
Just to confirm these numbers, I have also used Lavalys Everest Diskbench to complement HDTach. Later on we will use the IOMeter test suite as well for server level performance.