
SATA x4 2TB+ RAID 0, PCIe 4x-8x hardware RAID?


Psycogeec

Member
Joined
Sep 15, 2006
I am going through tons of specs trying to find a low-cost internal RAID card for my older computer, and I've got 3 headaches :) and still no card.

The Specs
A) 4x 2TB Seagate SATA pseudo-green drives (supposedly RAID capable)

B) Asus P5B Deluxe board with 2 slots available: the small 1x and the 4x video slot
http://www.asus.com/Motherboards/Intel_Socket_775/P5B_Deluxe
(specs for motherboard)

C) Future life (the ability for this same controller to still work 2 years from now, with stuff like SATA3 and larger disks).

D) Use: video editing, and storing audio and video stuff. It is very rare for my system to have millions of tiny files; usually they are bigger files, and I move stuff around a lot, even though it could have stayed parked.

Here is some of the BS that is making this difficult:

1) Faux RAID versus something with a controller, a processor, and a bit of fast memory buffer. The marketing jerks don't want to identify the software-based stuff, so I can't figure out which is which.
(Note: the software ones are still usable, and I might even use one, but it is often a night-and-day real-world issue WHEN my CPUs are being used 100%, and much of the buses and other operations are going continually, AND I am moving data, and surfing, and watching a show, and . . .)

2) 2+ terabyte drives, and creating 2TB arrays that an older system like XP can handle (without more failure points).
The controller itself has to support 2+TB drives, then make arrays from larger combinations of drives (of course), then be able to hand the OS some normal-looking chunks that it won't freak out about (2TB or less).
I was able to pull this off via the Intel onboard, thanks to the way they provide control of the arrays via portions of the disk used for the array.
My drives today are only 2TB, but if I have to buy a good usable card, then it will need 64-bit LBA support or whatever it needs to handle tomorrow's technology.

3) Conflicts with the onboard RAID: my board should not have this problem with most of the controllers. My board uses an ICH chipset for its RAID stuff.

4) Control at boot time. The not-so-cheap HighPoint 640 (for example) requires some BS messing with the hardware in DOS to control who gets the boot: the onboard RAID or the add-in card. (Insert various swear words for how insane it is, in this day and age, to have such poor control.)
I should be able to boot to either at will, because having 2 RAIDs (like before), I always had 2 boots, so I could jump to the backup (controller and disks) if ever any part of one died.

5) Proper "negotiation" of PCIe when there are fewer lanes. The PCIe specs indicate that an 8x type of RAID card SHOULD work fine in a slot that only has 4 lanes available, but you and I both know that people aren't following specs and proper rules and doing stuff by the book. My second long slot only has 4 lanes, not 8.

6) Money. OK, it's the money, isn't it ;) I was hoping to find something for around $200, but it doesn't look like that is possible. Not gonna find something USED, because it won't be the new technology :-(

7) SSD: not a big deal. I don't have enough money for 10 terabytes of SSD, so something I want is going to be slow. A single SSD is good and fast, so I am not worried about it now, and so far purposeful integration of SSD MIXED with HD, to get the best of both worlds, is not well done yet. So how it works with SSD is not an issue for me.

8) It isn't a net server; I use a backup (not mirroring). It is not for Linux or Unix or Mac, just Winders: XP now, maybe W7 later. I don't need RAID 10 or 5 or 6, just simple RAID 0 striped for speed. Usually 2x2 and 2x2, as often 3x and 4x doesn't get much extra speed versus the risk.
 
If you want RAM buffer, dedicated processor, etc., you're going to be looking at $300 minimum. If you want to move up from 4 drives to 8 drives, then that goes up to $500. That's for retail boxed, of course. You may be able to find OEM and used pulls on eBay for far cheaper (I see plenty of Dell PERC 5i for around $100 on eBay).
If you settle for software RAID, see this thread.
 
I only need 4 drives, as I still have the onboard. I am thinking that a combo of both onboard and external could be useful for copying from one to the other.
Before, when I had 2 separate "controller" items, I was able to get from-to copy speeds that were very good, versus copying-back-to-itself type of stuff.
When editing video, I would use one as the "source" and the other as output, and it was WAY faster with old PATA drives than even this onboard Intel RAID I have now.

I had read about the cost effectiveness of these PERC items before,
but I still don't know a thing about them, or whether any of them support this whiz-bang SATA3 or will do a 3TB drive, which I almost bought this week, but avoided to reduce one problem.
Lots of them are available used and cheap, but there's not enough simple user data to understand which one would actually be hardware based, and compatible with "new" stuff (or futureproof).

Say I went to a $300 :-( card of some sort; which one should I get for my specs?

Crappy RAID has been jerking my chain for too long. I gotta do something to get back to hardware RAID, like I had 3 other times. The software stuff is always "slow", or slows the computer when needed most, and it doesn't seem to be all about the CPU use either; it is something that these system monitors don't show.
 
Here is some of the BS that is making this difficult:

1) Faux RAID versus something with a controller, a processor, and a bit of fast memory buffer. The marketing jerks don't want to identify the software-based stuff, so I can't figure out which is which.
(Note: the software ones are still usable, and I might even use one, but it is often a night-and-day real-world issue WHEN my CPUs are being used 100%, and much of the buses and other operations are going continually, AND I am moving data, and surfing, and watching a show, and . . .)
The easy way to tell is to look for a processor with a heatsink and onboard/external RAM. The other option is to just ask here or check online.

2) 2+ terabyte drives, and creating 2TB arrays that an older system like XP can handle (without more failure points).
The controller itself has to support 2+TB drives, then make arrays from larger combinations of drives (of course), then be able to hand the OS some normal-looking chunks that it won't freak out about (2TB or less).
I was able to pull this off via the Intel onboard, thanks to the way they provide control of the arrays via portions of the disk used for the array.
My drives today are only 2TB, but if I have to buy a good usable card, then it will need 64-bit LBA support or whatever it needs to handle tomorrow's technology.
Without GPT, I don't think 32-bit XP can use a partition larger than 2 TB. I don't have any experience with this, though, as I upgraded long before this was an issue.
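For anyone curious where the 2 TB wall comes from: MBR stores partition offsets and sizes as 32-bit sector counts, so with standard 512-byte sectors the math works out like this (a quick sketch, not XP-specific code):

```python
# MBR stores partition start/size as 32-bit sector counts,
# so the largest addressable region is 2^32 sectors.
SECTOR_BYTES = 512
max_sectors = 2**32

max_bytes = max_sectors * SECTOR_BYTES
print(max_bytes)              # 2199023255552 bytes
print(max_bytes / 1024**4)    # exactly 2.0 TiB
print(max_bytes / 1000**4)    # ~2.2 "drive label" TB
```

GPT uses 64-bit LBAs, which is why a single partition over 2 TiB needs GPT; multiple volumes that each stay under the limit sidestep it.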

3) Conflicts with the onboard RAID: my board should not have this problem with most of the controllers. My board uses an ICH chipset for its RAID stuff.
A RAID card should not interfere with onboard. Granted, this will vary between boards.

4) Control at boot time. The not-so-cheap HighPoint 640 (for example) requires some BS messing with the hardware in DOS to control who gets the boot: the onboard RAID or the add-in card. (Insert various swear words for how insane it is, in this day and age, to have such poor control.)
I should be able to boot to either at will, because having 2 RAIDs (like before), I always had 2 boots, so I could jump to the backup (controller and disks) if ever any part of one died.
I've never heard of this being an issue. When the computer POSTs, the LUNs are passed to the BIOS and it should be an option to select whatever drive or array is on the system. For example, I can configure my server to boot to the RAID 1 array for the OS, the RAID 6 array for the data or a single drive that is plugged into the system.

5) Proper "negotiation" of PCIe when there are fewer lanes. The PCIe specs indicate that an 8x type of RAID card SHOULD work fine in a slot that only has 4 lanes available, but you and I both know that people aren't following specs and proper rules and doing stuff by the book. My second long slot only has 4 lanes, not 8.
I have not heard of this issue. The card should drop back to however many lanes the slot provides; otherwise it wouldn't work at all. Again, this depends on the motherboard/card. If they are poorly built, they will have poor results.
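For what it's worth, a back-of-envelope on whether x4 even matters here (the per-lane and per-drive rates below are ballpark assumptions for PCIe 1.x and 7200 RPM platter drives, not measured figures for this board):

```python
# Approximate PCIe 1.x usable bandwidth per lane, and a
# ballpark sequential rate for a spinning drive.
LANE_MBPS = 250          # PCIe 1.x, after 8b/10b encoding
DRIVE_MBPS = 130         # rough outer-track sequential figure

lanes = 4                # what the second long slot provides
drives = 4

slot_bw = lanes * LANE_MBPS          # 1000 MB/s
array_peak = drives * DRIVE_MBPS     # 520 MB/s

print(slot_bw, array_peak, slot_bw >= array_peak)
```

So even if an 8x card trains down to x4, the slot still has roughly double the bandwidth four platter drives can generate.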

6) Money. OK, it's the money, isn't it ;) I was hoping to find something for around $200, but it doesn't look like that is possible. Not gonna find something USED, because it won't be the new technology :-(
Are you going to expand in the future past 8 drives? If not, check out a Perc 5/i. These are normally found in Dell servers, but they work in desktop systems. It is a real RAID card, as well. You can find them for $80+, depending on what it comes with. If you can find one with a battery backup and cables for around $120, get it. You can see my performance review on the front page. I used one card in my server for 2 years and two cards in my server for around a year. They work great, are cheap and are fairly common. The only downside is there is no way to expand past the 8 drive limit.

7) SSD: not a big deal. I don't have enough money for 10 terabytes of SSD, so something I want is going to be slow. A single SSD is good and fast, so I am not worried about it now, and so far purposeful integration of SSD MIXED with HD, to get the best of both worlds, is not well done yet. So how it works with SSD is not an issue for me.
If you do go the SSD route, put the disk on the motherboard directly, so you have TRIM support.

8) It isn't a net server; I use a backup (not mirroring). It is not for Linux or Unix or Mac, just Winders: XP now, maybe W7 later. I don't need RAID 10 or 5 or 6, just simple RAID 0 striped for speed. Usually 2x2 and 2x2, as often 3x and 4x doesn't get much extra speed versus the risk.
If you are storing any data that you care about, this is a very bad idea. If you are going to be accessing the array over the network, I'd suggest RAID 5/6/10. My RAID 6 array is faster than 500 MiB/sec read/write, which exceeds the speed of my network multiple times.
 
Without GPT, I don't think 32-bit XP can use a partition larger than 2 TB

32-bit XP indeed cannot.
That is just the thing: I am running one right now, thanks to the way the Intel "Matrix" type works. They said it CANNOT be done, but I am even booting to it.
I teamed up 2x 2TB drives for a 4TB set, then told the thing to form 2 arrays from that, each using half of the disks; both arrays are less than 2TB. That is all Windows sees or knows.
(In this case the BIOS is very important; without it setting up the array prior to the OS, it would not work.)
I took the risk that Intel could pull it off, and a day later it actually worked :) amazing.
BUT
How likely is it that any other RAID controller could do that? Especially an older one with minimal user control. Sure, they can talk to the WEB, and send you an e-mail, and do things that a Unix web server needs, but I don't need that stuff.
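The Matrix-style trick described above is just arithmetic: carve the big striped set into volumes that each stay under the 32-bit limit. A hypothetical sketch (drive sizes in decimal bytes, as vendors label them):

```python
# Two 2 TB (decimal) drives striped = 4 TB raw; split the set into
# two equal volumes and check each against the MBR ceiling.
DRIVE_BYTES = 2 * 1000**4          # one "2 TB" drive as sold
MBR_LIMIT = 2**32 * 512            # 2 TiB = 2,199,023,255,552 bytes

total = 2 * DRIVE_BYTES            # the 4 TB striped set
volumes = [total // 2, total // 2] # two equal arrays on the same disks

print(all(v <= MBR_LIMIT for v in volumes))   # each chunk is safe for XP
```

Each volume lands at 2,000,000,000,000 bytes, comfortably under the 2,199,023,255,552-byte ceiling, so XP never sees anything it can't address.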

A RAID card should not interfere with onboard.
Yes, it should not ever, but when they use the same "chipset", some of them will. Also, in the example of putting in 2 RAID cards that would use the same driver: a person would think they would be happy to sell 2, and make proper ways to accomplish such tasks, but it doesn't always happen.

This is true with many card items that use the same chipset and same driver but have different packaging. Just try to get 2 of the exact same cheap plug-in cards of various types. Often they are just poor "implementations" of some chipset they plopped on a board and pumped out in China for a profit; they don't even make the driver anymore.
Also, the "BIOS memory size" issue can come into play with other BIOS-based chipsets. With multiple BIOSes loading, they have always had a memory size issue, sometimes addressed by smaller-footprint BIOSes. A smaller footprint could mean fewer features, or cheap tricks to get it to work. (Good luck firmware-updating when the computer won't even boot.) This is still a problem on even newer hardware running W7.

And other stuff that you would think, in this day and age, they would both know about and have addressed properly.

That is what makes user recommendations so important. A Perc 5/i might be the greatest thing in the world for a Linux server and 40 aging drives parked in a corner talking to its user via e-mail, but will it work with 3TB drives, provide the array control I need, and be cool with the computer being used for something OTHER than serving web?
 
Without GPT, I don't think 32bit XP can use larger than 2 TB partitions

That is just the thing: I am running one right now, thanks to the way the Intel "Matrix" type works. They said it CANNOT be done, but I am even booting to it.
The only reason it works is because the individual partition is less than 2TB. The limitation is when you want one partition that is over 2TB.

(In this case the BIOS is very important; without it setting up the array prior to the OS, it would not work.)
I took the risk that Intel could pull it off, and a day later it actually worked :) amazing.
BUT
How likely is it that any other RAID controller could do that? Especially an older one with a minimal user interface.
I honestly have no idea what you are referring to, here. RAID cards have a setup outside the operating system as well.

That is what makes user recommendations so important. A Perc 5/i might be the greatest thing in the world for a Linux server and 40 aging drives, but will it work with 3TB drives, and provide the array control I need?
I don't see why a Perc 5/i would be limited to a Linux server. LSi makes drivers for many operating systems. You should even be able to use the "RAID web console", just like I do for remote management and monitoring. I'm also not aware of any drive size limitations for this RAID card.
 
It's not the limitations of Linux, it's the limitations of Windows :) and I am using Windows. I have much software that will only work in Windows and will not work emulated, properly or quickly.
So it isn't anything negative about Linux; it is just about the desired platform I am intending for its use.

Much of the effort and information for RAID is about a DIFFERENT kind of use of the computer. That doesn't help MY specific uses as well.
If I was setting up a web server on Linux, that information IS available. The setup I did with XP, they said could not even exist, so how much help was learning RAID from Linux server users?

A Linux server spends its days pumping out web; that is the one thing MY computer will not be doing at all :) Mine is more likely to be running 5-6 different programs that all want everything. With a 4x CPU, processing has been OK, but little has been done to get hard drives (platters), for example, to be 3-4 times faster than 5 years ago.
 
I think you may be reading my post incorrectly. You used Linux as an example, and I clarified that there are drivers and software available for Windows and Linux, which I've personally used. There is no emulation, black magic, or chicken to sacrifice; they are written for each operating system. In addition to that, the Perc 5/i is a modified version of an LSi card. LSi wrote the Dell firmware, software and drivers. The only difference is that it has a different banner in the program. You can see in my screenshot that mine says "Intel", but Intel did not design it.

Back to one of your other questions, now that I've thought about it. I see no reason to run two different RAID cards in the same system. If you want to run RAID, put it on the RAID controller. That is what you bought it for, so use it. Obviously, if ports were limited, this doesn't apply.
 
And that is what I was saying: I did get better results from controller-to-controller data shifting, better results than a single controller. Isn't that just like some devices that, due to bottlenecks in places, work "faster" (faster depending on which benchmark is used) on half duplex than full duplex?

2 sets of controllers, 2 sets of caches, each doing their own thing, unhindered by duplexing and warring between incoming and outgoing. Plus any caching or "read ahead" that exists.

I don't really CHOOSE to run out and buy 2; the boards often come with something that sucks :) and I want to add in a real RAID card too, then use them both together, for different purposes, depending on what they can do.
Like:
If I have 2 FULL bootable systems using 2 separate controllers, the RAID card could DIE completely, but I can still boot via the internal. I have never seen a controller die, but I could assume that would be as big a problem as the hard drive croaking, which I have also never seen. So 2 is the ultimate backup. There are reports of people's controllers dying, or maybe they just don't know what went wrong.
Plus, all this STUFF can at any time have issues due to electrical problems, drivers, and operating system things; if some installation screws me out of one, Can't Touch Me :)

Plus, installing by CLONE, to or from RAID, to change things: that was also important to me, as reinstalling my entire system from scratch and re-adjusting it would take many, many days. Without separated control of some sort, it isn't as easy to get the system completely re-arranged while keeping the entire system and all software 100% functional at the same time.
(Disk and data juggling, when the data is as big as the disks themselves, and the time it takes to transfer the data when the disks are too slow.)

Like when we had cheap (but real RAID) Promise controllers, and HighPoint onboard, or an Adaptec stuffed in something, and a Tyan board with a good onboard controller.

So I can think of real-world reasons, even though it might not be necessary or practical.

But all of that stuff is DEAD and gone, because they didn't support the new interface, or the disk size increases, or changes to the OS, so I have been limping along with the onboard RAID, which I don't like.
I keep updating the drivers and trying to find drives with high platter speed, and the only thing that is REALLY faster is SSD, and that is too much money right now.
 
Thideras up in the $300 range

Here is one of the cheaper but "newer" LSI things:
http://www.newegg.com/Product/Produ...k=False&VendorMark=&IsFeedbackTab=true&Page=1
The price is not terrible; add $20 for cables.

Does it support 3TB drives? Well, it is the new technology, but I did not find the words that say that. All the talk is about 6G whatever speed, which I won't ever see in my lifetime :) It looks like it has a processor and memory.

What if this processor and memory are ONLY ever applied during a rebuild of itself?? I dunno; nowadays they can have some really stupid stuff going on (in computers). One user reports rotten actual real-world speeds. Everyone is happy about its web serving, but I won't be doing that. They can cripple a cheaper item just because I am too cheap to buy the $800 one; this is the cheaper one :)

People can even say "software RAID is faster"; yes, when the hardware sucks, or is crippled for some specific purpose, or downgraded to fix a BUG that few users have. I dunno.
 
I'm currently using a LSI 8408EM2 (mine is actually rebranded as an Intel SRCSASBB8I) as my RAID card. I picked mine up for $270 with battery backup and two breakout cables.

The card you linked is an actual hardware RAID controller and I considered purchasing that exact one a few months ago. I don't think it will have an issue with larger drives.

The onboard processor and memory are used all the time. The processor is there for parity calculations (among other important tasks) and the memory is used for caching or moving data around. During normal operation, you won't tax the processor much, but under rebuilds, you can easily max it out. Most RAID cards allow you to set the "rebuild speed" (among other speeds) so that the RAID card isn't unresponsive to you or other arrays during this time.

A good RAID card will always beat software RAID, in terms of speed and drive numbers.
 
OK, I will get it. Thanks.

I hear about this rebuilding stuff all the time. If my drives sucked so bad, or I crashed so much or caused data issues, that I ever had to do this rebuild thing very often, I'd be pulling out what is left of my hair and throwing something through the window :) Freaking DAYS to rebuild would hurt.

I am just the type who is better off with a whole-disk backup in a separate location. That way, if something actually did go wrong, it's -----> over there, and instead of some PITA slow rebuild, it is a miracle that saves me :) Plus, if lightning strikes, or another PSU starts my computer on fire, or some such thing, my backup is in a "different safe".
Or
if I toss the computer out the window :) especially.

If it is just a redundant backup, the way these RAID systems work, I don't want it dead from use (spin-up) before I would actually need it.
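For what it's worth, the "days to rebuild" fear is easy to sanity-check; here's a rough estimate (the 100 MB/s sustained rebuild rate is an assumption, not a figure for any particular card):

```python
# Rebuild time is roughly capacity divided by sustained rebuild rate.
DRIVE_BYTES = 2 * 1000**4      # one 2 TB drive
REBUILD_MBPS = 100             # assumed sustained rebuild rate, MB/s

seconds = DRIVE_BYTES / (REBUILD_MBPS * 10**6)
hours = seconds / 3600
print(round(hours, 1))         # about 5.6 hours at that rate
```

At realistic sequential rates a single 2 TB member is hours, not days; a card's "rebuild speed" throttle or heavy concurrent I/O can of course stretch that out considerably.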
 
Ahh jeez, I was trying to find the 3ware forum, and ran across this:
http://forums.storagereview.com/index.php/topic/30033-seagate-st2000dl003-and-3ware-9690sa/
In short, the Seagate Barracuda greens bailed on the guy, with a 3ware controller.

Isn't that freaking special :p
They have ears working, even if 3ware doesn't recommend it, but the Seagate "green" thing CLAIMED that they won't have the TLER problem.
Sure, they didn't even get that far :)

Argg, hey, we never had so many CHOICES and arrays of models before, and certainly never had this much trouble. It's a bloody conspiracy, I tell ya :)
Time to go back to simpler days, when they had ONE model and that one actually worked :)


So sorry, looks like I need the $6 chip to support those drives, as the $500 one can't even keep them recognized. It's OK, because it is PROfessional :)
 
I was seriously going to get the Hitachi 7K2T ones, but then I found out that, even with much extra heat and much extra power, they REALLY would not be faster for my uses; like 3-4% max, ever, for some operations.
(Plus these Seagates I got were $30 cheaper.)

Also, there is some noise about the Hitachi 7K2T being taller in height, being 5 platters not 4. I probably would not mind "lower density" for magnetic media, but the extra height means it won't fit in some places, and will reduce some cooling (air passage) in other places.

If I went with 3TBs and could at LEAST get things to work as 2TB today, that would have been cool too, but if they don't have enough "bits" to recognize them, they read as 750GB.

1 step forward, 2 steps back; if I ever get there, it will be because I went backwards around the whole world.
 
I was seriously going to get the Hitachi 7K2T ones, but then I found out that, even with much extra heat and much extra power, they REALLY would not be faster for my uses; like 3-4% max, ever, for some operations.
(Plus these Seagates I got were $30 cheaper.)

Also, there is some noise about the Hitachi 7K2T being taller in height, being 5 platters not 4. I probably would not mind "lower density" for magnetic media, but the extra height means it won't fit in some places, and will reduce some cooling (air passage) in other places.

The extra heat thing is a bit blown out of proportion.

Anyone that says the 7K2000 drives are larger in height, please forget everything else they said - that person probably can't find their way home from work without a GPS. I've run 10 of the 7k2000 drives for the last 2 years (or so) and absolutely love them. The new 7K3xxx drives are great too from a few I owned for a migration project.
 
Thanks for that information about the Hitachi drives.
About how much real speed do you think they offer in moving large files from place to place, versus some slower-spinning drive, like the 5K version of the same item?
 
Thanks for that information about the Hitachi drives.
About how much real speed do you think they offer in moving large files from place to place, versus some slower-spinning drive, like the 5K version of the same item?

Define "place to place". For me, it's all across gigabit, so it really makes no difference, as gigabit will be the inhibitor. From array to array, all internal? You'll see a speed difference, which will grow with the number of drives. I'm too tired to give a more detailed response, but someone will be along with real-world figures, I'd guess.
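To put a number on gigabit being the inhibitor (simple unit conversion, nothing vendor-specific):

```python
# Gigabit Ethernet's wire rate is the hard ceiling for any network
# copy, no matter how fast the arrays on each end are.
GIGABIT_BPS = 10**9                # bits per second

raw_MBps = GIGABIT_BPS / 8 / 10**6
print(raw_MBps)                    # 125.0 MB/s absolute maximum

# After Ethernet/IP/TCP framing overhead, real payload throughput
# usually lands somewhere around 110-118 MB/s.
```

So anything past roughly 125 MB/s is invisible over a single gigabit link, which is why internal array-to-array copies are where drive count actually shows.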
 
Define "place to place". For me, it's all across gigabit, so it really makes no difference, as gigabit will be the inhibitor.

Disk-to-disk real-world copying, if it is in some sort of mirrored array; then some basic idea of single disk-to-disk rebuild speeds in MB/s, or time for some amount of gigs, or something?

Most of my stuff gets lots of nice, cute, pretty benchmarks, but when I want to move data from one location to another, it averages about 50MB/s. All well and fine, and that's what I get when the disks actually have data on them :) and most of them are like 80% filled (or I wouldn't have that space to begin with). I would like to see a minimum of 2 times that, just because it is about time for something to get faster :)
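As a rough frame for that 50 MB/s versus the wished-for 2x, here is the idealized striping model (assuming perfectly sequential I/O and a made-up 100 MB/s single-drive rate; real copies with seeks and fragmentation will land well below these numbers):

```python
# Ideal RAID 0 sequential throughput scales with drive count, but
# copying within the SAME array makes reads and writes share the disks.
SINGLE_MBPS = 100                  # assumed single-drive sequential rate

def raid0_seq(n_drives):
    """Idealized sequential rate for an n-drive stripe."""
    return n_drives * SINGLE_MBPS

copy_within = raid0_seq(2) / 2                  # source and target on one array
copy_across = min(raid0_seq(2), raid0_seq(2))   # two separate 2-drive arrays

print(copy_within, copy_across)    # 100.0 200
```

Which is the arithmetic behind the earlier observation that two separate controllers copied faster than one array copying back to itself.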
 
I do have a real problem with heat here. Even setting aside the power consumption, 4-6 drives pumping out heat, added to the processor, the memory, and the heater of a video card, is like sitting on top of a 100W bulb. I am in California; it is 100°F outside, and I'm trying to keep it below 80°F inside, with 90+°F temps pouring up from the computer. It isn't exactly an AC :)

Each little 10W of waste heat coming off (any of) the drives combines into a long-running but slow space heater. Often a space heater is very bad here :)

The processor cools off when it isn't actually working (a dang lot of heat when it is), and the video card I can set to underclock, and it will cool off quite a bit. Drives on today's RAID (thanks partly to the MS system) don't ever spin down or cool off anymore; it's too much trouble for manufacturers to do proper power management for everyone and have stuff work. The temp of the drives is not getting out of control, so the drives are fine; it is the heat pouring into the room, and the cost of CA power, that is more of the issue.
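The space-heater math above is easy to put numbers on; a rough sketch (the 10 W per drive and the electricity rate are assumptions, not measured values):

```python
# Continuous waste heat from always-spinning drives, and what it
# costs per month at an assumed California electricity rate.
DRIVES = 4
WATTS_PER_DRIVE = 10        # assumed average draw per drive
RATE_PER_KWH = 0.30         # assumed $/kWh

watts = DRIVES * WATTS_PER_DRIVE          # 40 W, all day, every day
kwh_per_month = watts * 24 * 30 / 1000    # 28.8 kWh
cost = kwh_per_month * RATE_PER_KWH

print(watts, kwh_per_month, round(cost, 2))   # 40 28.8 8.64
```

Small per-drive numbers, but they run 24/7, which is exactly why never-spinning-down arrays feel like a slow space heater.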
 