
Dumb speed freak questions.


-=Mr_B=- (Member; joined Aug 18, 2002; location: Sweden)
What would be the five fastest configurations to run hard drives in, not limiting me only to SCSI? For example:

RAID 10 SCSI
RAID 5 PATA
single SATA
and so on. The single SATA is not an option, and the PATA RAID 5 isn't going to be fast at all, yeah, I know. What I DON'T know is what options I have and what hardware I would need. Should I try to convince my boss to go for a two-channel U320 controller and RAID 10? Any other options that would give GOOD fast access times and leave me with "somewhat" of a backup/failsafe solution, so I don't have to back up 1TB a week, or similar?

Thanks, I'm not quite the guru here, enlighten me masters *kissing up to egos, don't get offended by me admitting it* ;-)

B!
 
I would think you would want SCSI, for starters. CRC checking and the like will ensure data integrity, and 15k or 10k SCSI drives will have excellent seek times.

As for redundancy, RAID 1, 5 or 10 would be your best bet, and I think RAID 10 would be the fastest option.

RAID 1 uses n drives where n is a multiple of 2. You get n/2 x disk capacity of usable space; the other half holds an exact mirror of the drives.

E.g.:
Four 100GB drives: 2 x 100GB = 200GB of storage space; the other two drives mirror the first two.

RAID 5 uses n drives where n is at least 3. You get (n-1) x disk capacity of storage space, and the equivalent of one drive's space is used for parity, distributed across all the drives.

E.g.: 3 x 100GB drives gives you 2 x 100GB = 200GB of storage space; data is striped across the drives (like RAID 0) to improve performance, and the parity lets the array reconstruct the contents of a failed drive.

RAID 10 uses n drives where n is an even number of at least 4. You get n/2 x disk capacity of usable space, and the other n/2 holds an exact mirror of the striped data.

E.g.: 4 x 100GB drives gives 2 x 100GB = 200GB of space; data is striped across two mirrored pairs, so drive 2 mirrors drive 1 and drive 4 mirrors drive 3.

RAID 10 should be faster than RAID 5, particularly for writes, since it doesn't have to compute parity.
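If you want to play with the capacity arithmetic yourself, here's a quick Python sketch of the rules above; the drive counts and sizes are just example inputs, not recommendations:

```python
# Usable capacity for the RAID levels discussed above (illustrative sketch).
# drive_gb and n_drives are example inputs, not a recommendation.

def usable_capacity(level: str, n_drives: int, drive_gb: int) -> int:
    """Return usable space in GB for a given RAID level."""
    if level in ("raid1", "raid10"):
        # Half the drives hold mirrors of the other half.
        assert n_drives >= 2 and n_drives % 2 == 0
        return (n_drives // 2) * drive_gb
    if level == "raid5":
        # One drive's worth of space goes to distributed parity.
        assert n_drives >= 3
        return (n_drives - 1) * drive_gb
    raise ValueError(f"unknown RAID level: {level}")

if __name__ == "__main__":
    print(usable_capacity("raid10", 4, 100))  # 200 GB, as in the example above
    print(usable_capacity("raid5", 3, 100))   # 200 GB
    print(usable_capacity("raid5", 5, 73))    # 292 GB
```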
 
Thanks for your input David, so far it seems you agree with my primary intention.. it partly depends on how much space the boss decides we need... How about "lower cost" solutions, i.e. are there any hardware SATA or PATA RAID cards that would perform well enough to drop in as third- or fourth-place options?
(Assuming RAID 10 takes the lead, RAID 5 second place, and so on from there.)

(Oh, I already studied the different types of RAID a bit, I just don't know what they turn out to be in "real life" and how big the comparative performance gap is.)
Thanks.
B!
 
What exactly are you trying to accomplish with this RAID setup?

If you are going to be doing Video Editing for example, then RAID 10 would be my approach. If you are looking for large storage for say a file server, RAID 5 would be best.

It would be cheaper to purchase a good SATA RAID adapter and some standard SATA drives and run RAID 5. It would be MUCH more cost effective than RAID 5 SCSI. If you need tons of storage space, good performance, and good reliability, get the RAID 5 SATA. If you need enterprise reliability, not as much space, and great performance, then SCSI is your answer (but your pocketbook had better be deep).
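As a rough back-of-the-envelope on the cost side (the prices and sizes below are placeholder numbers, not quotes, so plug in whatever your supplier actually charges):

```python
# Rough cost-per-usable-GB comparison sketch. All prices are hypothetical
# placeholders; substitute real quotes before deciding anything.

def cost_per_usable_gb(drive_price, drive_gb, n_drives,
                       redundant_drives, controller_price):
    usable_gb = (n_drives - redundant_drives) * drive_gb
    total_cost = n_drives * drive_price + controller_price
    return total_cost / usable_gb

# Hypothetical numbers for illustration only:
sata_raid5 = cost_per_usable_gb(drive_price=120, drive_gb=250, n_drives=4,
                                redundant_drives=1, controller_price=300)
scsi_raid10 = cost_per_usable_gb(drive_price=500, drive_gb=73, n_drives=4,
                                 redundant_drives=2, controller_price=600)
print(f"SATA RAID 5:  ~${sata_raid5:.2f} per usable GB")
print(f"SCSI RAID 10: ~${scsi_raid10:.2f} per usable GB")
```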

Again, it all comes down to what you intend to do with this setup.

Foxy

BTW we run a 200GB RAID 5 PATA file server and it works perfectly :D
 
The idea is to have an SQL server standing behind a set of three web servers, keeping all the data the web servers already have, plus the little that gets added on the few pages supporting that.

Then there is the other system, which is supposed to provide space to our partners outside the building for file storage.

Both systems kind of need a high reliability factor, and preferably a low maintenance cost.
RAID 5 = a drive fails, replace it and you're good to go in 99% of cases (or so I've been told).
RAID 10 = a drive fails, who cares, replace it, you'll hardly notice it in the first place, and losing two drives simultaneously, with them being in the same position on both chains, is more than slightly unlikely.
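A quick back-of-the-envelope on that, assuming random, independent failures (purely illustrative): once one drive has died, any second failure kills a RAID 5 array, but a RAID 10 array only dies if the second failure hits the dead drive's mirror partner.

```python
# Given one drive has already failed in a 4-drive array, what fraction of
# possible second failures kill the array? Purely illustrative; assumes the
# second failure is equally likely to hit any of the remaining drives.

def second_failure_fatal_fraction(level: str, n_drives: int = 4) -> float:
    remaining = n_drives - 1
    if level == "raid5":
        # Any second failure loses data.
        return 1.0
    if level == "raid10":
        # Only the failed drive's mirror partner is fatal.
        return 1 / remaining
    raise ValueError(level)

print(second_failure_fatal_fraction("raid5"))   # 1.0
print(second_failure_fatal_fraction("raid10"))  # ~0.33
```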

Of course, in both cases you have to make backups "just in case".
Are there any other options I haven't considered here?

Then there is performance. I've been told (yeah, again, told, web pages speak to me ;-)) that RAID 10 outperforms RAID 5 in both seek and transfer performance (due to mirrors versus parity checking).
The boss yells "performance is the key". Will there be a noticeable difference between RAID 5 and RAID 10 (or other suggested methods) in access times when going over FTP? I'm guessing access times might differ, but transfer rates from both setups should be more than able to fill most hookups we could get here (at present only a 4Mb connection anyway).
However, as said, the SQL server WILL be on the internal LAN and accessed frequently, whereas the storage server will most likely be "less accessed" (pictures and "stuff" on the web pages are of course stored on the web server, at least that's where they are today).

I think I covered "most" of it, now shoot;
all / any suggestions are welcome.
I ask since I know my skills and knowledge in the field are lacking, especially experience, so I'm thankful for all the help and advice I can get.
B!
 
I'm gonna shoot Sonny a PM - he knows loads more than me about RAID :)
 
Having been through multiple RAID drive failure recovery too many times, let me stress that RAID is not and never can be a backup! Make plans accordingly.

Internal transfer rates for reads are better with RAID 10 for the reasons stated, but later controllers are better at parity generation.

Given that you want to run the SQL DB and file server duties on the same platform with a desire for low maintenance costs, I'd go with the following hardware (assuming you are using a server chassis with PCI-X):

LSI 320-2X 2 channel U320 SCSI RAID controller. This will accept up to 512MB of cache memory and should come with 256MB stock. It will also allow good performance in either RAID level.

5x Fujitsu MAS3735 73GB 15K U320 SCSI drives. This allows for the creation of a RAID 10 array and a dedicated hot spare. If these drives exceed the budget, then the 36GB drives will work, but have a slightly lower throughput.

The drives should be split between the channels; even in the event of a failure, three drives on a channel will not saturate it. RAID 5 can easily handle the tasks as well, but adds complexity. I typically use RAID 5 for larger arrays in external expansions and RAID 1 and 10 for internal arrays.
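To make that layout concrete, here's a small sketch of how the five drives could be split; the channel assignment is just one way to arrange it, not the only valid option:

```python
# Sketch of the suggested 5-drive layout: a 4-drive RAID 10 plus one hot
# spare, split across the controller's two channels. Drive size is the
# 73GB model mentioned above; swap in 36GB if the budget requires it.

DRIVE_GB = 73
layout = {
    "channel_0": ["mirror_pair_A_drive_1", "mirror_pair_B_drive_1", "hot_spare"],
    "channel_1": ["mirror_pair_A_drive_2", "mirror_pair_B_drive_2"],
}

n_data_drives = 4                              # two mirrored pairs, striped together
usable_gb = (n_data_drives // 2) * DRIVE_GB
print(f"Usable RAID 10 capacity: {usable_gb} GB")                       # 146 GB
print(f"Total drives (incl. hot spare): {sum(len(v) for v in layout.values())}")
```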

If you have a planar that supports zero channel RAID (ZCR), this is another possibility, though I prefer using hardware controllers for dedicated arrays, due to better configuration options.

Since this will be a business critical server, I can not recommend any IDE based drives. IDE, either PATA or SATA, does not perform CRCs on written data at the drive level (SATA can perform CRCs on data at the controller level) and can lose data to corruption on writes to bad sectors.
 
The different applications will have different hardware, hopefully designed to meet each one's different needs and able to handle a more widespread and growing load.

Yes, the RAID in itself is no backup, so backups must still be made, but nonetheless, RAID 10 leaves you with an identical copy of each drive, and RAID 5 adds "some" possibility of rebuilding the lost data from a failed drive. This in itself is important, to minimize the data loss that would/could happen if a drive dies on a Thursday and backup night is Friday.

I'll look into the mentioned drives and RAID card; so far I haven't had the time. Thanks for your time/input (won't have the time until tomorrow, sadly).

B!
 