
HP SAS Expander


cw823

Honeybadger Moderator
Don't want to hijack thid's rackmount overkill thread.

I wonder: if I add one of these expanders connected to a cheap controller card in my Ubuntu server, could I access the drives on the expander? What does Ubuntu do with a drive after sdz? Sure, I could in theory have one massive array, but is that what I really even want? Or do I want a second array? But then, if it's redundant, it would be stupid to have it connected to my main array server.

Thinking out loud here.
 
The SAS expander itself isn't seen by the server, only by a controller. So, if it's behind a RAID controller, you won't have an issue with naming.

Regarding what happens when you have more than 26 devices, I honestly can't find any information. If you want, I can test this out for you in a VM.
 
And sdb-sdz would be 25 drives, and if they're all 2TB that's one hell of an array. I'm just not sure what I want to do yet for redundancy. Offsite is certainly preferred, but a bit unrealistic imho. I'm not sure how I'd keep track of changes and how often I could synchronize, especially if I've added a couple Blu-ray rips.
 
Well, you certainly don't need to do offsite backup for your Blurays. ;)

I'm using Backblaze for offsite of files that I can't recreate/reproduce.
 
A lot of movies, 160+ TV shows; not stuff that I couldn't redo, but not stuff that I want to lose, either.

It makes more sense (to me at least) to have two separate arrays, although if they are both in the same location and a nuclear blast goes off all data will be lost. But then if there's a nuclear blast, I'll probably be blown into a million pieces at which point I won't care what happens to my data.

Each case can take 15 drives, one being for OS (Ubuntu). So 14x2tb in a RAID6 should get me far more storage than I'd use.
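For what it's worth, the 14x2TB RAID6 plan works out roughly like this (a quick back-of-envelope sketch; RAID6 reserves two drives' worth of space for parity):

```python
# Rough usable-capacity math for a 14 x 2TB RAID6 array.
def raid6_usable_tb(drive_count, drive_tb):
    """Usable space in decimal TB for a RAID6 array (two parity drives)."""
    return (drive_count - 2) * drive_tb

usable = raid6_usable_tb(14, 2)
print(usable)                             # 24 TB as the drive vendor counts it
# Drives are sold in decimal TB; the OS reports binary TiB.
print(round(usable * 1e12 / 2**40, 1))    # ~21.8 TiB
```

So around 24TB usable, or about 21.8TiB as the OS will actually report it.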

WHS isn't that bad, I just don't like its RAID1 disk-extender approach, as you lose a lot of drive space. Maybe that'll be my backup array OS and I'll just turn duplication off.

Either way the SAS expander is a neat concept, I just can't justify the cost for my particular situation.
 
lol just found a deal on an Areca 1680 & HP SAS expander that might be too good to pass up.
 
Do eet. You could then create a build thread as well.
 
Areca 1680 & HP SAS Expander will be purchased Thursday. :santa2:
 
The fun part will be the migration. I have 9TB of data to migrate. I have ordered 4 extra 2TB drives. I figure I can remove one from the RAID6, set up the new 4 plus that one in a 7.5TB array, copy until it's pretty much full, then yank the second parity drive and copy the rest. What could possibly happen, right?
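Sanity-checking that plan (my assumption here is that the 7.5tb figure means the five drives go into a RAID5; the numbers line up if so):

```python
# 4 new 2TB drives + 1 pulled from the RAID6 = 5 drives.
# Assuming RAID5 on those five, one drive's worth goes to parity.
drive_tb = 2
drives = 5
raid5_usable = (drives - 1) * drive_tb     # 8 decimal TB
raid5_tib = raid5_usable * 1e12 / 2**40    # ~7.3 TiB, close to the quoted 7.5tb
data_to_move = 9
leftover = data_to_move - raid5_usable     # TB still to copy after the first pass
print(raid5_usable, round(raid5_tib, 1), leftover)
```

In other words, the first pass fits about 8TB of the 9TB, leaving roughly 1TB to copy after the second parity drive comes out.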
 
..... What could possibly happen, right?

A big bad mojo lightning strike passes through all your super-duper surge arresters while you're in the middle of transferring ... :eek: ... j/k . :D

Sub'ed, interested in that expander thingy, keep us updated Chad ! :thup:
 
As long as my flux capacitor stays online I should be ok.

I'll definitely end up with more storage than I need once I get everything migrated and back online, though. What are the chances I lose a drive somewhere along the way?

With you on the expander, my plan is a primary server with the Areca 1680, which will feed another 15-drive-capacity case with just a dummy motherboard and PSU in it. Should work......should.
 

I found this on another forum.

In older kernels this was entirely kernelspace (devfs?), and in 2.6.15+ it's mostly userspace (udev).

For example, *BSD and its cousin, Mac OS X, create device files in the /dev/diskxsx style: disk1s1 is the first partition of the first disk, etc.

Anyways, to answer the damn question, often there will be two letters added: /dev/sdaa1, /dev/sdab1. This happens on RHEL and SUSE, according to this authoritative-looking IBM handbook.

Don’t put complete faith in any of this—it seems like this is very possibly something that varies across architectures/drivers/distros/whatever.
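The lettering scheme described in that quote can be sketched as bijective base-26, the same way spreadsheet columns run (a sketch of the naming pattern, not the actual kernel code):

```python
def sd_name(index):
    """Map a 0-based disk index to its /dev/sdX name.

    Bijective base-26, like spreadsheet columns:
    0 -> sda, 25 -> sdz, 26 -> sdaa, 27 -> sdab, ...
    """
    letters = ""
    index += 1
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("a") + rem) + letters
    return "sd" + letters

print(sd_name(25), sd_name(26))   # sdz sdaa
```

So drive number 27 simply becomes /dev/sdaa, and the scheme never runs out of names.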

I have seen something similar to this with a Linux host connected to SAN storage. Essentially it was presented with hundreds of devices like /dev/sda102, and then you would use a host-based volume manager to group what you need into usable storage. But it is very possible that what I was looking at had a special driver installed to present the devices in a special way, since I read you can't have more than 20-something logical devices after the physical (sdX) part.

I'm pretty sure the controller would show to Linux each drive by itself (JBOD). In my setups I prefer it this way and would use mdadm to do the RAID setup.
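To illustrate the mdadm approach (device names below are examples only; this sketch just builds the command string, since actually creating an array needs root and real disks):

```python
# Sketch: each drive exposed as JBOD, then software RAID6 across them
# with mdadm. Builds the command rather than running it.
import shlex

def mdadm_create_cmd(md_dev, level, devices):
    cmd = ["mdadm", "--create", md_dev,
           f"--level={level}",
           f"--raid-devices={len(devices)}"] + devices
    return shlex.join(cmd)

disks = [f"/dev/sd{c}" for c in "bcdefghijklmno"]   # sdb..sdo, 14 drives
print(mdadm_create_cmd("/dev/md0", 6, disks))
```

The nice part of doing it in mdadm rather than on the controller is that the array isn't tied to one brand of RAID card.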

For a setup like what you were talking about with 25 drives: if RAID 1, do whatever; you could do 12 drives mirrored with a hot spare and it should work fine.

For RAID 5, don't build an array with more than about 7 drives, or you could be looking at a higher chance of a dual drive failure. For RAID 6 you can go to a little less than double that, so about 12 drives with 10 usable for data.

I would suggest something like five RAID 5 arrays using five drives each then using LVM to make logical volumes on those arrays and split up the usable data as you see fit, you could just group all the arrays into one big volume if that is your goal. With 25 drives this would be a safer config option and you would still have 20 drives for data.
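Comparing the layouts being discussed for 25 x 2TB drives (quick back-of-envelope, decimal TB; my numbers, not anyone's benchmark):

```python
# Usable capacity for a few ways to carve up 25 x 2TB drives.
def usable_tb(arrays, drives_per_array, parity_per_array, drive_tb=2):
    data_drives = arrays * (drives_per_array - parity_per_array)
    return data_drives * drive_tb

one_big_raid6 = usable_tb(1, 25, 2)   # 46 TB, but a very wide array to rebuild
five_raid5s   = usable_tb(5, 5, 1)    # 40 TB, the suggested safer layout
print(one_big_raid6, five_raid5s)
```

So the five-array layout gives up about 6TB versus one giant RAID6, in exchange for smaller failure domains and quicker rebuilds per array.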
 