
PROJECT LOG The 30 and 45 hard drive server idea


Automata

Destroyer of Empires and Use
Joined May 15, 2006
Hello all, I've been doing a ton of research this week, working with both software RAID and hardware RAID, so I'm in a good mood. To make that even better, I thought of an idea that would be absolutely amazing to see. I'm sure that some of you have heard of the Backblaze server; link if you haven't. The design is ingenious: 45 hard drives in a vertical orientation with backplanes. The problem is, these cases cost a blazing $750 each (not including the $450 for backplanes, or hard drives or other server components). Quite costly for a home setup. I've had my Norco 470 sitting in my rack doing absolutely nothing. I've been working with RAID this entire weekend and it got me thinking about doing this here.

I'm going to test this out and see if I can get this working. I'm using a Norco 470 that costs around $87 for the unit itself. Pick it up when it has free shipping, if you want it; otherwise that is another $27. The idea is to use the same hard drive orientation to achieve the same result at a substantially lower unit cost. With this case, you can run two different configurations: a 30 drive setup and a 45 drive setup. If you use an eATX motherboard, you will be limited to 30 drives since the motherboard takes up more space. If you stick with a normal ATX motherboard, you could do the full 45. It may even be possible to fill the ENTIRE CASE with drives and use another server to process the data.

Bottom line: depending on the size of the drives, you could fit anywhere from 30TB (30x 1TB) up to a massive 90TB (45x 2TB) in a SINGLE 4U UNIT. Unit cost would be around $500 per case, not including any hardware (including backplanes). Combine this with software RAID and you have some extremely cheap space.
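For anyone checking the math, the two capacities work out like this (a quick Python sketch; the drive counts and sizes are the ones quoted above):

```python
# Capacity of the two Norco 470 layouts described above.
def total_tb(drive_count, tb_per_drive):
    return drive_count * tb_per_drive

print(total_tb(30, 1))  # 30 drives x 1TB = 30TB (eATX layout)
print(total_tb(45, 2))  # 45 drives x 2TB = 90TB (ATX layout)
```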

As of right now, I've stripped the case down and verified that the drives will fit. This will be a project log as I get the money to test this out. Depending on how this works, I may keep it and upgrade my other server to this case. It will all depend on the results.

(Photos: norco_1.JPG, norco_2.JPG, norco_3.JPG, norco_4.JPG)
Am I crazy? Absolutely.
 
Ran some numbers because it would be fun. If I were to fit as many hard drives in a case as possible, I could do 15 backplanes. Remove two backplanes to fit the power supply and that leaves 13 for the 4U case. For data, you can use a 2U server with a SATA card, then run cables out the back to the 4U portion. This leaves us with 6U per "unit". In the 4U portion, you can fit 65 hard drives (13 x 5). Assuming 2TB hard drives, this would give you a capacity of 130TB per 6U of server rack. With a normal 42U rack, you could do 7 units per rack. This would allow for 910TB per rack. Almost an entire petabyte of storage. Once the 3TB drives hit, you could have 1.365 petabytes. That absolutely blows my mind.
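The arithmetic above checks out; here is the same rack-density math as a quick Python sketch, using only the numbers from the post:

```python
# Rack density math from the post: 13 usable backplanes of 5 drives
# each in the 4U box, plus a 2U head server per unit.
BACKPLANES = 13
DRIVES_PER_BACKPLANE = 5
UNIT_HEIGHT_U = 6                # 4U storage + 2U server
RACK_HEIGHT_U = 42

drives_per_unit = BACKPLANES * DRIVES_PER_BACKPLANE   # 65 drives
units_per_rack = RACK_HEIGHT_U // UNIT_HEIGHT_U       # 7 units

for tb_per_drive in (2, 3):
    rack_tb = drives_per_unit * units_per_rack * tb_per_drive
    print(f"{tb_per_drive}TB drives: {rack_tb}TB per 42U rack")
# 2TB drives: 910TB per 42U rack
# 3TB drives: 1365TB per 42U rack
```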

I have nowhere near the money to make this happen, but I want to come up with the server design so that if someone wants hard drive space at this density, it is available.

I also have no idea what any home user would do with this much storage.
 
awesome!

I love massive storage setups!!! hard drives are sexy!

nick

Edit: so you're not making one? Maybe even a small one... :(
My hopes and dreams have been dashed
 
My plans are to make at least one case as a proof of concept. If people want to buy them, I'd be willing to make more. The problem for me is actually filling the cases, not making them. Even with cheap 1tb drives, I'm looking at $1500 for a 30 slotter or $2500 for a 45 slotter. That is way out of my price range.
 
Awesome, subscribing to this thread, and keep us updated! I am very interested in this!
 
Mainly trying to figure out how to mount it. I found a place that would make that part for me, but it was a lot more than I was willing to pay.
 
I was waiting anxiously for this to come up after seeing your initial thread. Really excited to see this get done! I'm currently building the first generation of my primary file server at the moment (the initial 5GB RAID5 array is building atm), but the case I had to use will quickly run out of space.

Are you going to be using some type of hot-swap adaptation for this, such as hacking up some Lian-Li or CM 5-in-3 drive bay units, or are you going with a similar design to the Backblaze box? And what type of backplanes will you be using? (as I can't find the chyangyang one Backblaze used lol)
 
I'm going to use the same backplanes if I can. They are spendy, but I'll see if there are alternatives. I'll need to know exactly what backplane I'm using, along with how readily available they are. I can then give exact measurements so others can follow or replicate my design.

I've basically got my design done, I just need to see how much it will cost to make.
 
Nice concept. Very similar to some of the commercial "silo" servers. Sun makes some 48 drive silo servers using 3.5" SAS.

I converted an old 16 drive server from PATA to SATA as an archive server for work, partially as a proof of concept. Later, I added additional archive servers.

Supermicro makes high density servers with numerous hot swap bays. I've been using the SC846 24 bay SAS/SATA chassis for archive and larger servers. The SC847 has 36 or 45 bays in 4U. Your costs are lower, but I have to have redundant power supplies and other high availability items for work. I have also seen a 72 drive 2.5" model.

Currently, I have several 24TB formatted archive servers and am awaiting approval on a 20TB server to host a new backup and recovery platform.
 

Redundant power can be added if this were to need it though ;)
http://www.dark-circuit.com/directron/sorteditems/redundanta.php

http://www.directron.com/tc1350rvn3.html


I actually used a smaller one in an old xeon rig for a while until the BIOS committed suicide :)

*edit* 72 bay, hot swap 4U supermicro case http://www.supermicro.com/newsroom/pressreleases/2010/press100817_storage.cfm
 
Definitely, but the costs can get prohibitive. It actually worked out cheaper to buy a new case with backplanes and redundant power than to reprovision older cases. Sad, but true. This is based on business need though and not use at home. I raised enough eyebrows with the Global IT group, but it's cheaper than adding to the SAN for an order of magnitude more.

Now if I can just get time and funds for my own stuff...
 

Ya, when it comes to business class, the retrofit is usually a good deal more costly unfortunately. Out of curiosity, what was the old server that you converted to the new SATA build?

And on an unrelated note... RAID build time on a HighPoint 2320 is sloooooooooooow :bang head
 
Yeah, I don't blame you. That is going to be my main concern, speed. If I can exceed gigabit speeds now, I'll be happy. If I can exceed 10gigabit speeds, I'm not sure what I'll do. Probably go fiber.

I would absolutely run a fiber cable from my server to my desktop for insane data transfer. No TCP/IP junk to get in the way.
 
Ya, though I was having issues with my current file server not serving up files at what I felt should be possible speeds for gigabit and a single SATA drive, but I could be wrong (~22MB/s tops). Since my new dedicated file server uses identical components to my main rig, I should see much better results, combined with being able to use bonded NICs and a real RAID5 array.
 
Yeah, that is way lower than what it should be. Writing to my server's RAM drive, I get a solid 86.8MB/sec (read or write), and even that is low. Gigabit should be around ~120MB/sec. I think my switch is to blame; it is one of those "green" versions and I think it can't keep up. No matter what I change in the NIC settings on the server or client, it maxes out at exactly 86.8MB/sec. My Perc 5/i can do around 500MB/sec read speed on the 7 drive RAID5 array. I just wish seek performance were better. If I boot a VM or do any two disk activities at once, performance just plummets. When I was creating a 22 drive software RAID in a VM, I couldn't even listen to music on the server.
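For what it's worth, that ~120MB/sec figure matches what gigabit can actually deliver once Ethernet framing overhead is subtracted. A rough sketch (the frame overhead numbers are standard Ethernet constants, not measurements from my setup):

```python
# Theoretical TCP payload throughput on gigabit Ethernet.
# Standard framing: a 1500-byte MTU frame carries 1460 bytes of TCP
# payload; on the wire each frame costs 1538 bytes (14-byte Ethernet
# header + 4 FCS + 8 preamble + 12 inter-frame gap on top of the 1500).
LINK_BPS = 1_000_000_000
TCP_PAYLOAD = 1460
FRAME_ON_WIRE = 1538

usable_mb_per_sec = LINK_BPS * TCP_PAYLOAD / FRAME_ON_WIRE / 8 / 1e6
print(f"~{usable_mb_per_sec:.0f} MB/sec usable")  # ~119 MB/sec
```

So ~119MB/sec is about the realistic ceiling before NIC or switch quality even enters the picture.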

This project would allow me to put VMs on their own drive(s) to keep performance up. I can also give media its own drive, along with backups.

I discussed a lot with the co-worker who is assisting me in building this, and I have a much better idea of how I want to do it now. I'm going to look at parts tomorrow to see what I can do.
 
For me, I know it's not my switch. Dell PowerConnect 2824 ftw. I've seen mention in other places ([H] for example) that using cheap-o gig cards can have negative effects.
 