
PROJECT LOG: The 30 and 45 hard drive server idea

A Dell PowerConnect is next on my list of network upgrades. I'm going to wait, though, since I said I would be buying some motherboards off a member here. Not to mention, 85MB/s is just fine for anything I want to save to my server.
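For reference, here's a quick back-of-the-envelope on where 85MB/s sits relative to gigabit Ethernet (just a sketch; the ~10% overhead figure is an assumption, not a measurement):

Code:
# Rough gigabit Ethernet throughput estimate (overhead figure is an assumption).
line_rate_bits = 1_000_000_000                  # 1 Gb/s line rate
wire_speed = line_rate_bits / 8 / 1_000_000     # 125 MB/s theoretical maximum
practical = wire_speed * (1 - 0.10)             # assume ~10% protocol/framing overhead
print(f"theoretical: {wire_speed:.0f} MB/s, practical: ~{practical:.0f} MB/s")
# 85 MB/s is in the right ballpark for real-world gigabit file transfers.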
 
Redundant power can be added if this were to need it, though ;)
http://www.dark-circuit.com/directron/sorteditems/redundanta.php

http://www.directron.com/tc1350rvn3.html


I actually used a smaller one in an old Xeon rig for a while until the BIOS committed suicide :)

*edit* 72-bay, hot-swap 4U Supermicro case: http://www.supermicro.com/newsroom/pressreleases/2010/press100817_storage.cfm

The only major problem with the 72-bay is that the bays are all 2.5" drives. I know SAS will work, but I'm not sure if SATA will be compatible. It's probably in the specs; I just haven't looked.
 
Yeah, when it comes to business-class gear, the retrofit is usually a good deal more costly, unfortunately. Out of curiosity, what was the old server that you converted to the new SATA build?



And on an unrelated note... RAID build time on a HighPoint 2320 is sloooooooooooow :bang head

The old server was built by Atipa: dual Prestonia Xeons on an Asus PC-DL, a 3Ware 9500-series controller, a 40GB PATA OS drive, and 1.2TB of 120GB PATA drives in RAID 5. The case was configured with 16 hot-swap PATA trays and four backplanes. It was ugly, small, and slow. I couldn't find compatible backplanes, so I eliminated them. I kept the base chassis and gutted everything else. The current incarnation is a single quad-core Nehalem, 6GB of RAM, and an Areca 1260ML on a Supermicro board with dedicated IPMI (KVM over IP). I haven't benched it, but I probably get around 300-400MB/s from the array.
 
SAS is backwards compatible with SATA. My RAID controllers are all SAS, but I use SATA drives for mass storage.
 
I had heard rumors of incompatibility with either controllers or backplanes from our SAN engineer in Dublin. I had assumed compatibility prior to that, but didn't press for details. Then again, unless I get some sleep soon, I am likely not going to remember much accurately.
 
The incompatibility would have to be the controller itself. SATA drives fit SAS cabling and backplanes; the connectors are keyed so it only works in that direction, but the physical link is shared.

I believe the main difference is the protocol that is transmitted over the cable.
 
It may have been incompatibility with the controllers running both kinds of drives. I know the later Dells can run either, but not both.
 
Yeah, I don't blame you. That is going to be my main concern: speed. If I can exceed gigabit speeds now, I'll be happy. If I can exceed 10-gigabit speeds, I'm not sure what I'll do. Probably go fiber.
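To put numbers on that, here's a rough comparison of the array estimate quoted above (~300-400MB/s) against 1GbE and 10GbE line rates (a sketch; the 350MB/s midpoint is an assumption):

Code:
# Which link does a ~300-400 MB/s array saturate? (350 is an assumed midpoint)
array_mb_s = 350
for name, gbps in {"1GbE": 1, "10GbE": 10}.items():
    link_mb_s = gbps * 1000 / 8          # line rate converted to MB/s
    verdict = "saturated" if array_mb_s >= link_mb_s else "headroom left"
    print(f"{name}: {link_mb_s:.0f} MB/s line rate -> {verdict}")
# 1GbE (125 MB/s) is the bottleneck; 10GbE (1250 MB/s) leaves plenty of room.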

I would absolutely run a fiber link (Fibre Channel, so no TCP/IP junk to get in the way) from my server to my desktop for insane data transfer.

Just finished the rebuild, formatted, and ran CrystalDiskMark.

[Attached image: CrystalDiskMark results, 4.5TB RAID 5]
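As a sanity check on that capacity: RAID 5 usable space is (drives - 1) x drive size, so a 4.5TB array of the same 1.5TB Barracudas would imply four drives (an inference; the drive count isn't stated here):

Code:
# RAID 5 usable capacity: one drive's worth of space goes to parity.
# (Sketch; the 4-drive count is inferred from 4.5 / 1.5 + 1, not stated above.)
drive_tb = 1.5
n_drives = 4
usable_tb = (n_drives - 1) * drive_tb
print(f"{n_drives} x {drive_tb}TB in RAID 5 -> {usable_tb}TB usable")   # 4.5TB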

And for comparison, this is what I'm upgrading from:

Code:
Seagate Barracuda 1.5TB x2 in RAID 0 on an X58 board using Intel Matrix RAID (primary OS drive, single large partition)
-----------------------------------------------------------------------
CrystalDiskMark 3.0 x64 (C) 2007-2010 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 154.361 MB/s
Sequential Write : 141.203 MB/s
Random Read 512KB : 30.551 MB/s
Random Write 512KB : 27.744 MB/s
Random Read 4KB (QD=1) : 0.369 MB/s [ 90.1 IOPS]
Random Write 4KB (QD=1) : 1.286 MB/s [ 313.9 IOPS]
Random Read 4KB (QD=32) : 1.140 MB/s [ 278.3 IOPS]
Random Write 4KB (QD=32) : 1.345 MB/s [ 328.3 IOPS]

Test : 1000 MB [C: 94.3% (1317.8/1397.3 GB)] (x5)
Date : 2010/08/28 9:18:38
OS : Windows Vista Ultimate Edition [6.0 Build 6000] (x64)

Same model drives as before (Barracuda 1.5TB) but in a RAID 1 mirror on a Gigabyte board. 120GB partition, primary boot drive.
-----------------------------------------------------------------------
CrystalDiskMark 3.0 (C) 2007-2010 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 127.054 MB/s
Sequential Write : 73.348 MB/s
Random Read 512KB : 44.072 MB/s
Random Write 512KB : 48.812 MB/s
Random Read 4KB (QD=1) : 0.626 MB/s [ 152.8 IOPS]
Random Write 4KB (QD=1) : 1.287 MB/s [ 314.3 IOPS]
Random Read 4KB (QD=32) : 0.657 MB/s [ 160.5 IOPS]
Random Write 4KB (QD=32) : 1.354 MB/s [ 330.5 IOPS]

Test : 1000 MB [C: 8.3% (9.3/112.3 GB)] (x5)
Date : 2010/08/27 23:48:29
OS : Windows Server 2003 Enterprise Edition SP2 [5.2 Build 3790] (x86)


Remaining partition of the drives above. Well, now I know why I was having speed issues with this server.
-----------------------------------------------------------------------
CrystalDiskMark 3.0 (C) 2007-2010 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 87.865 MB/s
Sequential Write : 70.055 MB/s
Random Read 512KB : 28.952 MB/s
Random Write 512KB : 42.996 MB/s
Random Read 4KB (QD=1) : 0.343 MB/s [ 83.7 IOPS]
Random Write 4KB (QD=1) : 0.790 MB/s [ 192.9 IOPS]
Random Read 4KB (QD=32) : 0.579 MB/s [ 141.4 IOPS]
Random Write 4KB (QD=32) : 0.854 MB/s [ 208.4 IOPS]

Test : 1000 MB [E: 87.5% (1124.1/1284.9 GB)] (x5)
Date : 2010/08/27 23:58:02
OS : Windows Server 2003 Enterprise Edition SP2 [5.2 Build 3790] (x86)

/end thread jack
 
So, here is the idea for this server. This specific case will be limited to 30 drives since I don't want to relocate the fan divider or be limited to ATX-sized motherboards. It also saves me money on the backplanes and hard drives.

Backplanes:
There will be 6 backplanes in total, with 5 drives on each. Each backplane will be mounted on standoffs to give clearance for the power and data cables while keeping it as small as possible. Each standoff will be either metal or plastic, depending on how I fasten them. The issue I have is that there is zero clearance between the bottom of the case and the next server. This means I can't use rivets, as they stick out; same with screws. The metal is too thin to countersink the holes. The co-worker who has been helping me suggested epoxying them to the bottom of the case. This makes it a bit more permanent, but I don't think that is really an issue.

Front "plate":
Since the entire front has been removed, I need to redesign the front of this case. I'm thinking either a thin sheet of metal or a chunk of plastic. This will house either 3x 120mm fans or 5-10x 80mm fans. If the backplanes support activity lights, I want a panel in the front so I know which drives are being used, keeping them in the same orientation/order so I know exactly which drive is which. This portion will be easy to design/fabricate.

Retention system:
The drives need some sort of retention system to keep them in place and keep them from hitting each other. Ideally, this will also help reduce vibration/noise. There are going to be 3 support beams (left, right, center) that separate the drives and provide structural support. Each of these beams will be roughly 4" apart (edge to edge). To support the drives in the other direction, I'm going to use piano wire that is ~0.094" thick and silver-solder it to the supports. The supports on the edges (left, right) are going to be L-beams, and the wire will be soldered to the side. The wire will cross over the center support and will probably be soldered there as well. I'm still trying to figure out how to secure the L-beams to the side of the case, but that is fairly trivial.

I'm not sure on the total cost yet. I also need to make a jig so that I can make these easier. In addition to that, I'll make a detailed parts list on what I bought, including tools and materials.
 
Not that I'll ever have the need for a server on this scale (or justification to spend the money), but I've gotta keep an eye on this. I love overkill projects...


Well, overkill for home use :)
 
On the backplanes: I'd say epoxying the standoffs to the bottom of the case is the best bet. The permanence doesn't really matter much unless you decide to completely scrap this whole thing at some point, and even then there are ways to remove them.

On the front plate: I'd go with the 120mm fans; they allow more airflow for less noise and, IMO, look better.

On the retention system: get a thin sheet of rubber that spans across the supports, cut holes in it the size of the hard drives, then glue it to the supports. Simple retention + dampening system :thup:
 
Interesting idea with the rubber. How thick were you thinking?

It has to be strong enough to keep 30 hard drives from moving. Since it really won't require much pressure to keep a drive vertical, the main "problem" is support from the bottom. Even then, that wouldn't be that difficult to handle (horizontal bars). I'll look into it.
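For a rough sense of the load the retention system is up against (the per-drive weight is an assumption; typical 3.5" drives run around 0.6-0.7kg):

Code:
# Rough total weight of the drive pool (per-drive weight is an assumption).
drive_kg = 0.65                      # typical 3.5" drive, ~0.6-0.7 kg
for count in (30, 45):
    print(f"{count} drives: ~{count * drive_kg:.1f} kg")
# ~19.5 kg at 30 drives, ~29 kg at 45 -- the bottom support carries the real load.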
 
Actually, now that I think of it, foam would be a much better option. Way easier to work with. Looking around to see what I can get.
 

I was thinking rubber for its durability and longer-lasting qualities, not to mention being 'sticky' when stuff is wedged against it. And I was figuring they would be resting on the backplanes from the bottom for support, or am I thinking backwards?

As for thickness, I was thinking 1/8-1/4" of semi-soft rubber, with the holes being very slightly smaller than the drives for good stability. Also, if they were wedged in there a bit, it would allow the rubber to bear some of the weight.

http://www.mcmaster.com/#rubber/=96tgf2

*edit* And you may be able to get the rubber cheaper if you look for alternate sources, like, say, cutting up a door mat or something.

I was actually planning on doing this exact solution for a similar thing a while back, when I needed to add storage to a server that had no more spots. I was going to use two sections of U-channel aluminum to make 2 frames, with rubber inside the frames and holes just big enough for the drives. I would then space them apart the proper distance and rivet them to the floor of the case. And bingo: quiet, dampened hard drive storage ;) The only problem is I made a mistake when I ordered the mats and got rubber that was too hard. And eventually the need disappeared.
 
Nothing yet, money is the main issue, but that might be "resolved" this week. Don't want to discuss details in the open.

I'm still planning and looking around, though.
 
So I should watch the news to see you doing a smash-and-grab of hard drives from a local Micro Center? :p
 
Whoa whoa whoa, don't go telling everyone. How else am I going to get 45 drives to test this?
 