
Project: Rackmount Overkill

That is a CRAZY amount of hard drives!

I am interested in hearing what OS you are going to put on the file server, and what FS you are going to use.
 
That is a CRAZY amount of hard drives!

I am interested in hearing what OS you are going to put on the file server, and what FS you are going to use.
I was going to go with 2k3 for sure, but I keep wanting to go back to CentOS like on my web server. This allows me to use the very advanced file systems that I keep hearing about.
 
Well, if I were you I'd stick with 2003. I plan to move my web server over to Linux some time, but for my file server 2003 just does the job so well, plus it's already set up, whereas on my web server I'm already running Apache and MySQL so I may as well try Linux. That is some f'n insane HDD storage there man... If I were you I'd go with RAID 5. I would say RAID 6, but if one dies, you shouldn't have a problem buying another looking at the amount you spent the other day :D
 
Meh, no reason to upgrade the NICs :shrug:

I've been researching a ton, but I can't for the life of me figure out if the backplane on the Norco 4020 is a straight-through backplane or if it crosses over. I need to know what cable to order for my Percs!

Created a thread over at AVSForums, they had a huge thread on this case:

http://www.avsforum.com/avs-vb/showthread.php?p=16293970#post16293970
 
Nice project!

What OS for the file server?
Undecided at the moment. I'd like to go CentOS, but I had problems with the OS last time I tried it (wouldn't boot the live disc, etc). I did not try installing it. I had Fedora on it at one point, but I was beating my face on the keyboard more than when it had Windows.

Now that I got a little more experience with CentOS, I think I can get it up and running on that server.
 
If you are going to go with Linux, you should probably go with JFS or XFS. Both are B+ tree based, designed around a journal, and don't take huge amounts of space for overhead. JFS uses very little CPU in its transactions, and performs well no matter what type/size of file is being used. XFS performs better in large sequential accesses, and is capable of more I/Os.
 
Interesting. I'm not really looking for performance as this will be connected through a 10/100 connection for now and a gigabit connection in the future.
 
It's also worth noting that if an XFS operation is halted by something like an unclean shutdown in the middle of the operation, the FS pads the remaining portion of the file with the null character. The journal was also updated first, so the only way to check whether all files were actually copied would be to run a checksum on them.
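A quick sketch of that kind of post-crash check, assuming GNU coreutils are available; the directory and file names below are invented for illustration, not from anyone's actual setup:

```shell
# Build a checksum manifest before a big copy, then verify it afterwards
# (e.g. after an unclean shutdown) to catch truncated or null-padded files.
src=$(mktemp -d)                      # stand-in for the real media directory
printf 'movie data\n' > "$src/file1.mkv"
printf 'more data\n'  > "$src/file2.mkv"

# Record a checksum for every file under $src (manifest kept outside $src
# so it doesn't end up hashing itself)
manifest=$(mktemp)
( cd "$src" && find . -type f -exec sha256sum {} + ) > "$manifest"

# Later: verify nothing changed; sha256sum -c exits non-zero on any mismatch
( cd "$src" && sha256sum -c "$manifest" )
```

Running the verify step on a healthy copy prints an `OK` line per file; after a bad shutdown, any padded file shows up as `FAILED`.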

Also, CentOS developers aren't backporting JFS fixes to their kernel version (2.6.18), so any changes or bugfixes since then (a long *** time ago) aren't committed to that kernel. Because of this, you may want to use a newer kernel, something like 2.6.24, or a different FS. I wouldn't recommend Ext2/3/4 though; they aren't suited for this, and CentOS only officially supports Ext3.
 
Thid,

Are you building a FS to learn Linux, to learn a different OS or to have permanent storage?

The two Perc5s will be independent BUT you can enable mirroring via the OS.

If it is a Fileserver and NOT built for speed, go for redundancy via Raid 5 or similar.

What drives specifically are you using in the FS?

Do you currently have a gig NIC in place that is just running at 100mb?
 
Hi

Are you building a FS to learn Linux, to learn a different OS or to have permanent storage?
I'm pretty sure I can do almost anything with a little instruction, but I'm nowhere near a pro at it. So no, I'm not really in it to "learn" Linux, just for mass storage.

The two Perc5s will be independent BUT you can enable mirroring via the OS.
Not sure what you mean here, I want the discs separate between the cards.

If it is a Fileserver and NOT built for speed, go for redundancy via Raid 5 or similar.
Yup, already running 4 1tb discs in RAID5 for media.

What drives specifically are you using in the FS?
  • 4x 1tb Seagate 7200.11 for media (Currently RAID5)
  • 4x 1tb Western Digital Green for storage (Currently offline)
  • 8x 1tb Hitachi for something (Currently being shipped, unsure of use)
  • 1x 320gb Seagate 7200.10 for OS
Do you currently have a gig NIC in place that is just running at 100mb?
The board actually has two gig NICs that I can use.
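Side note on the RAID 5 sizing: one drive's worth of capacity goes to parity, so usable space works out to (N − 1) × drive size. A quick back-of-the-envelope using the nominal 1 TB sizes (real formatted capacity will come out a bit lower):

```shell
# RAID 5 usable capacity: one drive's worth of space is consumed by parity,
# so usable = (N - 1) * per-drive size. Sizes here are nominal TB.
drives=4        # e.g. the 4x 1 TB media array
size_tb=1
usable_tb=$(( (drives - 1) * size_tb ))
echo "${drives}x ${size_tb} TB in RAID 5 -> ${usable_tb} TB usable"
```

Same math says the incoming 8x 1 TB set would give 7 TB usable as a single RAID 5.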
 
Hi

I'm pretty sure I can do almost anything with a little instruction, but I'm nowhere near a pro at it. So no, I'm not really in it to "learn" Linux, just for mass storage.

Not sure what you mean here, I want the discs separate between the cards.

Yup, already running 4 1tb discs in RAID5 for media.

  • 4x 1tb Seagate 7200.11 for media (Currently RAID5)
  • 4x 1tb Western Digital Green for storage (Currently offline)
  • 8x 1tb Hitachi for something (Currently being shipped, unsure of use)
  • 1x 320gb Seagate 7200.10 for OS
The board actually has two gig NICs that I can use.

I would create a RAID 5 backup of the data that is absolutely crucial to keep, on a separate array if possible (a compressed image, ideally), separate from the Perc5s.

Be careful using drives that aren't enterprise class for critical data. You can run into issues during a rebuild using consumer-grade drives.

Reading over that thread on the AVS forum, the OP has a bit of confusion going on. You shouldn't ever mix-mode a machine like he is debating (sorry if it is your thread); quiet + storage array is dangerous at best. Gaming on a storage server is a bad plan as well; the additional heat output is bad for the data.

We don't go watercooling, high-flow air, or LN2 simply for fun. The OP is playing a dangerous game swapping the fans out for quieter, lower-CFM ones.
 
Be careful using drives that aren't enterprise class for critical data. You can run into issues during a rebuild using consumer-grade drives.
I'm very aware of what problems arise when the drive seeks too much. My father works for a government facility that houses huge amounts of data (not secret, they just store satellite images of the planet; Google Maps uses their data, etc). He was telling me that sometimes during mass loads, drives can report bad and be removed from the array. They test them after that and they work perfectly fine. It took them a while to figure out that the drives were seeking so hard and long that the arm was flexing during its seek and causing errors in the data! Unless I have a drive fail, I will never be hitting my RAID arrays that hard. Thank you for the heads up though.

Reading over that thread on the AVS forum, the OP has a bit of confusion going on. You shouldn't ever mix-mode a machine like he is debating (sorry if it is your thread); quiet + storage array is dangerous at best. Gaming on a storage server is a bad plan as well; the additional heat output is bad for the data.

We don't go watercooling, high-flow air, or LN2 simply for fun. The OP is playing a dangerous game swapping the fans out for quieter, lower-CFM ones.
Exactly. I've been trying to figure out what fans are used in the middle separator. I know the back is 80mm high-speed Deltas, but they never showed what the middle ones were.
 
Forget my previous posts. If you have Windows machines on your network, you should use NTFS & Samba so that the data is easily accessible to all machines, not just UNIX ones.
 
Forget my previous posts. If you have Windows machines on your network, you should use NTFS & Samba so that the data is easily accessible to all machines, not just UNIX ones.
Yup yup, I know :)

That is how I access my web server's website files.
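For anyone following along, a bare-bones Samba share for a setup like this could look something like the fragment below. The share name, path, and user are made-up placeholders, not details from this thread:

```ini
# /etc/samba/smb.conf (fragment) -- example values only
[media]
    # hypothetical mount point of the RAID 5 media array
    path = /srv/media
    browseable = yes
    read only = no
    # placeholder account; add it with: smbpasswd -a corey
    valid users = corey
```

After restarting the smb service, Windows boxes on the LAN can map the share as \\server\media.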
 
16 TB??:eek::drool: Man... I thought I was cool with 5x 500s :p Also, I have a Seagate 320 GB 7.2k drive if you'd like your OS mirrored.
 
Man my HTPC is developing a complex with only 3.25TB of space :p

I hate you Corey.

I just spent the last 45 minutes researching server OSes, and I don't even have a server (yet ;) )!
 