
Building A Server... Need Input/Help!!!!

I just called him, and he said the previous server was roughly 50k and he didn't even blink. I am going to shoot for about 50k with a little adjustment here and there.
 
dark_15 said:
I just called him, and he said the previous server was roughly 50k and he didn't even blink. I am going to shoot for about 50k with a little adjustment here and there.

For 50k I can spec you out a monster. Why get one when you can get a few? Load-sharing servers are probably the most efficient way to do it anyway. I'd look into the IBM blade servers too, 1U Xeons. They often send you 10 blades even if you only order, say, 6, and you can call and have the others enabled remotely as the company grows.

YGPM also.
 
Didn't read everything above, but if you can put your SQL database on a separate server from your applications, this should help performance considerably.
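To illustrate the split, here's a minimal sketch (Python with pyodbc; the hostname, database, and table names are hypothetical placeholders) of the application box pointing at a dedicated SQL box instead of a local instance:

```python
# Minimal sketch: the application server talks to SQL Server running on its
# own box over the LAN instead of a local instance.
# Hostname, database, and table names are hypothetical placeholders.
import pyodbc

DB_SERVER = "sqlbox01"          # hypothetical hostname of the dedicated SQL box
DB_NAME = "engineering_files"   # hypothetical database name

conn = pyodbc.connect(
    f"DRIVER={{SQL Server}};SERVER={DB_SERVER};DATABASE={DB_NAME};"
    "Trusted_Connection=yes"    # or UID=/PWD= if not using Windows auth
)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM documents")  # hypothetical table
print(cursor.fetchone()[0])
conn.close()
```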
 
K, well at least he's ready to shell out... as PM said, the server itself is easy; it's the SAN disk that's gonna be something. I'm not sure how SANs work exactly, how you can expand them, and whatnot.
Most rack-mount SAN disk shelves have 10-14 drive bays (80-pin SCSI is all you'll find).
RAID 5 is the only thing to run... usually each shelf will have 1 drive mounted as a spare (sometimes 2) and the rest in the RAID 5 array. If one goes down, the controller sees it and starts its parity check, rebuilding what was on the broken drive onto the spare. Some will send a message to the admin, some won't... where I worked they didn't, but we also had 1200+ servers, so we had people hired to go around looking for bad drives and replacing them.
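Quick back-of-the-envelope sketch of what that layout gives you (plain Python; drive count and size are just example figures matching the 14-bay shelves mentioned above):

```python
# Back-of-the-envelope capacity for a RAID 5 shelf with hot spares.
# Example figures only: a 14-bay shelf of 146 GB drives with 1 hot spare.
bays = 14
drive_gb = 146
hot_spares = 1

array_drives = bays - hot_spares           # drives actually in the RAID 5 set
usable_gb = (array_drives - 1) * drive_gb  # RAID 5 gives up one drive to parity

print(f"{array_drives} drives in the array, ~{usable_gb} GB usable")
# -> 13 drives in the array, ~1752 GB usable
```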
 
Xaotic said:
This needs to be a 5 9s server and the data has to be protected. SATA does not have parity checking for written data, SCSI does, as well as intelligent bad sector remapping.
Isn't that the same as "dynamic sector repair"? By rewriting unreadable data?

Maybe I'm mistaken but I thought the new 9xxx series 3ware cards could.


HMMMMMMMMMMMM
 
/thread passes over 7's head

How'd you guys get so damn good/how'd you get these jobs?
Doing this sounds like fun, especially building a monster like this.

Let me just say, I've seen some pretty pro IT configs, and racking it looks like the best idea. I've seen the fiber connection this office makes to the intarweb, and there's only one. It's like an OC-192 connection, just nonchalantly plugged into the wall. Depending on how big these files are, you're going to need some SERIOUS bandwidth. I'm not sure what your office networking status is, but I'm sure you guys probably have Cat 5 installed in the wall. Gigabit Ethernet might be another idea to consider if you really need that much speed and it's worth the trouble.
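To get a rough feel for what "serious bandwidth" means here, a quick sketch (plain Python; the 2 GB file size is just an example figure) comparing 100 Mbit against gigabit:

```python
# Rough transfer-time comparison; ignores protocol overhead and disk limits.
file_gb = 2.0                      # example file size
file_bits = file_gb * 8 * 1024**3  # GB -> bits

for name, link_bps in [("100 Mbit", 100e6), ("Gigabit", 1e9)]:
    seconds = file_bits / link_bps
    print(f"{name}: ~{seconds:.0f} s per {file_gb:g} GB file")
# 100 Mbit: ~172 s per file, Gigabit: ~17 s (best case, wire speed)
```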

I remember checking out the rackmount config at my mom's office (UCLA). It's absolutely incredible the amount of stuff they have in there. They showed me this thing, about 8U I'd say, that's like an HDD storage array (clear case). Every 3 hours or so, it scans the barcode on the back of an HDD, pulls it out, backs it up, and sticks it back in. SOOOO awesome. You might want to check into something like this if you're going to have as many people modifying files as I think.

Rackmount: I'd go with it. I mean, you're not getting much more expandable than a rackmount stuck in a room. You can easily add new things as they come out, new layers, etc. You could theoretically retrofit your system 5 years down the line if you need to.

P.S.: Have your boss buy a BlueGENE from IBM. For the low low cost of $5mil, you can become the #1 folder in the world :D

7
 
Just shoot said:
Isn't that the same as "dynamic sector repair"? By rewriting unreadable data?

Maybe I'm mistaken but I thought the new 9xxx series 3ware cards could.


HMMMMMMMMMMMM

I hadn't heard of that feature, but unless they've done something about the speed issues, I'd still avoid them. Their products are great for a lot of things, but speed demons they are not for some widely used RAID levels.

I'll have to look at the white papers on the cards to see how they've implemented the solution. If it's a true read-behind function, then they may have solved one of my remaining qualms about IDE for mass storage. I despise soft errors, and most IDE manufacturers spec to 8x10^-8. While that's usually fine, I just cannot trust data that is not verified, and I'm willing to pay an overhead penalty to resolve the issue.
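Rough sketch of why unverified reads bother me (plain Python; since the spec above doesn't spell out units, the 1-error-per-10^14-bits figure below is plugged in as an assumption, being the class of unrecoverable-read-error spec commonly quoted for IDE drives):

```python
# Expected unrecoverable read errors when reading a full array once.
# The error rate is an assumed figure (1 error per 1e14 bits read), not a
# measured value; the point is that it is non-negligible at array scale.
error_rate_per_bit = 1e-14
array_tb = 1.75                    # e.g. a ~1.75 TB usable RAID 5 array
bits_read = array_tb * 1e12 * 8    # TB -> bits (decimal TB)

expected_errors = bits_read * error_rate_per_bit
print(f"~{expected_errors:.2f} expected unrecoverable errors per full read")
# ~0.14 -> roughly a 1-in-7 chance of hitting a bad read on every full pass
```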
 
Wow, 50K wasn't the budget I had in mind... that will definitely change my recommendations lol.

Here's the big question... do you want to save some money by building it yourself, or have a company be responsible for the repairs, etc.? IMO that is the first decision you have to make, and frankly it should be your boss's decision, since it has significant monetary ramifications.

That being said, I would build 3 servers... a true enterprise solution: two servers to run SQL load balanced, and a third as your datastore, connected via a fiber card. This will save you money and headaches, since you will avoid duplicating your datasource as well as the nightmare of real-time replication across both databases (depending on your RDBMS, this can be next to impossible).

I would go Solaris or AIX obviously, leaning more towards AIX for this solution. Also, if you're going to spend 50K, I'd want a hot-hot or, at a minimum, hot-warm failover scenario with 2 datastores. This is going to require some technical prowess to set up, but if one of your RAID disks fails, you'll be thankful. Plus it gives you another source of redundancy, and it does not require real-time replication.
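From the application side, hot-warm failover basically looks like this sketch (plain Python with hypothetical hostnames and port; in a real build the cluster/SAN layer would handle failover, this just illustrates the idea):

```python
# Hot-warm failover from the client's point of view: use the primary
# datastore if it answers, otherwise fall back to the warm standby.
# Hostnames and port are hypothetical placeholders.
import socket

PRIMARY = ("datastore-primary", 5000)   # hot node
STANDBY = ("datastore-standby", 5000)   # warm node

def pick_datastore(timeout=3.0):
    for host, port in (PRIMARY, STANDBY):
        try:
            # First node that accepts a TCP connection wins.
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            continue                    # unreachable -> try the standby
    raise RuntimeError("neither datastore is reachable")
```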

It looks like you're in good hands with fishy, but if you have any questions shoot me a PM.
 
Also, whatever you build, if you do build it from parts, you're going to need a kick-ass PSU for 4 and more HDDs. And since you want something you can trust, I would go with a PSU from http://www.pcpowercooling.com... you don't want your PSU to die on this thing.
 
Alright, here's what I've got going down:

2 Servers... and in each one:
Asus NCCH-DL
2x Xeon 3.2 GHz
2x 1GB sticks of Corsair RAM
2x Seagate Cheetah 37GB 15k SCSI
Adaptec U320 RAID Card - RAID 1 configuration
Chenbro EATX Case
PCP&C 510XE Turbo Cool
And some cheap video card...

These two servers will be running Windows Server 2k3 and MS SQL 2000. They will be load balanced...

For the storage, I have two options:
Option 1 - DIY!!!!
QLogic SAN Kit
IBM TotalStorage DS400
Seagate Cheetah 15K.4 146GB 80pin U320-SCSI 15,000RPM Hard Drive - I will need all 14 drives...

Option 2 - Let HP do the legwork...
HP StorageWorks Modular Smart Array 1000 Small Business SAN Kit
One minor problem... drives are proprietary - only HP Brand Drives work in this thing.


Now I have 4 questions...
What storage option would you choose? My boss has told me if it's going to be thousands of dollars cheaper, he would go with a custom setup...
And if you do not like the IBM box, where else should I look?
And also, do you like what you see on the servers?
Finally, how do I configure load balancing... or perhaps can someone give me a crash course in it? Is it software or hardware?? Or both?
 
I think for this price you need to consider a Tyan / DFI / SuperMicro server board with built-in SCSI controllers - I just don't see Asus as making a "true" server board like the above companies.
 
^^^ As above while I was typing this out.

I would strongly recommend not going with a desktop chipset board. Running load-balanced pairs is recommended, but you need to run stable, proven server chipsets. The Tyan from the preceding page would be better suited for use, though rather limited in its I/O capability. A single 64/66 PCI bus may be a limitation on the server with the 875 chipset; multiple peer PCI-X buses are far more suitable. For building your own servers, Tyan or Supermicro boards are intended for exactly these purposes. They do not overclock or have many BIOS options, but they are built for stability under load and reliability.
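To put numbers on the bus comment, peak theoretical PCI bandwidth is just bus width times clock (plain Python; real-world throughput is lower, but the ratio is the point):

```python
# Peak theoretical bandwidth of a PCI/PCI-X bus: width (bytes) x clock rate.
def pci_peak_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz   # MB/s, decimal

print(f"64-bit/66 MHz PCI   : ~{pci_peak_mb_s(64, 66):.0f} MB/s, shared by everything on the bus")
print(f"64-bit/133 MHz PCI-X: ~{pci_peak_mb_s(64, 133):.0f} MB/s, per peer bus")
# ~528 MB/s shared on the desktop-chipset board vs ~1064 MB/s per PCI-X bus
```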

Registered ECC is also far more suited to server environments. Another reason to stay away from desktop chipsets.

The Adaptec is a noncaching host RAID controller and usually has subpar performance. The Intel or LSI are infinitely better cards, though significantly more expensive.

If I can get clear tonight, I'll see which boards and SAN options look like a better choice. Projects are trying to kill me right now.
 
I didn't think the Asus board would fly - besides, I need the PCI-X 133 MHz bus instead of the 66 MHz bus on the Asus board for the fiber cards. Do I really need to purchase a separate hardware RAID controller card for a basic RAID 1 array that will be used by the OS? Or would something integrated work just as well?

And also, I know how you feel about projects trying to kill you... thanks for the help!
 
I've used this Tyan mobo. It's not cheap, but the onboard RAID has worked well for the operating system. I've had two running for about a month now and they have been rock solid. The major drawback is they don't have a lot of upgrade options, with only two slots. Onboard video and dual 10/100/1000 LAN... what more could you ask for?
 
Depending on which onboard U320 SCSI is present (and it will be present), you should be able to use a zero-channel RAID card on one of the multiple peer PCI buses. Software RAID is also one of those things to be avoided like the plague.

The fiber card would most likely run, but sharing the PCI bus with anything else at that point is a bad idea.
 
I don't see why the 875P is a bad chipset. Sure, it's a desktop chipset, but it's also 30% less expensive than the server chipsets out there for the Xeons. It's a very mature chipset, and faster than its server brother. I wouldn't count it out at all. Maybe the ASUS MB is a bad pick, but it's still solid as a rock.

It comes down to how much you really want to spend. I don't see any problems with this 'workstation' MB being used for a 5x8 server for 25 people. It's not like it's going to be running a mission-critical 100,000-person company.
 
I don't see it as a bad chipset, but take it straight from Intel: the 875P chipset is intended as an entry-level workstation/PC chipset for a single processor:

http://www.intel.com/design/chipsets/linecard/svr_wkstn.htm

Yes, the buses and some of the architecture have been changed to allow the use of dual Xeons, but it remains, in my eyes at least, a desktop chipset. Server chipsets tend to be overengineered for stability, while this chipset is probably very close to its limits. That's the logic behind my reasoning, and many other system engineers tend to be very conservative about adopting hardware.
 