
calling all Cluster-Lizards in the house


none1

Member
Joined
Aug 12, 2002
Location
Chicago, IL
SERVER: dual Celeron 366 OCed to 550 on an Abit BP6 <old school OCing :)>, Mandrake 8.2 with the LTSP add-on, modified for SETI with rogerdugans' howto. It currently also acts as my main internet gateway/router (which feels uncomfortable, since LTSP isn't meant to be secure).

NODE 1: Compaq Celeron 300, OCed to 450, diskless boot from LTSP.

NODE 2: PCChips M810CMLR, floppy boot to LTSP (darn SiS LAN POS), Duron 850.

NODE 3: Gigabyte GA-8SIMLP, P4 1.8 OCed to 2400 MHz, complete diskless LTSP boot. PERFECT LTSP board: no cards, no keyboard, NONE.

NODE 4: Gigabyte GA-8SIMLP, P4 1.8 OCed to 2400 MHz, complete diskless LTSP boot (yeah, I got TWO).

NODE 5: MSI KT4VL, AMD 1600+ OCed to 1750 MHz, LTSP boot, no video, no keyboard, etc.

NODE 6: MSI K7D Master-L, 3.6 GB HD, boots Mandrake 8.2 and runs dual SETI, AMD XP 1600+ OCed to 1700 MHz.

NODE 7: Asus A7N266, boots WinMe, 6.4 GB HD, runs SetiQueue, AMD 1600+, NO OC :-( But it plays Warcraft III flawlessly. :)

NODE 8: Abit KT7A-RAID (revision 1.1), AMD XP 1600+ OCed to 1800 MHz (12 x 150), 160 GB RAID 0, WinMe. Yeah, it runs the XP fine, and it's real good at Unreal Tournament with a GeForce2 Pro card.

OCed old *****z still games wit da best. :)

NODE 9: old Sony VAIO SuperSlim P2 366 notebook, Win98.

NODE 10: older Toshiba Libretto, P166, Win98, 60 hours per SETI WU. Woot woot, need all the WUs I can get. GO TEAM JAZZ :)

One Acerlink 8-port 10/100 switch and one Linksys 4-port switch. Maxed out my ports; I still need to install a refurbed SMC Barricade firewall/router/8-port switch, and then find 7 more nodes. :) Big Sacramento, CA computer show this weekend, wish me luck on finding more nodes :)

NOTE: NO CASES, NONE (except for the Compaq and the notebooks)
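
With this many headless, caseless boxes, it helps to check on them from the server instead of dragging a monitor around. Here is a minimal Python sketch of that kind of progress check; the SETI_ROOT path, the node names, and the assumption that the classic client's state.sah has a prog= line are all placeholders for illustration, not details of none1's actual setup:

import os

# Assumed layout: one SETI work directory per node, living on the LTSP server's disk.
# SETI_ROOT, the node names, and the state.sah "prog=" line are assumptions; adjust to taste.
SETI_ROOT = "/opt/ltsp/seti"
NODES = ["node1", "node2", "node3", "node4", "node5"]

def read_progress(node):
    """Return the fraction done from the classic client's state.sah, or None."""
    state = os.path.join(SETI_ROOT, node, "state.sah")
    try:
        for line in open(state):
            if line.startswith("prog="):
                return float(line.split("=", 1)[1])
    except (IOError, ValueError):
        return None
    return None

for node in NODES:
    prog = read_progress(node)
    if prog is None:
        print("%-8s no progress found (node down, or between work units?)" % node)
    else:
        print("%-8s %5.1f%% of current work unit" % (node, prog * 100))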
 
OP
#Rotor

Member
Joined
Sep 29, 2002
Location
Bradford Ontario Canada
What cluster software are you using? I'd love to be able to get a Windows box to share in the cluster, but alas, I have yet to find one that can actually do that.

none1 said:
SERVER, dual Celeron 366 OCed to 550, Abit BP6... (full node list quoted above)
 

rogerdugans

Linux challenged Senior, not that it stops me...
Joined
Dec 28, 2001
Location
Corner of No and Where
The Linux Terminal Server Project is intended to do the same thing with Linux that Windows Terminal Server does with Windows: it allows thin clients to network boot and use the server's resources for processing.

The K12LTSP version I use in particular can be downloaded as 3 .iso images with the LTSP stuff built right into Red Hat 7.3, which is perfect for Linux noobs (like me) and for schools, the intended audience... K12 being kindergarten through high school.

It's not a cluster in the true sense of clustered processing or even clustered storage.

We use the term "diskless cluster" though, because it is a cluster of computers running without hard drives :)

K12LTSP Site
 

cmcquistion

IT Director Senior
Joined
Oct 15, 2001
Location
Tennessee
I'm working on a folding cluster, but I'm not sure if it is a true cluster.

Here's my idea. I've got about seven computer systems, each with a motherboard, CPU, RAM, hard drive, video card, NIC, and PSU. No case, no floppy, or anything like that. I'm going to mount all of the systems on stackable trays and share one monitor between them (when I need it). The idea is for all of them to run independently, but all for the same cause: folding proteins 24/7. I have already installed Windows 98 Lite and the Folding@Home client on all of the machines, and I will install TightVNC on them so I can monitor the whole system from my office (I'll use EMIII, also).
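
One quick way to see which of those boxes are still alive, without opening a VNC session to each one, is to probe the VNC port from the office machine. A rough Python sketch, assuming TightVNC is listening on its default display port (5900) and using made-up hostnames:

import socket

# Made-up hostnames for the seven folding boxes; substitute the real names or IPs.
NODES = ["fold1", "fold2", "fold3", "fold4", "fold5", "fold6", "fold7"]
VNC_PORT = 5900  # TightVNC's default port for display :0

for node in NODES:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect((node, VNC_PORT))
        print("%-8s up (VNC answering)" % node)
    except (socket.error, socket.timeout):
        print("%-8s not answering (check that box)" % node)
    finally:
        s.close()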

Is this a true cluster? Anybody got any tips for me? I'm kinda feeling my way around here.
 
OP
#Rotor

Member
Joined
Sep 29, 2002
Location
Bradford Ontario Canada
There are quite a few of them. Beowulf is the BIG BOY: very powerful, but with the major drawback of needing applications to be specially written for it. Kinda like apps that weren't written to use SMP...

Mosix is by far the most versatile; it runs on Linux and is a true hard-core cluster, or what they call a load-management layer.


Clusters for Windows are hard to get, and they are expensive, as in $$$$$$$$$$. However, Win2k AS apparently has clustering built in (don't quote me on this...), but once again, not just any app will be able to use it.
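
To make the "apps not using SMP" comparison concrete: a program only gains from extra CPUs, whether SMP or Beowulf nodes, if it is written to split its own work into pieces. Here is a tiny Python illustration of that idea using the standard multiprocessing module; it only shows the principle and is not Beowulf- or Mosix-specific code:

from multiprocessing import Pool

def crunch(chunk):
    """Stand-in for real work: sum of squares over one slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000000))
    # The program itself has to decide how to split the job;
    # neither SMP nor a Beowulf-style cluster does that for you.
    chunks = [data[i::4] for i in range(4)]
    pool = Pool(processes=4)
    partial = pool.map(crunch, chunks)  # each chunk can run on its own CPU
    pool.close()
    pool.join()
    print("total:", sum(partial))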
 

cmcquistion

IT Director Senior
Joined
Oct 15, 2001
Location
Tennessee
Hmmm, after reading more about the hardware requirements and such, I think I may be better off with my Windows-based cluster.

For example, I only need 32 MB of RAM in each station, a 10BaseT NIC, a cheap old 10BaseT hub, and a small hard drive, and I don't have to have a dedicated server for my Windows solution.

With Mosix, I would need 64 MB of RAM, 100BaseT NICs, a 100BaseT network switch, a floppy drive, and a dedicated server.

If I were starting from scratch, I would probably go with Mosix, but I already have the hardware I need for my Windows-based not-quite-true-cluster.
 

rogerdugans

Linux challenged Senior, not that it stops me...
Joined
Dec 28, 2001
Location
Corner of No and Where
I am far from knowledgeable about Mosix or process clustering, but I do know that distributed computing projects like SETI and Folding@home do NOT get anything from process clustering. That's why some of us are using the Linux Terminal Server Project software, slightly modified ;)

By default, LTSP uses the server for drive space and all processing. With the diskless cluster setup that is used for DC projects, you modify it to run programs locally while still using the server's hard drive. Works quite well :D

Individual work units do NOT get processed any faster, but you can get a boatload of computers running with just one hard drive.
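
As a rough illustration of the "run it locally, store it on the server" idea, a script on the server could start the DC client over ssh in each node's server-hosted directory. The node names, directory layout, and client command in this Python sketch are assumptions for illustration, not details from the actual howto:

import subprocess

# Placeholder node names and per-node work directories on the server's disk.
NODES = ["ws001", "ws002", "ws003"]
WORKDIR = "/opt/ltsp/dc/%s"       # assumed to be NFS-mounted on every node
CLIENT = "./setiathome -nice 19"  # whichever DC client the nodes should run

for node in NODES:
    # cd into the node's own directory (which lives on the server's drive)
    # and start the client in the background on that node.
    cmd = "cd %s && nohup %s >/dev/null 2>&1 &" % (WORKDIR % node, CLIENT)
    ret = subprocess.call(["ssh", node, cmd])
    print("%-6s %s" % (node, "started" if ret == 0 else "ssh failed"))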
 

Papsomax

Member
Joined
Dec 28, 2003
Location
Kansas City area
Here is a link to a thread I posted under the folding team. This is what I call my Farm 1. The intent is to go with one server running ClusterOverclockix (or ClusterKnoppix) and have the clients boot via PXE with LTSP. Right now, as you can see, they all have their own OS. All boxen are OC'd, but due to folding I have had to back the OC down on all of the systems for stability reasons. Right now I use EMIII for folding monitoring, and all boxes are on a KVM switch.

Farm 2 will be a pure farm with a server and clients. I'm looking at 20 layers altogether (the plexi unit will hold 7 easily, and I have another unit for the remaining). I will be using one PSU per 2 mobos (I use a PSU cable splitter). As of now, I have a 1.3 and a 1.6 Duron, 3x P4 2.8E, and 1x Athlon 2200 (this will be the server). Since this will be a dedicated folding cluster only, I will probably use P4Es for the remaining layers. I will set up each layer with a keyboard and monitor first to OC it, then will monitor via VNC or Samba or Karamba. I will have a 2-port KVM switch as a backup.

Hooah!