I saw this over at [H].
Basically Stanford are thinking of starting a distributed storage project to find 1PB (that's a petabyte) of cheap, redundant, scalable and fast storage for Folding@Home data (they generate around 2TB of it a month).
The basics seem to be that you tell them what sort of line you're on, where you are, and how much space they can use on your machine. They do some sort of striping to ensure redundancy, with each chunk of data spread across ten hosts. They reckon 4 of the 10 hosts could fail and the data would still not be lost (see the rough sketch below).
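The paper will have the exact scheme, but a ten-host layout that survives any four failures sounds like a (10, 6) erasure code: 6 data shards plus 4 parity shards, any 6 of which can rebuild the block. A minimal sketch of the storage math, assuming those numbers (they're my guess, not from the paper):

```python
# Hypothetical (n=10, k=6) shard layout: each host stores 1/k of a block,
# and any k of the n hosts are enough to reconstruct it.
# All figures here are illustrative assumptions, not Stanford's design.

def shard_layout(block_size_mb: float, n_hosts: int = 10, k_data: int = 6):
    """Per-host shard size, raw overhead, and failure tolerance of an (n, k) code."""
    shard_mb = block_size_mb / k_data        # each host holds 1/k of the block
    total_mb = shard_mb * n_hosts            # raw bytes spread over all hosts
    overhead = total_mb / block_size_mb      # 10/6 ~= 1.67x, vs. 10x for full copies
    tolerated_failures = n_hosts - k_data    # any 4 hosts can vanish
    return shard_mb, overhead, tolerated_failures

if __name__ == "__main__":
    shard, overhead, failures = shard_layout(100.0)  # e.g. a 100 MB work-unit archive
    print(f"per-host shard: {shard:.1f} MB, "
          f"overhead: {overhead:.2f}x, survives {failures} host failures")
```

That overhead is why striping beats simply keeping ten full copies of everything.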
Points would be assigned to participants. The scoring is a minor issue at this stage, but the basic idea is points per unit of storage, with penalties for downtime, failures, etc.
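Something along these lines, presumably (the multipliers are made up purely for illustration):

```python
# Rough sketch of the scoring idea: points proportional to storage served,
# scaled by uptime, minus a penalty for failed retrievals.
# points_per_gb and failure_penalty are invented values, not the project's.

def daily_points(gb_stored: float, uptime_fraction: float, failed_fetches: int,
                 points_per_gb: float = 1.0, failure_penalty: float = 5.0) -> float:
    base = gb_stored * points_per_gb * uptime_fraction
    return max(0.0, base - failed_fetches * failure_penalty)

print(daily_points(gb_stored=50, uptime_fraction=0.98, failed_fetches=1))
```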
Posted at [H] here: http://www.hardforum.com/showthread.php?t=1306850
Paper on the subject: http://www.stanford.edu/~beberg/[email protected]