- Joined: Aug 5, 2003
- Location: Scarsdale, New York
This question came up, so I inquired on the Berkeley board here.
I was directed to a link on the Berkeley FAQ that answers the question of how long work units can be kept before being returned. I understand why there is some confusion, as it isn't written very clearly. It seems there is no fixed time limit. Duplicate work units are issued for error checking, and the longer you keep a work unit, the greater the chance that an error-checked result will already have been returned to Berkeley.
In short, you shouldn't worry about it. Duplicate work units are deliberate and part of the process. One month is acceptable to Berkeley, but two days is better from their standpoint.
Here is the link.
Here is the direct quote:
What is an "optimal" cache size to avoid returning obsolete results?
Caching is now more popular than ever. (See our add-ons page for some of the more popular caching programs.) In general, a 1-month cache isn't bad in terms of redundant results, but a 7-day cache is much better. An "optimal" cache (minimizing the likelihood that a result will have already passed integrity testing before you return it), however, shouldn't hold more than 2 days of workunits. In the past, 1 month was pretty safe, but the ramifications of Moore's Law have made the cache window much smaller. Keep in mind, of course, that excepting the above case, redundancy is extremely important for testing the integrity of our data. Interestingly, even without caching, users on average will receive a duplicate workunit (one they've processed before) about once every 500 times.
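To get a feel for why a 2-day cache beats a 7-day or 1-month one, here is a toy back-of-envelope sketch. It assumes the redundant copy's turnaround time is exponentially distributed with a made-up mean of 7 days; neither the model nor that number comes from Berkeley, it just illustrates how the odds of returning an already-verified result grow with cache age.

```python
import math

def p_obsolete(cache_days, mean_turnaround_days=7.0):
    """Rough probability that the redundant copy of a workunit has
    already been returned before you finish yours.

    Toy model: other users' turnaround times are exponentially
    distributed with the given mean (an assumption for illustration,
    not Berkeley's actual distribution)."""
    return 1.0 - math.exp(-cache_days / mean_turnaround_days)

# Compare the cache sizes mentioned in the FAQ quote above.
for days in (2, 7, 30):
    print(f"{days:2d}-day cache: ~{p_obsolete(days):.0%} chance "
          f"the result is already verified")
```

Even with generous assumptions, the probability climbs steeply between 2 days and a month, which matches the FAQ's advice that a small cache minimizes wasted (already-verified) returns.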