Think: when you fold in 2D or are away, your graphics card is just sitting there doing nothing but consuming power and dissipating heat. Wouldn't it be neat if one could devise a program to make use of a graphics card's core logic? Think of all the plusses: lots of memory bandwidth between the GPU core and its video memory, deep pipelining, and plenty of framebuffer memory to store/cache the data.
I suppose the only limitation is that a GPU is really not a CPU, and while it could process data, the data would have to be sent over in some kind of packed format and computed using the vertex shader (or the pixel shader for that matter, since it's more precise). The results could then be sent back across the AGP bus and exchanged between the CPU and memory.
To me it just seems that many operations in folding could be run through the vertex shader (to help with morphing computations) instead of all being done by the CPU.
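To make that round trip concrete, here is a rough sketch written in CUDA, a GPU compute API that postdates this post; back in the vertex/pixel-shader days you'd have done the same thing by packing data into textures and rendering a quad. The kernel, the array names, and the toy "nudge each position by a scaled force" step are all hypothetical, just to illustrate the pattern of shipping data to the card, computing on it in parallel, and shipping the results back.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical per-element step: nudge each position by a scaled force.
// Stands in for whatever math a folding core would actually run.
__global__ void nudgePositions(const float* pos, const float* force,
                               float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = pos[i] + 0.001f * force[i];
}

int main(void) {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    float* h_pos   = (float*)malloc(bytes);
    float* h_force = (float*)malloc(bytes);
    float* h_out   = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_pos[i] = 1.0f; h_force[i] = 2.0f; }

    float *d_pos, *d_force, *d_out;
    cudaMalloc(&d_pos, bytes);
    cudaMalloc(&d_force, bytes);
    cudaMalloc(&d_out, bytes);

    // The "packed format" step: push the arrays across the bus into GPU memory.
    cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_force, h_force, bytes, cudaMemcpyHostToDevice);

    // Let the GPU's parallel units chew on every element at once.
    nudgePositions<<<(n + 255) / 256, 256>>>(d_pos, d_force, d_out, n);

    // Pull the results back across the bus for the CPU to use.
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f (expect 1.002)\n", h_out[0]);

    cudaFree(d_pos); cudaFree(d_force); cudaFree(d_out);
    free(h_pos); free(h_force); free(h_out);
    return 0;
}

The copies on either end are exactly the bus tax worried about above; the offload only pays off if the math in the middle is heavy enough to amortize them.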
I am no expert in how the code works and as such am unaware of the feasibility of such a thing. It could, however, dramatically increase the folding speed on many systems. And it means one less piece of silicon that's just sitting there doing nothing.
Good use for a card eh?