Rather sooner than expected, we have a good-enough-for-now explanation from one of the participants as to what they expect short-term from CGPUs.
TGDaily has an article on the subject which includes what is at least an informed opinion by the marketing architect for AMD.
The article is well worth reading, but what the AMD person basically said was that Fusion was meant to appeal to two widely different audiences: the budget PC market on one end, and scientific/high-performance computing on the other.
These are both limited, reasonable goals aimed at markets that are either big and/or lucrative enough to justify the effort.
The AMD person was quick to point out that Fusion wouldn’t kill the video card star any time soon, and as far as CPUs are concerned:
“I do not think that GPU functionality will become a standard feature in our CPUs. It is probably too early to tell and we will see how software will evolve. Based on the horizon I am seeing, there is still a need for discrete CPUs and GPUs.”
Eventually, this concept could move more to the mainstream, but the general attitude was, “Let’s do this first, and then we’ll see where we can go from there.”
This looks to be a very clever “bang for the buck” move by AMD. It’s good precisely because they don’t expect to save the world with it, or think this is for everybody. They have limited (but substantial) initial objectives, and it’s pretty likely they’ll meet them.
There will probably be some technical challenges, but they’re not really building a true CGPU; it’s more like making video the next-door neighbor.
That’s not terribly radical on technical grounds, nor should it be even terribly risky on financial/marketing grounds. It’s a fairly limited move, yet still one that should offer real advantages to the targeted groups. CPU architect snobs will probably find the first few iterations insufficiently elegant, but who cares?
If such a chip can chop $50 or more off the price of a cheap box, the OEMs certainly won’t mind, and it could give the Semprons some big advantages over the Celerons.
On the scientific side, well, it’s not like those folks go to CompUSA to buy software for their clusters, or sit around waiting for XTremeScience 8.0 to come out. If they can get a CPU that can do five times more of what they need to do for about the same price, they’ll be happy to write the software for it.
Frankly, it makes a lot more sense than things like jamming quad-cores down Grandma’s throat in a couple of years.