
Insanity...3 GPU's on one card!

The larger surface would also come with more working parts, so the surface area wouldn't compensate for the extra heat, and clock speeds would most likely suffer to avoid hotspots and weak spots.
As was mentioned before, faulty chips would be costly. I wouldn't be surprised if multi-die cards soon appear, similar to Intel's C2Q: several smaller dice in one package making up the chip on the card.
That would probably result in better yields and smaller cards than the 3870X2.
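For what it's worth, here's a rough way to see why smaller dice help yields. It assumes randomly scattered defects (a simple Poisson-style model), and the numbers are made up purely for illustration, not real process data:

#include <math.h>
#include <stdio.h>

/* Simple Poisson yield model: P(die has zero defects) = exp(-defects_per_cm2 * die_area).
   All figures below are invented just to show the trend. */
int main(void)
{
    double defects_per_cm2 = 0.5;
    double big_die_cm2     = 4.0;               /* one monolithic die    */
    double small_die_cm2   = big_die_cm2 / 4.0; /* four dice per package */

    double yield_big   = exp(-defects_per_cm2 * big_die_cm2);
    double yield_small = exp(-defects_per_cm2 * small_die_cm2);

    printf("Monolithic die yield  : %.1f%%\n", 100.0 * yield_big);
    printf("Quarter-size die yield: %.1f%%\n", 100.0 * yield_small);
    /* With small dice you only throw away the defective quarter,
       not the whole chip, so a faulty die costs far less. */
    return 0;
}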
 

And maybe then breaking the 1GHz GPU clock would be the norm:eek: I always wonder what goes on deep inside the R&D labs. They seem to blame the CPU for the bottleneck on these latest cards. Maybe it's the GPU clock that's far too slow. If proper C2D/Q-style GPU chips are far too expensive to manufacture, then they should at least make a couple for R&D and tease us with the benches they get! I could imagine 1.8GHz+ GPU clocks with GDDR4/5 being pretty sick:attn:
 

Who blames the CPU for bottlenecking newer GPUs? From the benchmarks I've seen, playing at a decent resolution in newer games it's always the GPU holding things back (assuming you're running a 3GHz+ chip).

There's no magic wand for GPU performance; the people in those R&D labs know more than any of us. There are two ways to increase performance: increase parallelism, and increase clock speed. The more complex a chip is, the harder it is to get it to clock high. However, parallelism requires more complexity, either in the form of more cores or more complex cores. To have high-clocked GPUs of the type you mention, ATI/Nvidia would have to either simplify their GPU cores or wait for better manufacturing technology. They can't just magically make a 2GHz GPU core.
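As a very rough sketch of that trade-off (every number below is invented, not the spec of any real card): peak throughput is roughly units x ops-per-clock x clock, which is why a wide chip at a modest clock can still out-muscle a narrow chip pushed to a high clock.

#include <stdio.h>

/* Back-of-the-envelope throughput model. Numbers are made up
   purely to illustrate width vs. clock, not real GPU specs. */
int main(void)
{
    /* Design A: wide and "slow" -- many shader units at a modest clock */
    double wide_units    = 128.0, wide_clock_hz   = 0.6e9;
    /* Design B: narrow and fast -- few units at a very high clock      */
    double narrow_units  = 16.0,  narrow_clock_hz = 1.8e9;
    double ops_per_unit_per_clock = 2.0;  /* e.g. one multiply-add */

    printf("Wide/slow  : %.1f GFLOPS\n",
           wide_units * ops_per_unit_per_clock * wide_clock_hz / 1e9);
    printf("Narrow/fast: %.1f GFLOPS\n",
           narrow_units * ops_per_unit_per_clock * narrow_clock_hz / 1e9);
    return 0;
}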
 
There's no magic wand for GPU performance .....They can't just magically make a 2GHz GPU core.

I never mentioned anything about magic wands, magic, or an instant time frame now, did I? Did you read that post upside down? lol 2GHz as well? Pushing it, aren't we? I only mentioned 1.8GHz for the imaginative future :drool: j/k:p

..the people in those R&D labs know more than any of us....

^^Obviously. But what's your point? I know my end point was purely stating a futuristic, yet comical, idea, hence the futuristic clock speeds and the tech-teasing idea haha

The other half of the reply was great though, hands down. I learned something from that. Cheers:beer: When you mentioned increasing parallelism, would that increase the number of instances swallowed up by synthesis tools?
 

I'm sorry, I interpreted your posts as asking why GPU makers don't make highly clocked chips, as if they simply didn't want to. I apologize for coming across harsh.

I'm not sure what you mean about having instances swallowed up by synthesis tools. Most tools will optimize out redundant parts, but they are fairly smart when it comes to distinguishing parts that are truly unnecessary from ones that are there to increase parallelism. This is especially true when a skilled engineer is at the helm.

When you talk about increasing parallelism that doesn't involve adding more cores, we are talking about instruction-level parallelism (ILP). Strategies used to add ILP include pipelining, out-of-order execution, and enhanced instruction sets like SSE and MMX.
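A toy illustration of ILP in plain C (my own made-up functions, not anything from a real driver or library): the second version keeps four independent running sums, so a pipelined or out-of-order core can overlap the multiply-adds instead of waiting on one long dependency chain.

#include <stddef.h>

/* One long dependency chain: every add waits on the previous one,
   so a deeply pipelined core spends most of its time stalled. */
float dot_serial(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}

/* Four independent accumulators: iterations no longer depend on each
   other, so an out-of-order core can execute them in parallel
   (more ILP at the same clock speed). */
float dot_unrolled(const float *a, const float *b, size_t n)
{
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; ++i)  /* leftover elements */
        s0 += a[i] * b[i];
    return (s0 + s1) + (s2 + s3);
}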
 

Oh, no problem, I meant no harm either way, I was just having a laugh 'n' giggle:bday:. Anyhow, the way you explained that makes me realise why the GPU is a bit of a complicated piece of tech. To me it's mind-boggling how they would simplify the design, or how much more performance could be gained by using the cell-library manufacturing process. Then again, I'm a bit too doubtful sometimes, because after all ATI seem to be getting quite close-ish to a 1GHz GPU clock speed with each new card they release. I need to get myself a book about graphics card design; I'd love to learn more, interesting subject!
 
What's interesting is that Intel will be pushing for much more CPU-oriented graphics power (Larrabee will be just the beginning). It would be very weird to hear of Nvidia going out of business or being bought if that trend really takes off.

Intel + Nvidia, game over ATI / AMD

Speculation is fun :)
 
The fact is that GPUs are not being used for their computational power... When they are, you will see the CPU become little more than a nice extra. Let's face it, the 8800GTS can run over 1,000 threads at the same time...
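For anyone curious what that looks like in practice, here's a minimal CUDA sketch (my own example, nothing official) that launches 4096 threads, each scaling one element of an array; CUDA shipped with the G80-era 8800 cards, and this is roughly the kind of workload they were built for.

#include <cuda_runtime.h>
#include <stdio.h>

/* Each thread handles one element -- the launch below puts 4096 in flight. */
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 4096;
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    /* 32 blocks of 128 threads = 4096 GPU threads. */
    scale<<<32, 128>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}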
 