Here's how I see PCIe 2.0/3.0/4.0: I run GTX 760s in SLI, and I will never use all the bandwidth of any of them. If I ran quad Titans, then perhaps.
Faster card = more data going through the PCIe bus. Even so, there is nearly no difference between PCIe 2.0 and 3.0, as even 2.0 is fast enough for high-end graphics cards. On the fastest cards you may see small drops in performance, but maybe 1-5% max. Nothing you will notice.
It's also the main reason why PCIe 3.0 x8 is good enough for SLI/CF: it's about as fast as PCIe 2.0 x16.
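The math behind that claim is quick to check: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, while 3.0 runs at 8 GT/s with the much leaner 128b/130b encoding. A minimal sketch (the function name is just for illustration):

```python
def pcie_bandwidth_gbs(gen, lanes):
    """Rough per-direction PCIe bandwidth in GB/s after encoding overhead."""
    per_lane = {
        2: 5.0 * (8 / 10) / 8,     # 5 GT/s, 8b/10b  -> 0.5 GB/s per lane
        3: 8.0 * (128 / 130) / 8,  # 8 GT/s, 128b/130b -> ~0.985 GB/s per lane
    }
    return per_lane[gen] * lanes

print(pcie_bandwidth_gbs(2, 16))  # 2.0 x16: 8.0 GB/s
print(round(pcie_bandwidth_gbs(3, 8), 2))   # 3.0 x8: ~7.88 GB/s
print(round(pcie_bandwidth_gbs(3, 16), 2))  # 3.0 x16: ~15.75 GB/s
```

So a 3.0 x8 slot gives up only about 1.5% versus 2.0 x16, which is why splitting 3.0 lanes across two cards costs so little.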
x16. On a not terribly related note, a dual-GPU card has a PLX chip on it for communication between the two GPUs.
It has a chip that makes the hardware think these two GPUs are working in SLI/CF and uses the full bus bandwidth. So you see one card, but the system sees two. That chip also handles communication between the two GPUs.
That's why boards often have only two PCIe slots but are certified for quad-GPU CF/SLI.
They run slower because a dual-GPU card is generally a cut-down version of the two single cards, meaning either the clock speed or the shader/stream processor count is lower than on two single cards. Except the 295X2 (read my front-page review).
The VRMs hold up just fine, as there are two sets, one for each GPU.