
Dual channel vs 128-bit DDR, what is the difference?


SPL Tech

Member
Joined
Nov 28, 2006
So I understand that DDR means there are two memory chips in one physical location (one on each side of the PC board), which allows two bits of information to be accessed in one clock cycle, thus doubling the effective speed of the chip. I also understand that dual channel means the CPU can access two DDR sticks at once (at least in theory), thus writing or reading information from both at the same time. Accordingly, between DDR and dual channel, four bits of information can effectively be accessed in one clock cycle (in theory anyway).

This transfers over to GDDR on GPUs as well, except they don't call it "dual channel" or "triple channel." Instead they call it 128-bit, 256-bit, 384-bit, etc.

So here is my question. My understanding is that 128-bit memory on a GPU essentially means there are two GDDR chips running in "dual channel," similar in configuration to dual channel RAM (but using GDDR, not DDR). Is that correct? In that case, 256-bit would mean the GPU can access four channels of GDDR memory, thus eight bits of information in one clock cycle. Is that also correct?
 
A wider bus means it can transfer more data in a single clock cycle, nothing else. Because of the motherboard architecture, a single channel couldn't be made wider than 64 bits, so the idea for boosting bandwidth was to add channels. With multiple channels you transfer more data in each clock cycle. 128-bit is not dual channel; it's simply one memory controller which, at a given memory specification, can have a 128-bit (256-bit or wider) bus.
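To put rough numbers on the channel idea: each desktop DDR channel has a 64-bit data bus, so peak bandwidth is just channels × 8 bytes × transfer rate. A minimal sketch (the DDR4-3200 rate is real; the function name is my own):

```python
def ddr_bandwidth_gb_s(channels, transfers_per_s, bus_bits=64):
    """Peak bandwidth: each channel moves bus_bits/8 bytes per transfer."""
    return channels * (bus_bits // 8) * transfers_per_s / 1e9

# DDR4-3200 performs 3200 million transfers per second (that's the "3200")
single = ddr_bandwidth_gb_s(1, 3200e6)  # 25.6 GB/s
dual   = ddr_bandwidth_gb_s(2, 3200e6)  # 51.2 GB/s
```

Same sticks, same clocks; the second channel simply doubles the bytes moved per transfer.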

It's not the same, but if you look at a graphics card's specification and its memory chip configuration, you can compare it to multiple channels. The same memory chips are used in 128-, 256-, 384-, and 512-bit configurations. So a lower-end card like the GTX 960 has a 128-bit bus built from the same memory chips you can find on a GTX 980 Ti (or even AMD cards). From that alone you can see that the additional "channels" boost the max bandwidth of the higher-end card, not the memory itself.
Additionally, current graphics memory is more like QDR: the effective clock is 4x the base clock, where DDR is 2x. There are some architectural differences.
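The bus-width comparison above can be sketched the same way: bandwidth is bus width in bytes × effective transfer rate, with GDDR5 transferring four times per base clock. A rough sketch using the cards mentioned (the 1753 MHz GDDR5 clock matches the reference GTX 960 / 980 Ti; the function is mine, and real cards quote slightly different rounded figures):

```python
def gpu_bandwidth_gb_s(bus_bits, base_clock_mhz, transfers_per_clock=4):
    """GDDR5 moves data four times per base clock ("QDR"-like behaviour)."""
    effective_mt_s = base_clock_mhz * transfers_per_clock   # mega-transfers/s
    return (bus_bits // 8) * effective_mt_s * 1e6 / 1e9     # bytes/s -> GB/s

# Same 1753 MHz GDDR5 chips, different bus widths:
gtx960   = gpu_bandwidth_gb_s(128, 1753)  # ~112 GB/s
gtx980ti = gpu_bandwidth_gb_s(384, 1753)  # ~337 GB/s
```

Identical chips, identical clocks; the 384-bit bus triples the bandwidth purely by moving three times as many bytes per transfer.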
 
So if the memory controller is 128-bit, does that mean it can access 128 bits of data in one clock cycle? I always thought only one operation could be performed per clock cycle and that was it.
 