So I understand that DDR means there are two memory chips in one physical location (one on each side of the PCB), which allows two bits of information to be accessed in one clock cycle, thus doubling the effective speed of the module. I also understand that dual channel means the CPU can access two DDR sticks at once (at least in theory), reading or writing information from both at the same time. Accordingly, between DDR and dual channel, four bits of information can effectively be accessed in one clock cycle (in theory, anyway).
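Just to make the arithmetic behind my mental model concrete, here's a quick sketch of how I'm picturing it (per data pin, and the numbers are just my own assumptions, not anything from a spec):

```python
# A quick sketch of the arithmetic as I'm picturing it, per data pin.
transfers_per_clock = 2  # DDR: data moves on both the rising and falling clock edge
channels = 2             # dual channel: two sticks accessed simultaneously

bits_per_pin_per_clock = transfers_per_clock * channels
print(bits_per_pin_per_clock)  # 4 -- the "four bits per clock" I mean above
```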
This carries over to GDDR on GPUs as well, except they don't call it "dual channel" or "triple channel." Instead they call it 128 bit, 256 bit, 384 bit, etc.
So here is my question. My understanding is that 128-bit memory on a GPU essentially means there are two GDDR chips running in "dual channel," similar in configuration to dual-channel RAM (but using GDDR rather than DDR). Is that correct? And in that case, would 256 bit mean the GPU can access four channels of GDDR memory, thus eight bits of information in one clock cycle? Is that also correct?
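To put numbers on it, here's how I'd work out theoretical bandwidth under that assumption. The clock speed is purely an example figure I made up, and I'm treating GDDR as plain double data rate to keep the sketch simple:

```python
# Extending the same idea to a GPU, the way I'm picturing it.
bus_width_bits = 256             # advertised memory interface width
transfers_per_clock = 2          # assuming plain double data rate for simplicity
memory_clock_hz = 1_000_000_000  # 1 GHz, purely an example number

bytes_per_sec = bus_width_bits // 8 * transfers_per_clock * memory_clock_hz
print(bytes_per_sec / 1e9, "GB/s")  # 64.0 GB/s in theory, under my assumptions
```

If my "more bits = more channels accessed per clock" picture is right, doubling the bus width in that sketch should simply double the result. Is that the right way to think about it?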