HBM Information

I take it that is about HBM 1.0; I don't know what the next step is in 2.0. There were rumblings from NV about the new Tesla card (the P100?) using HBM 2.0. Does that just mean faster-clocked HBM, since the slide shows 1-2 Gbps, or 500 MHz-1000 MHz? (Rough bandwidth math in the first sketch below.)

What I think is going to be interesting is when they can use HBM coupled with GDDR5X at 10 Gbps; even without using many channels, that would allow more than enough bandwidth. I see a future modular design where, based on the core count, they either use or skip another channel for more bandwidth/more DRAM, with each channel configurable to take different sizes of GDDR5X (granted, I don't know what sizes they offer for the new GDDR). That would make future NV or even ATI cards more cost effective: defects in X number of cores would just mean cutting or disabling them. Granted, once manufacturing matures, die yields would improve, so I guess that would then call for a different die design, where they could choose not to add more cores/modules to the GPU during manufacturing and not worry about things not working (in a way; I don't know how they do GPU manufacturing now).

What I see, if it's not already being done, is building the GPU in modules, where each one has X number of cores and they can be coupled together in a near-infinite chain, limited mainly by the bandwidth of the interconnect between modules, to make bigger/better GPUs. Say one module has 64 CUDA cores: you just add modules to the die until you max out the interconnect bandwidth, and that's your high-end GPU (see the toy model below). This ties into the HBM channels being able to swap in different RAM modules, so more cost-effective GDDR might be used instead of GDDR5X, allowing faster and cheaper development from low-end to high-end GPUs.

If HBM itself were modular, letting a second stack connect to the first for more bandwidth, that would help in a dual-GPU/die situation, or where the die/cores are configured more for GPGPU. It would also allow much more GDDR. For instance, say a card only needs X bandwidth but your workload needs lots of RAM: you could then use cheaper high-density GDDR5 ICs and pack them onto, say, 16 channels (with both HBMs interconnected for RAM-pool sharing), and get a card with, say, 64 GB or higher. I'm guessing at the numbers since I don't know the density of GDDR5 ICs etc. (capacity sketch at the end), but you should get the idea of where this could go, if it's not already going that direction.
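To put the numbers from the slide in perspective, here's a minimal sketch of the peak-bandwidth arithmetic. It assumes the standard 1024-bit HBM stack interface and 32-bit GDDR5X chips; the rates are the usual published maximums, not measured figures.

```python
# Rough peak-bandwidth math for the memory types discussed above.
# bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s for a memory interface."""
    return bus_width_bits * gbps_per_pin / 8

# One HBM1 stack: 1024-bit interface (8 x 128-bit channels) at 1 Gbps/pin.
print(peak_bandwidth_gbs(1024, 1.0))   # 128.0 GB/s per stack

# One HBM2 stack: same 1024-bit interface, but up to 2 Gbps/pin
# (the 1-2 Gbps / 500-1000 MHz range from the slide).
print(peak_bandwidth_gbs(1024, 2.0))   # 256.0 GB/s per stack

# GDDR5X at 10 Gbps: each chip is a 32-bit channel, so a 256-bit bus
# (8 chips) would reach:
print(peak_bandwidth_gbs(256, 10.0))   # 320.0 GB/s
```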
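And here's a toy model of the "add modules until the interconnect is maxed out" idea. Every number in it is made up purely for illustration; only the 64-cores-per-module figure comes from the post, and no real GPU is built this way.

```python
# Toy model: how many hypothetical modules fit under an interconnect budget.
CORES_PER_MODULE = 64           # module size suggested in the post
BW_PER_MODULE_GBS = 50.0        # assumed bandwidth demand per module
INTERCONNECT_BUDGET_GBS = 500   # assumed total interconnect bandwidth

max_modules = int(INTERCONNECT_BUDGET_GBS // BW_PER_MODULE_GBS)
print(f"{max_modules} modules -> {max_modules * CORES_PER_MODULE} cores")
# 10 modules -> 640 cores
```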
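Since I was guessing at densities, here's a quick capacity sketch. It assumes 32-bit GDDR5 channels and 8 Gbit ICs (the highest-density GDDR5 parts shipping around then); the chips-per-channel counts are hypothetical configurations, not real boards.

```python
# Back-of-the-envelope capacity for the "lots of cheap GDDR5" idea above.
# total capacity (GB) = channels * chips per channel * density (Gbit) / 8

def capacity_gb(channels: int, chips_per_channel: int, density_gbit: int) -> float:
    """Total memory capacity in GB (density given in gigabits per IC)."""
    return channels * chips_per_channel * density_gbit / 8

# 16 channels of 8 Gbit GDDR5 ICs, one chip per channel:
print(capacity_gb(16, 1, 8))   # 16.0 GB

# Clamshell mode (two chips sharing each channel) doubles that:
print(capacity_gb(16, 2, 8))   # 32.0 GB

# Hitting the 64 GB mentioned above would take four 8 Gbit ICs per
# channel, or higher-density parts than existed at the time:
print(capacity_gb(16, 4, 8))   # 64.0 GB
```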
 