
Nvidia's GTX 980 color compression scheme: lossy?


magellan

Member
Joined Jul 20, 2002
Title says it all: is Nvidia's new hardware color compression scheme on the GTX 980 lossy? Considering some say the GTX 980's performance depends on this compression (because of its limited memory bandwidth), it'll be interesting to see whether it affects IQ or causes artifacting.
 
It's clear (to me) that some IQ testing needs to be done to answer your question (have you searched and checked?). If there is any loss, I highly doubt it will be noticeable to the human eye. Here is an NVIDIA whitepaper that may help. Search for COMPRESSION.
http://international.download.nvidi...nal/pdfs/GeForce_GTX_980_Whitepaper_FINAL.PDF

Here is a snip that addresses vram bandwidth:
To improve performance in high AA/high resolution gaming scenarios, we doubled the number of ROPs from 32 to 64. Again, thanks to the added benefit of higher clocks, pixel fill-rate is actually more than double that of GTX 680: 72 Gpixels/sec for GTX 980 versus 32.2 Gpixels/sec for GTX 680.
The memory subsystem has also been significantly revamped. GTX 980’s memory clock is over 15% higher than GTX 680, and GM204’s cache is larger and more efficient than Kepler’s design, reducing the number of memory requests that have to be made to DRAM. Improvements in our implementation of memory compression provide a further benefit in reducing DRAM traffic—effectively amplifying the raw DRAM bandwidth in the system.


If true, I would imagine its effects would only show up at 4K or greater resolutions. Even though it's a 256-bit bus, the memory runs at 7000 MHz effective. AMD's is 512-bit but the memory is slower, so the speed of the RAM makes up for the narrower bus.
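For what it's worth, here's the back-of-the-envelope raw bandwidth math (assuming reference effective clocks: 7.0 GT/s on the GTX 980 and 5.0 GT/s on the R9 290X; the little helper below is just for illustration):

Code:
# Raw DRAM bandwidth = bytes per transfer * transfers per second
def raw_bandwidth_gbps(bus_width_bits, transfer_rate_gtps):
    return bus_width_bits / 8 * transfer_rate_gtps

print(raw_bandwidth_gbps(256, 7.0))  # GTX 980:  224.0 GB/s
print(raw_bandwidth_gbps(512, 5.0))  # R9 290X: 320.0 GB/s

So on paper the 290X still has more raw bandwidth; the compression is what's supposed to close the gap, per the whitepaper snip above.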

I highly doubt it would cause artifacting... that makes no sense to me.


EDIT: Lossless. Amazing, t3h google (p 10 of the whitepaper I linked above):
To reduce DRAM bandwidth demands, NVIDIA GPUs make use of lossless compression techniques as data is written out to memory.
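For anyone wondering how a scheme can be lossless and still save bandwidth: the usual trick with delta color compression (speaking generally here, not describing NVIDIA's exact hardware) is to store one anchor value per tile plus small deltas, and to fall back to writing the tile uncompressed whenever the deltas don't fit. The round trip is bit-exact either way, so there's nothing to artifact. Rough sketch in Python, purely illustrative:

Code:
def compress_tile(pixels, delta_bits=4):
    # pixels: small tile of values for one color channel (plain ints here)
    # Store the first pixel as an anchor and the rest as small deltas.
    anchor = pixels[0]
    deltas = [p - anchor for p in pixels[1:]]
    limit = 2 ** (delta_bits - 1)
    if all(-limit <= d < limit for d in deltas):
        return ("compressed", anchor, deltas)  # fits in the reduced footprint
    return ("raw", pixels)                     # fall back: write uncompressed

def decompress_tile(block):
    if block[0] == "raw":
        return block[1]
    _, anchor, deltas = block
    return [anchor] + [anchor + d for d in deltas]

# Round trip is always bit-exact, whichever path was taken:
tile = [200, 201, 199, 202]
assert decompress_tile(compress_tile(tile)) == tile

The bandwidth saving only shows up when tiles actually compress; worst case you just write the raw data like before.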
 
Is the GTX 980's memory running at twice the frequency of AMD's R9 290X? I'd figure it would take at least that to make up for the R9 290X's 512-bit memory bus vs. the GTX 980's 256-bit bus.
I wonder how much frame-time latency the lossless compression/decompression adds? It can't be free.
 

Or, because of the compression, it could actually have LESS latency :eek: (fewer bytes to move across the bus per frame).

Also, it doesn't need to run at twice the speed because of the compression.

Do you know how Google works?
 
Reference is 7000 MHz effective on the 980; the reference 290X is 5000 MHz. So not exactly half.
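To put a number on the "doesn't need twice the clock" point: if the compression removes some fraction of the DRAM traffic, effective bandwidth goes up by 1/(1 - fraction). The 25% below is a made-up illustrative figure, not a measured one:

Code:
raw_980 = 256 / 8 * 7.0       # 224.0 GB/s raw on the GTX 980
traffic_saved = 0.25          # hypothetical fraction of DRAM traffic removed by compression
effective = raw_980 / (1 - traffic_saved)
print(round(effective, 1))    # ~298.7 GB/s "effective" bandwidth

Which is how a 256-bit card can end up behaving like it has a much wider bus, without the memory running anywhere near twice the speed.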

I'm not sure of the details as far as latency goes, but you can look at any review that checks it and see if there is a difference. Being a single card, and single cards not having the latency issues that SLI/CFX setups do, I highly doubt this would be a problem.

EDIT: Found one for you - http://techreport.com/review/27067/nvidia-geforce-gtx-980-and-970-graphics-cards-reviewed/7
 
The GTX 980 had the lowest frame latencies. That is impressive, especially if it's compressing/de-compressing the color data, which can't be done in zero time (and what does Google have to do with color compression again?).
 