For those who don't keep up on the rumours, the standard 4070 has 21 Gbps GDDR6X. Due to shortages, that is in the process of being switched over to 20 Gbps GDDR6. I'm seeing some stupid takes elsewhere. Why do I even look? Some places are losing their heads over this "massive" performance nerf. How could nvidia do this? Slash the prices to compensate!
I decided to run some numbers myself. I don't have one of the new models, nor do I have any intent of getting one, but it seems easy enough to turn down the memory speed to match. Since I did testing for KB5041587 recently, I was already halfway there before I even started! I only needed to rerun the benchmarks at the lower memory speed. The drop in memory speed is 4.8%, so if all else were equal and a workload were purely bandwidth limited, we could expect to see a 4.8% drop in performance.
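For anyone wondering where the 4.8% comes from, here's a quick back-of-the-envelope in Python. The 192-bit bus is the standard 4070 figure; the percentage only depends on the per-pin data rate anyway, the bus width just gives the absolute bandwidth numbers.

# Rough effective bandwidth for the 4070, assuming its 192-bit memory bus.
bus_width_bits = 192

def bandwidth_gbs(data_rate_gbps):
    # effective bandwidth in GB/s = per-pin data rate * bus width / 8 bits per byte
    return data_rate_gbps * bus_width_bits / 8

old = bandwidth_gbs(21)  # GDDR6X: 504 GB/s
new = bandwidth_gbs(20)  # GDDR6:  480 GB/s
print(f"{old} GB/s -> {new} GB/s, drop = {(old - new) / old:.1%}")  # 504.0 GB/s -> 480.0 GB/s, drop = 4.8%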
Steel Nomad does whatever it does. The rest I run at 4k output since that is my display.
3DMark Steel Nomad - 3819 vs 3763 (-1.5%)
Black Myth Wukong benchmark - 4k very high, motion blur off, 50% scaling, RT medium - 61 vs 60 fps (-1.6%)
SOTTR - 4k high - 122 vs 120 fps (-1.6%)
Watch Dogs Legion - 4k Ultra, RT off - 51 vs 50 fps (-2.0%)
FFXIV DT benchmark - 4k max - 70.75633 vs 70.63608 (-0.2%)
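(For anyone checking my arithmetic, the percentages are just the plain relative drop, old vs new:)

# Relative drop for each benchmark: (old - new) / old
results = {
    "Steel Nomad": (3819, 3763),
    "Wukong": (61, 60),
    "SOTTR": (122, 120),
    "WD Legion": (51, 50),
    "FFXIV DT": (70.75633, 70.63608),
}
for name, (old, new) in results.items():
    print(f"{name}: -{(old - new) / old:.1%}")
# prints -1.5%, -1.6%, -1.6%, -2.0%, -0.2% respectively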
There we are. All under a 2% drop, which no one is likely to ever notice outside of benchmarks. I should add, I'm no benchmark monkey. I ran each of these once and they looked in the ballpark of the old numbers, so no repeatability, no double checking. Also, the middle three only report to the nearest whole fps, so there's some quantisation error in there. Dropping the speed of GDDR6X might not exactly replicate GDDR6 either, since there could be differences in timings or power usage. If GDDR6 uses less power, the core could possibly boost a bit more. I'll leave that to the usual suspects after the GDDR6 cards hit the shelves.
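To put a rough number on that quantisation error, assuming the reported fps is rounded to the nearest whole frame: a reported 61 vs 60 could hide anything from no drop at all to a bit over 3%. A quick sketch of the worst-case bounds:

# Bounds on the true relative drop when both fps figures are rounded to the nearest whole number.
# A reported value of n could really be anywhere in [n - 0.5, n + 0.5).
def drop_bounds(old_reported, new_reported):
    smallest = max((old_reported - 0.5) - (new_reported + 0.5), 0) / (old_reported - 0.5)
    largest = ((old_reported + 0.5) - (new_reported - 0.5)) / (old_reported + 0.5)
    return smallest, largest

print(drop_bounds(61, 60))    # Wukong: roughly 0% to 3.3%
print(drop_bounds(122, 120))  # SOTTR: roughly 0.8% to 2.4%
print(drop_bounds(51, 50))    # Watch Dogs Legion: roughly 0% to 3.9%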