
New nvidia drivers keep gimping my benchmark scores


yoadknux

(Attached: comparison.png, a benchmark score comparison across driver versions)

All runs are on the same build, within the same hour, at the same ambient temps and with similar load temps, with DDU between driver versions. There is some variance, but the results are outside the margin of error.

Yes, I know it's only a 3% difference, but considering the overclocking gain on those cards is only ~7%, this should bother hardware enthusiasts.
 
There's a 566.45 hotfix that supposedly gets the scores back up, plus fixes crashes/FPS issues in Ubisoft games, but good luck finding it 🤷🏻‍♂️. So far I've lucked out with 566.03; it seems to be the most stable of the lot.
 
Was there much "cooldown" time between runs? If earlier runs of a set tend to be better, then more time between runs could help. How many runs of each did you do, to check for variation? If you compare best vs best, does it change the outcome?

I assume Kenrou is referring to the Nvidia App reportedly reducing performance in some situations. The workaround is to disable the filter functionality (regardless of whether it is actively used) or to not install the app at all. The latest version now defaults it to off.
 
If my math is right, that's a 3% and a 2% difference (674/465 points) between the highest and lowest runs. I'd call that negligibly more than run variance... certainly not 'gimping'. I'm with Mack... it could easily be a product of heat soak and dropping a bin or two later in the runs. I'm a hardware enthusiast and it doesn't bother me. :shrug: :chair:

To put that in perspective, we'd MAYBE lose 2-3 FPS for every hundred. That isn't the difference between Low/Medium/High/Ultra settings, or something you'd even notice in game with a frame counter (because FPS varies so much anyway). Worth noting: if I were trying to prove this point, I'd run each driver three times and take an average instead of basing it on one run per driver and calling it The Gospel. :)
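For anyone who wants to eyeball that, here's a minimal sketch (Python, hypothetical numbers, assuming frame rate scales linearly with the graphics score) of what a 2-3% score delta works out to at around 100 FPS:

```python
# Hypothetical sanity check: FPS cost of a small benchmark-score delta,
# assuming frame rate scales linearly with the score.
def fps_impact(baseline_fps: float, score_delta_pct: float) -> float:
    return baseline_fps * score_delta_pct / 100.0

for delta in (2.0, 3.0):
    print(f"{delta:.0f}% of 100 FPS ≈ {fps_impact(100, delta):.1f} FPS lost")
# Roughly 2-3 FPS per 100, as described above.
```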

EDIT: Seems like I'm on older drivers (566.14) and will upgrade now since I noticed, lol.

EDIT2: Also trying the NV App... wish me luck? LOL
 
Post your thoughts on the NV App, I'm curious, and I hope it isn't like that garbage they called GeForce Experience!
 
Call me old fashioned, but an overlay in game means you're just benchmarking the game... keeping tabs on clocks and temps while you're in game just means you need to work on your cooling :)

Says the guy using the built-in FPS counter in Steam... but whatever :)
 
Well, instead of explaining to me why my methodology is wrong, just repeat the two benchmarks, one on 566.30 and one on 551.86, and prove me wrong... It will take 10 minutes :)
 
I believe your results. My point is that these negligible run differences seem pretty normal to me (others seem to agree...). Perhaps it still holds true with an average for each of them. Perhaps it goes away if you run the latest driver first and the oldest last, because of heat soak or averaging, I don't know.

Either way, I'm thankful you shared your findings, but I'm not concerned or surprised. I just don't think it's worth sounding an alarm over. 😀
 
Worth sounding an alarm over? Of course not. But this is an overclockers forum, no? We look for that 3% driver boost, 7% high voltage/power boost, 2% cooling boost... this is what we do with hardware. Of course it won't make a difference gaming-wise.
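To spell out why we chase those small percentages: they stack. A rough sketch with the example figures above (hypothetical, and assuming the gains are independent and multiplicative, which real bottlenecks rarely allow):

```python
# Hypothetical stacking of the small gains mentioned above.
gains_pct = [3, 7, 2]  # driver, voltage/power OC, cooling

total = 1.0
for g in gains_pct:
    total *= 1 + g / 100.0

print(f"Combined gain: {(total - 1) * 100:.1f}%")  # ~12.4%
```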
 
We do, indeed! But I'm just not bothered by the results (period), especially with the questions about testing that people (Mack) brought up. When you're dealing with such small differences, methodology matters even more. 😀
 
In the Superposition results you can clearly see the min and max temps. Those cards thermal throttle in roughly 10°C steps. In all benchmarks the max temp is within 1°C and the min temp is within 1°C, except 556.13, which has 30°C as the min temp, and that makes no difference. So of course it's not a thermal thing. Is there variance between measurements? Yes. Is it 3%? No, I'd estimate about 1% variance.
 
I've had the argument with EarthDog before about a small observed delta being bigger than the run-to-run variation, so I get it. Without seeing the raw data for all of the individual runs, the presented data doesn't show that, since we have no visibility of that variation. Are the examples given the best, worst, average, or a random typical result? Personally I like to take max results over averages, since there are plenty of things that can make a run slower but nothing that makes it faster. You just need enough runs to get a feel for how much variation there is, so you know when you might be close to the max.
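A minimal sketch of that "take the max, once the spread has settled" idea (Python, hypothetical run scores; not a description of anyone's actual methodology in this thread):

```python
# Hypothetical run scores for one driver/benchmark combination.
runs = [10040, 9910, 9985]

best = max(runs)                      # noise only pulls runs down, so max ~ the card's potential
mean = sum(runs) / len(runs)
spread_pct = (best - min(runs)) / best * 100

print(f"best={best}  mean={mean:.0f}  min-max spread={spread_pct:.1f}%")
# If the spread is still large, keep adding runs before trusting 'best'.
```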

We also have the known problem of the App potentially causing some performance reduction. It has not been stated whether GFE or the App was installed alongside the driver for these tests. If the App was present, was the filter setting on or off, regardless of whether it was used? That may be a variable accounting for part of this.
 
no, I'd estimate about 1% variance
Here are my results...

The first set is with 566.36 (latest from NV), the second with 551.86.

TSE - 17,825 / 17,262 / 17,179 = Avg 17,422 (median is 17,262 for this data)
US - 22,065 / 22,048 / 22,126 = Avg 22,080 (median is 22,065 for this data)

TSE - 17,495 / 17,454 / 17,494 = Avg 17,481 (median is 17,494 for this data)
US - 20,927 / 20,228 / 19,916 = Avg 20,357 (median is 20,228 for this data)

I don't see a difference at all in TSE, but I saw roughly an 8% loss going to 551.86 in US (comparing averages).
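The averages/medians above are just a straight mean/median over the three runs; a quick sketch (Python, using the run values posted above) so anyone can redo the arithmetic:

```python
from statistics import mean, median

results = {
    "566.36 TSE": [17825, 17262, 17179],
    "566.36 US":  [22065, 22048, 22126],
    "551.86 TSE": [17495, 17454, 17494],
    "551.86 US":  [20927, 20228, 19916],
}

for name, runs in results.items():
    print(f"{name}: avg={mean(runs):.0f}  median={median(runs)}")
```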


If we go by Mack's preference of 'best' run...

TSE - 17,825 (566.36) vs 17,495 (551.86) = The older driver was SLOWER
US - 22,126 (566.36) vs 20,927 (551.86) = The older driver was WAY slower (for some reason...)

EDIT: Notes

* This is with a watercooled 4090
* I pushed through all of these with 1 minute between runs, aside from the driver swap. TSE was first, followed by US. Newest driver first, the old one second/last.
* The newest driver version should have filters disabled; I didn't check and ran at defaults... EDIT - confirmed it's disabled.

EDIT2: I reran the 566.36 for a sanity check in US...

US - 21,317 / 21,149 / 20,872. Average and 'best' are still higher with the newer driver.



My point is the same as Mack's... there are several variables involved that I would want to know about before confirming the assertion. Anyway, my results don't jibe with yours, for whatever reason. ;)
 
Thanks for trying to replicate my results!

1) Regarding run-to-run variance, we see that (spread calculation sketched below):
- TS:E had a 0.3% min-max difference between runs with the old driver, and 3.2% with the new driver. I'm trying to understand whether your first TS:E result is an outlier. Was it the first run you did? If so, that COULD in fact be explained by a thermal downclock, but it's larger than expected.
- Superposition had a 5% min-max difference between runs with the old driver, and 0.4% with the new driver.
In both cases, the first measurement was the anomaly.
2) Regarding 551.86 vs 566.36, I'm really surprised you got an overall improvement with the old driver.
3) Regarding your absolute TS:E scores, something is definitely up; it doesn't make sense that you have the cooler card and get an 8% lower score. My card is stock and 19.5k-ish is typical.
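For reference, those spreads are just (max - min) across the runs, expressed as a percentage. A small sketch of the calculation (Python; the exact figure shifts slightly depending on whether you divide by the min, max, or mean of the runs, which is an assumption about how these were worked out):

```python
def min_max_spread_pct(runs, reference="min"):
    """Spread between best and worst run, as a percentage of a chosen reference."""
    lo, hi = min(runs), max(runs)
    ref = {"min": lo, "max": hi, "mean": sum(runs) / len(runs)}[reference]
    return (hi - lo) / ref * 100

# Example with the 551.86 TS:E runs posted above:
print(f"{min_max_spread_pct([17495, 17454, 17494]):.1f}%")  # ~0.2%, i.e. within run noise
```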

Did you just go with a regular install? When I switched driver versions, I used DDU followed by clean-install. I also made sure to disable g-sync for all my benchmark runs. And of course I'm not running that dreadful Geforce experience, just bare drivers.
 
Actually, I had clean forgotten about the Nvidia App (I don't have it installed, so it's a non-issue); I was talking about driver stability in general gaming. 566.14 and 566.36 have been somewhat problematic according to Reddit testing, but 566.45 fixed some of it. Keep in mind, I am also upgrading from 551.86 like ED, and I'm happy with 566.03 so far: no issues, and it feels smoother in every game I've tried, which points to better 1% and 0.1% lows.
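Since 1% / 0.1% lows came up: they're usually the average FPS over the slowest 1% (or 0.1%) of frames in a capture. A minimal sketch (Python, hypothetical frametime data; not how any particular tool implements it):

```python
def percentile_low_fps(frametimes_ms, pct=1.0):
    """Average FPS of the slowest pct% of frames (one common '1% low' definition)."""
    worst = sorted(frametimes_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * pct / 100))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

frametimes = [16.7] * 990 + [33.3] * 10           # mostly 60 FPS with a few stutters
print(f"1% low ≈ {percentile_low_fps(frametimes, 1.0):.0f} FPS")    # ~30 FPS
print(f"0.1% low ≈ {percentile_low_fps(frametimes, 0.1):.0f} FPS")  # ~30 FPS
```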

566.45 - "This hotfix addresses the following issues:
[Indiana Jones and the Great Circle] Some users may experience intermittent micro-stuttering [5015165]
Improved stability for Ubisoft games using the Snowdrop engine [4914325]"


Has anyone tried the Studio drivers? Supposedly somewhat lower FPS, but far fewer issues (for those who have them).
 
I use Studio on one system that is now my main video editing system. Can't say I noticed any difference on the rare occasion I fire up a game on there, but I never benchmarked it vs game ready driver. I'd expect the same version number to perform identically, but Studio doesn't get the interim game-only update releases.

Been on 566.36 probably since it came out. No problems for me, but I don't play Indy or any Ubisoft game.
 
3) Regarding your absolute TS:E scores, something is definitely up
That's my fault... I have a 13900K but HT is disabled. Since TSE factors the CPU score into the overall score, I'm notably lower. Forgot to mention that in my notes.

Did you just go with a regular install? When I switched driver versions, I used DDU followed by clean-install. I also made sure to disable g-sync for all my benchmark runs. And of course I'm not running that dreadful Geforce experience, just bare drivers.
I used DDU between them.

2) Regarding 551.86 vs 566.36, I'm really surprised you got an overall improvement with the old driver.
Hopefully you can now see how many variables are involved in getting down to brass tacks. I'm not saying my testing is the best or right, or that yours is wrong (you still haven't addressed Mack's concerns/questions about exactly how you ran it)... but you can see how so many things can lead to different conclusions.

Knowing this already is why I said out of the gate that the nearly negligible differences (among all the results now) just don't 'bother' me. Like the GPU connector, there are bigger fish to fry, for me. 😀
 
Oh, you posted the overall score and not the graphics score. I understand.
 
Tried 566.14 and 566.36, and both gave me sudden crashes and random IRQL_NOT_LESS_OR_EQUAL blue screens in games (WoW, Cyberpunk) and when alt-tabbing with Chrome/YouTube (yes, I used DDU). Back on 566.03 they stopped. Google says it's the driver trying to access memory it's not supposed to; maybe the new memory fallback option acting up?
 