
RTX 3060 Reducing voltage/frequency as power/load increase beyond approx 60%



New Member
May 16, 2024
Hello all!
Trying to get some temp/power/performance benefit out of an MSI GF65 Thin (RTX 3060, i7-10750H). Running stock settings for a baseline in the OCCT stress test (GPU Adaptive) and seeing, via HWiNFO64:

1. GPU frequency/voltage increase until maxing out at about the 50% load point: 0.900 V and 1785 MHz. Here, power is approx. 67 W.
2. As the test increases GPU load from this point, power maxes out at 73 W. For the remainder of the test, each successive load step produces a reduced GPU voltage, along with a corresponding frequency reduction.
3. Once full load is reached, the frequency is down to 1370 MHz.
4. According to HWiNFO64, the only limiting factor is power, with no other GPU performance limiters tripping. No thermal limit is reached (max of 75 °C against a reported 87 °C limit).

Anyone have an idea of why the voltage is being cut back? I can't find any current draw parameters in HWiNFO. MSI Afterburner was used to verify that the voltage vs. freq curve is being adhered to.

Thanks in advance!
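As an aside on the missing current readout: HWiNFO64 may not expose a GPU current sensor, but a rough figure can be derived from the voltage and power it does log, since P = V · I. A minimal sketch; the voltage/power pairs below are illustrative (based on the numbers above, with the reduced-voltage steps assumed), and note that reported board power covers more than just the core rail, so this is only approximate:

```python
# Estimate implied GPU current from logged core voltage and board power.
# Sample points are illustrative, loosely matching the behavior described
# above (73 W power cap, voltage stepping down as load rises).
samples = [
    (0.900, 67.0),  # ~50% load: max voltage, still below the power cap
    (0.900, 73.0),  # cap first reached at full voltage
    (0.800, 73.0),  # assumed reduced-voltage step while pinned at the cap
    (0.731, 73.0),  # assumed near-full-load point, clocks down to ~1370 MHz
]
for volts, watts in samples:
    amps = watts / volts  # implied current, since P = V * I
    print(f"{volts:.3f} V @ {watts:.1f} W -> ~{amps:.1f} A")
```

At a fixed power cap, lower voltage necessarily means higher implied current, which is consistent with the P = VI intuition discussed later in the thread.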
Try running something that ISN'T a stress test... like 3DMark tests, etc... and see how it behaves.
Thanks for the idea. I had also been running Time Spy and saw the same behavior. Curiously, the GPU test in Cinebench 2024 did NOT exhibit this, ramping up to max MHz and staying there. Could I have a power delivery problem, perhaps?
Try the 2nd tab of GPU-Z for monitoring. As it gives a visual time history it might help see what is going on too.
GPUs don't 'slow down' due to lack of power delivery. Generally, it either works or it doesn't.

Chances are, the stress test is slamming off the power limit of the card, which causes it to lower speeds. It's how modern GPUs work.

Like mack said, maybe take a look at the 2nd tab of GPU-Z so you see a history... but my money is on that stress test being 'too' stressful for what you want to see. ;)

EDIT: Please post up some screenshots so we can see as well.
I'll try to get that data ASAP. In the meantime, I also remembered that I am seeing this same behavior in the Cyberpunk 2077 benchmark. Thanks all!
Here is a run on OCCT with a zipped .csv log of the data, along with screenshots of the test setup and the V/Freq. curve as seen from MSI Afterburner.


Attachments: Afterburner Volt-Freq Curve.png · OCCT Setup.png · OCCT 3D Adaptive Test 1 5-17-2024 Stock GPU.zip
Just an image of the 2nd/Sensors tab of GPU-Z is all we need. I'm not sifting through another spreadsheet unless I have to, lol. I'd imagine if you look at Performance Limit - Power, there will be a YES there, because it's banging off the power limit.

All you need to see there is the reason it's capping the power (Perfcap reason). But like I said, OCCT is a stress test, and I'm not at all surprised to see it throttle clocks back during stress testing. That's normal because of the extreme loads it puts on the GPU.

As far as this happening in games, the max boost is dependent on temperatures and power limits (among other things). So at some point the card will settle with clocks and voltage that fit under the limit. You're in a laptop, so thermals can be more of an issue than with a full desktop card. Even without power limits, once temperatures get into the 60s (°C), it will start to drop boost bins too.

So far though, I don't see an issue with anything. :)
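To put rough numbers on why settling at a lower voltage/clock pair fits under the cap, here's a back-of-envelope comparison of the two operating points reported earlier in the thread, using the common dynamic-power approximation P ≈ V² · f. A sketch only; the ~0.730 V figure for the throttled point is an assumption, not a logged value:

```python
# Compare relative dynamic power of the two operating points from the
# thread, using P ~ V^2 * f. Units are arbitrary; only the ratio matters.
def rel_power(volts, mhz):
    return volts**2 * mhz  # simplified dynamic-power proxy

p_max = rel_power(0.900, 1785)  # max boost: 1785 MHz @ 0.900 V
p_cap = rel_power(0.730, 1370)  # power-limited: 1370 MHz @ ~0.730 V (assumed)
print(f"throttled point: ~{p_cap / p_max:.0%} of max-boost dynamic power")
```

Because power scales with voltage squared, even a modest voltage cut buys a large power reduction, which is why the card drops volts along with clocks when it hits the limit.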
Yes, the limit being smashed into is power. Forgive my novice ignorance. I'm guessing that if I were able to monitor current to the GPU, I'd see "I" changing to satisfy P = VI, and not the resistance changing. Either way, attached is a pic of the 2nd page of GPU-Z as requested.


Attachment: GPU-Z 2nd TAB image OCCT GPU Adaptive Run 2 5-17-2024.gif
Yep... you can see once it hits the 7x watts as the stress test load increases, the clocks and voltage start to drop until it doesn't 'slam' into the power limit. Games should behave similarly depending on a few factors. But assuming it's hitting the power limit trying to spit out frames (some games are lighter than others and may not reach limits), it will lower the boost clocks to compensate.

If you look at the first page of GPU-Z, it will show you what the base and boost clocks are. Will you post that picture up, please? I forgot to ask earlier. But my point is that, in games (not stress tests), you should be around or above that boost clock value in GPU-Z. :)
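The "drop clocks until it stops slamming the limit" behavior described above can be sketched as a simple feedback loop. This is a toy model only: the k constant and the per-bin voltage/frequency step sizes are made-up assumptions (a real GPU follows its factory V/f curve and boost algorithm), but it illustrates the settling mechanism:

```python
# Toy model of a power-limited GPU settling under its cap: step down
# V/f bins until modeled power (P ~ k * V^2 * f) fits under the limit.
# k and the step sizes are illustrative assumptions, not measured values.
POWER_LIMIT_W = 73.0

def dyn_power(volts, mhz, k=0.1):
    """Simplified dynamic-power model, P ~ k * V^2 * f."""
    return k * volts**2 * mhz

v, f = 0.900, 1785.0        # start at the max boost point from the thread
while dyn_power(v, f) > POWER_LIMIT_W:
    f -= 15.0               # drop one ~15 MHz boost bin
    v -= 0.006              # with a small matching voltage step

print(f"settled at ~{f:.0f} MHz / {v:.3f} V, {dyn_power(v, f):.1f} W")
# -> settles around 1350 MHz / 0.726 V, in the same ballpark as the
#    ~1370 MHz full-load clock observed in the OCCT run
```

With these made-up constants the loop happens to land near the observed full-load clock, but the point is the mechanism: the governor keeps shedding bins until modeled power fits under the cap, then holds there.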
Super-grateful for your patience and help. See attached.


Attachment: GPU-Z Graphics Card Tab.gif
So, the base clock is cut off in the image, but, taking a quick glance at your spreadsheet, the clocks appear to eventually sit right at, or just below, the boost clock that's listed there, correct? Seems like it's trying to hold a higher clock, but trickles down to ~1,402 MHz, your rated boost clock. But again, the stress test is just beating it into submission, lol.

If you capture game data, I'd guess it should behave the same or run even higher clocks as it's not a stress test beating it down. ;)
Sorry about that, trying to do about 5 things at once today. Below is a full shot showing the base clock as 1050 MHz (I've started curve editing while chatting). Cheers, and I think I'm good from here, as I feel like I understand and don't want to monopolize your time. Last question: should I shoot for no errors when stress testing in OCCT?

Thanks again.


Attachment: Screenshot 2024-05-17 113809.png
This is why we're here, you're not monopolizing anything. :)

I wouldn't (others do!) use a stress-testing application, period. They tend not to test the actual clocks you run at in games and are just there to 'max' things out.

To test overclocks/undervolts, I'd loop a 3DMark test (Time Spy, Fire Strike Extreme, etc.). If you can let that loop for a couple of hours, your gaming sessions should be fine!
Perfect. Understood and well taken. I'm using it for games almost exclusively, so it makes sense to test in the desired use environment. I work as an engineer in testing (heavy-duty diesels), so tinkering and testing are kind of my bag of geek. I have some basic knowledge and am building on that, but still have some basic system understanding to develop. Thanks for ushering that along, and I'll reach out if I get stuck again.