
12700k quick n' dirty undervolting/downclocking

After almost 7 hours of 36x testing, combined with all sorts of other random activities on the system, I figured it was stable enough at -0.110 V.

In trying to figure out what to do next, I decided the best thing would be to gain some insight into exactly how what I now know as "Speed Shift" behaves. More specifically, for any given setting I plop in, I want to know exactly which frequency multipliers get jumped to across the whole utilization spectrum.

The reason: if I run into any instability at lower usage/idle, knowing which multipliers are used might help me suss out whether a specific frequency/voltage combo is particularly problematic, so I could, say, prime-test that frequency specifically. If this method does reliably reproduce crashes at certain frequency/voltage steps on the adaptive curve at a given voltage offset, it could be used to fine-tune the curve and give it a little bump at just those frequency steps, instead of backing off the entire thing.

It may sound like too much effort for too little reward, and it might be, which is why I don't plan on doing anything more than exploratory messing around with this unless I run into actual issues with idle/mid-range/bursty load situations. If all it takes to fix them is 1 or 2 voltage bumps, maybe it isn't worth it. However, if the low-load instabilities turn into a much bigger problem, such as needing 4+ voltage bumps, it may become profitable to dig in deeper. The worst I can do is waste my time while learning a lot more about these CPUs for later.

I was hoping to find a piece of software that would log every single frequency multiplier change in a slice of time. While I haven't been able to find that yet, the closest thing I found was MSI Afterburner.

MSI Afterburner's monitoring is far more CPU-efficient than HWiNFO and other programs. Even at a 100 ms update rate, its CPU usage remains low (if I underclock the video card too far, it actually doesn't animate the line graphs smoothly ^_^).

2022-04-15_004432.png

Now, these charts can't be conveniently paused, nor can they be zoomed or scrolled, which makes them a horrible way to dig into the data. However, they can be logged. With what's displayed here, even at the high update rate, logging did not hammer the hard drive; it only wrote about 6 KB/s.

2022-04-15_003750.png

So it appears I have something that might end up being useful for mapping out the frequency multipliers that actually get used, even if it's a terribly ghetto way to do it. I find it surprising nobody has already made software to analyze this more easily (maybe the CPU manufacturers have proprietary in-house stuff for testing/development)... or does such software exist?

Unfortunately, Afterburner doesn't track core VID or Vcore. However, I can try logging with HWiNFO or a new program I found called Quick CPU.

One potential complication:

The monitoring software itself, particularly at high polling rates, will interfere and prevent many/all cores from falling to their minimum frequency/power states, or at least make the lowest multipliers occur much less often, which makes them harder to map out. I think setting a very low poll rate and letting it log for many hours would be the easiest way to mitigate/eliminate this problem.
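For what it's worth, if nothing off the shelf does exactly this, a crude logger can be rolled by hand. Here's a minimal Python sketch, assuming the psutil package is installed; note that per-core frequency reporting through psutil is OS-dependent, and on Windows it may only expose a single package-level value:

```python
# Crude stand-in for the "log every multiplier change" tool I couldn't
# find: poll CPU frequency at a slow rate and append rows to a CSV.
# Minimal sketch, assuming psutil is installed (pip install psutil).
import csv
import time

import psutil

POLL_SECONDS = 5  # slow poll rate so the logger itself doesn't keep waking cores

with open("freq_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    while True:
        # per-core readings where the OS supports it, else one package value
        freqs = psutil.cpu_freq(percpu=True) or [psutil.cpu_freq()]
        writer.writerow([time.time()] + [round(fr.current) for fr in freqs])
        f.flush()  # keep the file usable even if the machine freezes
        time.sleep(POLL_SECONDS)
```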

So it's nice to know that these methods are at my disposal in case I need them.

Another potentially profitable piece of software is Quick CPU. It seems to have even more advanced settings than XTU or the MSI BIOS, which may be great to experiment with for further wattage savings. It's also just overall very handy, and may even rival XTU as a live-tweaking tool.

Curse the great blinding whiteness of it all!

2022-04-15_003413.png

2022-04-15_003329.png


I feel like I'm wasting a huge opportunity by not having anything specific I want to test, particularly high-power stress tests, while I sleep and a bunch of cold air gets in through the crack in the window (it's freezing outside). However, I decided to flip the script a bit and instead test the -0.110 V offset with the 47x max P-core turbo while the system just sits doing almost nothing for several hours, logging the frequencies at a 5-second interval to minimize CPU usage. I'll leave a couple of things open to maybe let it spike up from time to time; I'm sure various scheduled background BS will cause power spikes of various sizes once in a while. The main purpose of the log will be to build a reference for later.

Aside from that, I'm also considering seeing what kind of overclock could be achieved without giving up too much of the undervolt. I figure that with aggressive power-saving/auto-downclocking of idle cores, raising the maximum P-core turbo frequency, even with a modest voltage bump upward compared to the max stable undervolt I've found so far, would likely allow for greater performance (and power) when it's needed, while still using less power nearly all of the time.

I've also reconsidered the idea of maintaining an underclock. If I need to process certain things (and not just add extra FPS to games), running at 36x instead of 47x+ is foolish, because the same processing will just take longer, using about the same total energy anyway, as long as none of my applications/uses turn out to be power viruses that use everything they can get (I doubt they will). So it's better to make everything snappier and run at higher frequency, so the spikes will be shorter and the utilization % lower. Overall power usage should not increase much compared to underclocking. Underclocking only seems to be an amazing power saver in artificial load tests. Only undervolting itself is a reliable way to save power without increasing suffering.
 
I've also reconsidered the idea of maintaining an underclock. If I need to process certain things (and not just add extra FPS to games), running at 36x instead of 47x+ is foolish, because the same processing will just take longer, using about the same total energy anyway
Have you tested the power savings/performance difference for stock vs. undervolt? If I'm trying to figure out what you're chewing on above, that's the data set(s) I'd need. ;)

As far as overclocking and undervolting go, there may be some headroom there. On my 12900K, on every board, voltage goes well past the 1.25 V I need to run 5.1/4.1 on my chip, up to around 1.35 V. I've got a slightly above-average sample, but you get the idea.
 
It's obvious that, all other things being equal, an undervolt will lower wattage. But this may be more complex to measure when it comes to relatively idle states jumping around the lower multipliers, where the difference is much harder to measure and less obvious than under some sort of consistent load. It also becomes more complex to measure when all things aren't equal, such as when changing frequency multiplier settings has unintended consequences for which frequencies the processor will jump to, which I will get into in a bit.

There was no loss of stability all night as I slept with the 5-second logging in effect. It sure gets damn cold in here without any stress tests running. :eek: Then, about 7 minutes into watching a fullscreen YouTube video, it froze (of course). It was different from the load-test crashes, where there would be a BSOD on the main desktop screen while the other screens froze and Windows pretended to collect data about the crash but never finished. In this case, it was a total freeze. Everything was stuck in place, including on the main monitor. Just to check, I flipped on all the monitors. The 4 monitors connected to the Nvidia card had their image stuck in place. The 2 monitors attached to the iGPU had no image at all, and their lights indicated no signal coming in.

Going over the Afterburner log and comparing what went on all night against the readings from stock settings after rebooting, I noticed one interesting discrepancy in the frequencies. For some reason, when I set all the P-cores to max out at 47x regardless of how many were active, the lowest multiplier it ever dropped to was 8x. After the reboot to stock, I saw many examples of it hitting 7x and even 5x. It never hit 6x.

This is interesting to see and I wonder why it happens; I'm sure anyone who has properly learned the ins and outs of how this works already knows the answer. A full analysis of the differences between these two configurations from the log file will take some time. I might want to paste it into an Excel file and run formulas on the columns to spit out some rudimentary statistics, so it would be easy to see at a glance, for example, how many occurrences of each frequency happened, % occurrence, min/max, average, etc. Not doing this would make any further action in this direction pointlessly inefficient and information-poor. To be honest, even if I end up giving up on putting more effort into this campaign and settle on whatever setting, my curiosity about what those Excel results would be may get the best of me anyway. There's nothing quite like the satisfaction of pasting in huge amounts of numbers and then having your fully custom readout on top automatically spit out all the statistics you could ask for, formatted just so.
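For anyone allergic to spreadsheets, the same rudimentary statistics can be pulled with a few lines of pandas. A minimal sketch, assuming the Afterburner log has first been cleaned/exported into a plain CSV with one multiplier column per core; the file and column names here are made up for illustration:

```python
# Per-core occurrence counts and summary stats from a frequency log.
# Assumes a cleaned-up CSV where each "Core N" column holds the sampled
# multiplier for that core; names are hypothetical, not Afterburner's
# actual header text.
import pandas as pd

df = pd.read_csv("freq_log.csv")

col = "Core 0"  # pick any core column
counts = df[col].value_counts().sort_index()  # occurrences of each multiplier
percent = (counts / len(df) * 100).round(2)   # % occurrence

print(pd.DataFrame({"count": counts, "%": percent}))
print(f"min={df[col].min()} max={df[col].max()} avg={df[col].mean():.1f}")
```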

Stock, which goes down to 5x:
1650033792797.png

What I was testing had every single one of those multipliers set to 47x. Great for load testing; evidently very poor for adaptive frequency testing.

All my problems may be washed away magically by 1 or 2 voltage bumps. My experience with this type of response to freezes is that after 1 or 2 bumps you might go days, weeks, or months before another freeze. When it happens, you just bump it up once more, and maybe months/years go by until the next one if you're lucky. Then at some point you never do it again, until your PSU/mobo ages enough to start making the problem worse ^_^ If that is all it takes to wash the pain of instability away, maybe that is fine; it would still be a pretty chunky undervolt. However, it's also possible that taking this lazy approach leads to many more voltage bumps, undoing a huge amount of the power savings that just a few days of extra work/insight/learning could have locked in.

Regardless of what happens on that front, I still need to do some power usage testing for relatively low-draw scenarios to compare what happens if I leave the frequency settings as-is vs. locking the maximum at 47x. If I end up not messing with the frequencies at all, I will need to load test 48/49/50x somehow, and figure out exactly which cores those frequencies tend to go to. (Quick answer, cheating: cores 5/6, or 4/5 depending on whether you like to start counting at 0 or 1, are the only columns in the log file showing a 5 GHz frequency after reboot, and they are marked as the preferred cores.)
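Spotting the preferred cores can also be automated on top of the same hypothetical CSV from the earlier sketch: just ask which core columns ever reach the 50x bin.

```python
# Which core columns ever hit 50x? Those are presumably the preferred
# cores. Same hypothetical CSV/DataFrame as the earlier sketch, with
# multipliers stored directly (adjust the threshold if the log stores MHz).
core_cols = [c for c in df.columns if c.startswith("Core")]
preferred = [c for c in core_cols if df[c].max() >= 50]
print(preferred)  # e.g. ['Core 4', 'Core 5']
```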

1650034679853.png

OK, now I'm learning something. So the per-core tuning shows the maximum multiplier for each specific core, while the active-core tuning governs the maximum frequency allowed depending on the number of active cores. I never paid attention to the per-core tuning. To be honest, I'm not sure I want to until I can easily analyze this stuff with Excel, because that analysis would conveniently surface so much useful information that would take forever to parse out by manually scanning through a log file, and the information that could be gleaned from it is, IMO, just too good to pass up. It would make it much quicker/easier to figure out how Speed Shift behaves at different frequency configurations. It would also make it much easier to predict the relative power usage of dynamic idle/low-load states, and of course those predictions could then be measured, either with more logging or by carefully comparing Afterburner graphs for peaks, troughs, and average ranges. Hell, it would make it easier to measure/assess any usage scenario from a numbers perspective.
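If my reading of those two knobs is right, their interaction boils down to a min() of the two limits. A conceptual sketch with illustrative numbers, not vendor-verified behavior:

```python
# My reading of the two knobs (conceptual, not vendor-verified): a core's
# allowed multiplier is capped by BOTH its per-core limit and the
# active-core limit for however many cores are currently active.
# The tables below are illustrative numbers, not my chip's actual fusing.
PER_CORE_MAX = {0: 49, 1: 49, 2: 49, 3: 49, 4: 50, 5: 50, 6: 49, 7: 49}
ACTIVE_CORE_MAX = {1: 50, 2: 50, 3: 49, 4: 48, 5: 47, 6: 47, 7: 47, 8: 47}

def max_multiplier(core: int, active_cores: int) -> int:
    return min(PER_CORE_MAX[core], ACTIVE_CORE_MAX[active_cores])

# A preferred core alone can hit 50x, but with 4 cores active it is
# held to 48x by the active-core limit.
print(max_multiplier(4, 1), max_multiplier(4, 4))  # -> 50 48
```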

At the moment I have the log data in Excel, but I'm having a ***** of a time figuring out a way to list the distinct frequency values in a given column. Min/max/avg is easy, of course. Evidently Office 365 makes this super easy with its UNIQUE() function, but in older versions of Excel it's a bit trickier; as far as I can tell, the options there are basically Remove Duplicates, a pivot table, or a COUNTIF-based array formula.

Until I figure that out, I'm going to try messing around with stock frequency settings and the same undervolt, see how it behaves, and see if it freezes. I might need a bit of a break from this, as I wasn't expecting to have to bang my head against learning more Excel again.

The only thing keeping me from having an easy analysis tool is whether or not I'll be able to figure out this Excel bullshit. Traumatic flashbacks from when I was working on some other spreadsheets.
 
So it's been a few days, and ever since I switched from all cores maxing out at 47x no matter what to the default boost (50x with 1 core, 49x, etc., down to 47x with many P-cores active), the -0.110 V offset has not caused a single crash/freeze. I've mostly been using the machine naturally, and there were many stretches where things sat mostly idle for a long time. I'm now testing this at a 36x max for all P-cores regardless of how many are active.

I should issue a correction regarding my statement about not wanting to underclock. For some reason my brain skipped a beat and I forgot that going from 47x to 36x, although it won't reduce the number of cycles used outside of full load, still reduces the actual voltage. Therefore, if the same amount of work is being done in terms of cycles used, the power usage is lower due to the reduced voltage. This is not a revelation; it's more a statement about how well my brain has been working as of late.
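To put rough numbers on that: dynamic CPU power scales roughly with C·V²·f, so the energy spent per cycle scales with V². A back-of-the-envelope sketch with made-up voltages, not values measured from my chip:

```python
# Back-of-the-envelope: dynamic power ~ C * V^2 * f, so the energy to
# execute a fixed number of cycles scales with V^2. The voltages below
# are made up for illustration, not measured V/F points from my chip.
v_47x = 1.20  # hypothetical load voltage at 47x
v_36x = 1.00  # hypothetical (lower) load voltage at 36x

energy_ratio = (v_36x ** 2) / (v_47x ** 2)
print(f"Same work at 36x costs ~{energy_ratio:.0%} of the 47x energy")
# -> ~69%: the work takes longer at 36x, but every cycle is cheaper.
```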

I did a little testing with the game I'm currently playing, Hellgate: London, and found that the -0.110 V undervolt offset @ stock frequencies reduced CPU core power usage by 22%. Underclocking that same undervolt to 3.6 GHz reduced power usage by another 18.6%. These are legit numbers because, in both cases, I downclocked the GPU so much that the 1 core being used by the game never came close to saturation, which means the reduction from the underclock was not a "power virus" reduction, but a real reduction based on a limited load. The same work was being done regardless of frequency, since the core was never fully saturated. (Comparing 100% load at 4.9 GHz to 100% load at 3.6 GHz would be cheating.)
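Since the 18.6% applies to the already-undervolted figure, the two savings compound rather than add. A quick sanity check on the combined effect using the numbers above:

```python
# The two measured reductions compound: the 18.6% cut applies to the
# already-undervolted power figure, not to stock.
after_undervolt = 1 - 0.22                        # 78% of stock CPU core power
after_underclock = after_undervolt * (1 - 0.186)
print(f"~{1 - after_underclock:.1%} below stock in total")
# -> ~36.5% total CPU core power reduction vs. stock
```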

Therefore my conclusion is that for maximum wattage reduction, both the undervolt and the underclock are very useful. The only difference is that if I notice any slowdowns because of the underclock, I will gladly roll it back to default. For now, though, I'm considering just running 3.6 GHz 24/7 until I notice performance issues.

100% stock settings with GPU heavily underclocked
2022-04-18_112527_Hellgate_620MHzGPU_StockVoltage_StockClocks.png

Stock frequency settings, -0.110 V offset
2022-04-18_112123_Hellgate_620MHzGPU_-0.110V_stockclocks.png

-0.110 V offset with 3.6 GHz underclock
2022-04-18_120258_Hellgate_620MHzGPU_-0.110V_36x.png


I haven't felt compelled to dig into this stuff any further, simply because it's so stable right now (knock on wood).

I will do some benchmarks later to compare stock frequency settings to 36x, but @EarthDog, aside from Cinebench R23, what benchmarks would you like to see?
 
any that show performance differences... Games... there's a Blender benchmark that's quick to DL and run... Super Pi.

Lots of benchmarks out there to give us all an idea. :)
 