Overclocking 6600K

Still, I apologize for the confusion and just want you and everyone to know that I appreciate the help. :)

My motherboard is the one in my sig, the GA-Z170X Gaming 7, on the latest BIOS (2018 or so), but as you can tell, I have very little experience with OC'ing and I am very cautious about it. I've not personally burned a CPU, but I have an old friend who fried his AMD chip maybe 18-20 years ago. Since he's 1000x more of a computer geek than I am and yet still made that mistake, I try to be 1000x more cautious.

Current settings are 1.39v, LLC on High, and a 4.5GHz OC. I've just completed a 150-run IBT Standard test, which took 29 minutes with no issues. It would previously fail at lower vcores; at 1.38v, the vcore drooped to 1.368v under load and IBT would fail after 8-10 minutes. At 1.39v, HWiNFO still reported 1.368v under load, yet IBT completed all 150 runs with no issues. Any ideas why this is the case? BTW, I ran IBT twice at the lower voltage just to make sure it was not a freak fail, and it would always crash at the 8-10 minute mark, so the failures weren't a one-off. At 1.39v, it passed IBT the first time. Temps were 70-83C max, 56-64C average, so it was just bumping that 80C threshold without staying there long, which should mean nowhere near 80C in regular gaming use.
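
For reference, here's the droop math as a quick Python scratchpad (numbers from the runs above). One guess about the identical readings: if the board's voltage sensor reports in fairly coarse steps, two different set voltages could round to the same reported load reading even though the actual rail differs slightly — just speculation on my part:

```python
# Droop math with the numbers quoted above.
vcore_set = 1.390    # BIOS-set vcore (V)
vcore_load = 1.368   # HWiNFO reading under IBT load (V)

vdroop = vcore_set - vcore_load
print(f"vdroop: {vdroop * 1000:.0f} mV ({vdroop / vcore_set:.1%} of set vcore)")
# -> vdroop: 22 mV (1.6% of set vcore)
```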

I'm going to run a longer AIDA64 test now, maybe just 2 hours, but I'll run an IBT Very High test first for 30 minutes. I figure if it'll pass IBT for 30 minutes, it should have no issues with AIDA64?
 
The overclocking community is moving away from extreme heat-generating stress testers like IBT and Prime95. In fact, Asus has warned about the danger of using stress tests like Prime95 that draw so much power and produce so much heat on modern CPUs. I'm not sure what their current methodology is because they are not forthcoming with the info, but until recently Siliconlottery.com used a 1 hr. run of Realbench to certify their offerings. On the other hand, I would not feel confident that IBT at the "Standard" stress level really demonstrates the stability of an overclock.

AIDA64 and Realbench stress test in a way that is closer to everyday real-life computing, without drawing excessive power or making excessive heat. You will not damage the system by running these for several hours, and passing them will give you peace of mind that you are indeed stable.

One other thought: does your motherboard have an AVX offset control? If so, you might benefit from using it to get a higher overclock for software that does not use AVX instruction sets. An AVX offset causes the CPU to downclock by the offset amount whenever it encounters AVX instructions, which put a lot of stress on a CPU. I believe all the stress testers you are using have some AVX component; an older version of P95 (e.g., 26.6) would be pre-AVX. Some stress testers with AVX instructions lay on a heavier dose of it than others.
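
To make the offset math concrete, here's a little sketch — the multiplier and offset numbers are purely illustrative, not a recommendation for your chip:

```python
# Illustrative AVX offset math -- the multiplier/offset values are examples,
# not recommendations for this particular chip.
bclk_mhz = 100          # Skylake base clock
core_multiplier = 45    # 4.5GHz all-core OC
avx_offset = 2          # drop 2 bins whenever AVX code is running

normal_mhz = bclk_mhz * core_multiplier               # 4500 MHz for non-AVX loads
avx_mhz = bclk_mhz * (core_multiplier - avx_offset)   # 4300 MHz under AVX
print(f"non-AVX: {normal_mhz} MHz, AVX: {avx_mhz} MHz")
```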
 
Interesting. Back in the i5-750 days, it was all about IBT and Prime95, which I would still be using had I not watched the Linus video about that very issue with Prime95.

So what is the current "best" or recommended method of testing OC stability? Realbench and AIDA64 over 5+ hours? IIRC, the idea behind IBT and P95 was to heat/stress the CPU quickly so that any issues that would otherwise appear 2-3 hours down the line would instead appear in 20-30 minutes. Is there an intense, short stress test I can run to verify immediate stability, and then run the longer AIDA64/Realbench tests later?

I just tested on IBT Very High and it failed at 11 minutes, twice. Max temp was 85C on one core and high 70s on the rest, and the load voltage was still 1.368v. I wonder if increasing the voltage to 1.395v would help? Or should I simply abandon IBT VH and use AIDA64/Realbench? I do believe my BIOS has AVX control; I'll reboot to verify.
 
As Mr. Scott said in post #2 of this thread, the real proof of the pudding is no instability over time when running the compute tasks you actually do. The stress tests are only shortcuts to help get there. As I said in an earlier post, I have settled on at least 2 hr. of passing the Realbench stress test using 8GB of RAM; four hours would be better. Take a break from this and start a four-hour Realbench stress test when you go to bed tonight. If you wake up and find you passed, then either bump the multiplier up and retest, or keep the multiplier the same while lowering the vcore a bit before retesting. If you failed the test, either raise the vcore or lower the multiplier. The goal is to find the lowest vcore that gives you enough stability to pass 2-4 hours of Realbench at a given core clock multiplier.
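
Spelled out as rough pseudocode, that loop looks like this (just a sketch of the decision process — the step size and voltage ceiling are illustrative, and the actual changes are of course made by hand in the BIOS):

```python
# Sketch of the manual tuning loop described above. The real changes happen
# by hand in the BIOS; this just spells out the decision logic.
VCORE_STEP = 0.005   # illustrative adjustment granularity (V)
VCORE_MAX = 1.40     # illustrative personal ceiling for 24/7 use (V)

def next_settings(passed: bool, multiplier: int, vcore: float):
    """Given a 2-4 hr Realbench result, pick the next settings to test."""
    if passed:
        # Stable: shave voltage at this clock (or bump the multiplier instead).
        return multiplier, round(vcore - VCORE_STEP, 3)
    if vcore + VCORE_STEP <= VCORE_MAX:
        # Unstable but under the ceiling: feed it a bit more voltage.
        return multiplier, round(vcore + VCORE_STEP, 3)
    # Unstable at the voltage ceiling: back the clock off instead.
    return multiplier - 1, vcore

print(next_settings(True, 45, 1.390))    # -> (45, 1.385): try a lower vcore
print(next_settings(False, 45, 1.390))   # -> (45, 1.395): needs more voltage
```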

Don't obsess over not being able to pass a certain stress test, or all of them, especially outdated ones like IBT. Find one that simulates more realistic everyday computing tasks but adds enough extra stress to reasonably ensure stability in real-life use. I think AIDA64 and Realbench are in that category.
 
Thanks for clarifying re: stress tests. I've since run two more IBTs and both passed, with roughly the same temps and load voltage as the tests that failed.

Don't obsess over not being able to pass a certain stress test, or all of them, especially outdated ones like IBT.
Didn't realize until now that IBT was also considered outdated. I know it hasn't been updated since 2012 or so, but I just figured it worked fine and didn't need fixing. I'll be moving on to AIDA64 and Realbench now, thanks!
 
I think the power draw and heat produced by many-core/many-thread modern CPUs have made IBT obsolete, except for very limited, rough-in kind of testing. These special-purpose tools are often the pet project of some programmer who comes to realize that he's not making enough money on it to justify continued development, gets bored with it, or finds that technology changes have passed it by, such as new architectures or new instruction sets.
 
As I said, not only am I new to overclocking but I'm also out of touch, using guides that are at least 3 years old, so the up-to-date advice is greatly appreciated!

As per Mr. Scott's post #2 in this thread, 4.5GHz seems stable for the daily stress of my PC use, and at 1.39v, I'm happy with those numbers. For some reason, I just didn't like 4.4GHz and really wanted 4.5GHz instead ;) I'll be doing the longer stress tests tonight, but for now I'll be jumping between OC and non-OC and looking at my gaming numbers.

I'm quite surprised that temps have not been the limiting factor for this OC; I guess that's because I lost the silicon lottery with this chip and hit the voltage wall first? Might be time to move on and hope for a better sample.
 
Your temps are good because you have an i5, not an i7, and you have top-end air cooling. It might be a different story if you were running an i7 with 4 cores and 8 threads.

Believe me, I sympathize with your timidity in this overclocking process; I think everyone starts there. With experience, that will disappear. Modern CPUs and motherboards have multiple levels of thermal protection that were not present years ago when people were literally exploding CPUs. The components are a lot tougher these days, with solid capacitors, etc. The real danger now is not so much frying something in the moment as causing degradation over time by over-volting.
 
Hehehe.... "top-end air cooling".... you should see (hear) the discussions I have with some of my "expert" friends who say AIOs are the bomb and can only be beaten by custom loops. One of my close friends was bragging to me about his non-K CPU (8600, IIRC) and how awesome his pre-built PC was with a single-fan 120mm radiator. He was looking forward to overclocking his CPU, and he thinks he can push it quite far since he has "watercooling installed."

The guides I've read/watched said that temps were most likely going to be the limiting factor when OC'ing the 6600K, so I was kind of expecting that. I'm glad it isn't a factor and that the Noctua is showing how good it is; I tend to prefer overkill, which is why I was content to run stock clocks even with the Noctua installed.

Nice to know that safeguards are now in place to prevent burnt electronics shy of doing something really stupid, but again, I'm so new that I probably won't be able to identify what stupid is until the damage is done. I used to build PCs in a PC shop as a teenager, and the components then weren't as idiot-proof as they are now, so it's nice to know that idiot-proofing has made its way into the actual hardware as well.

Unfortunately, after running some stock vs. OC tests in my flight-sim-of-choice, it seems the overclock had absolutely zero effect on framerate :mad:
 
Your flight simulator may not benefit from an overclocked CPU if it is a GPU-intensive rather than a CPU-intensive application. A CPU-intensive game like GTA 5 probably would. If you really want to see the difference, use something like Cinebench, which uses only the CPU to render a graphically intense image.
 
It's an old flight sim (circa 1999), and it's been stated over and over that it's CPU intensive. There's a bigger gap in average framerates when using just one monitor for the game display, but since I play with three monitors, the performance gap in that setup is non-existent.
 
So it sounds like with three monitors the limiting factor may be the GPU.
 
I guess so, which is quite surprising. I was really hoping the game could use the extra MHz, but it seems to be fine at 3.9GHz. I gain an average of 5fps with a single-display setup over the 3-display setup, but the min (+0.5fps) and max (+2.6fps) numbers aren't far off, and the 1% and 0.1% lows differ by just 1fps, so I'd gladly trade those in for a wider FOV.
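
In case anyone's curious how those lows are computed, this is roughly what my capture tool does (a sketch using one common definition — the average fps over the slowest 1% of frames — with made-up frametimes):

```python
# One common definition of the "1% low": the average fps over the slowest 1%
# of frames in a capture. The frametimes below are made-up illustrative data.
def percent_low_fps(frametimes_ms, fraction=0.01):
    worst = sorted(frametimes_ms, reverse=True)     # slowest frames first
    n = max(1, int(len(frametimes_ms) * fraction))  # slowest 1% (at least 1 frame)
    return 1000.0 / (sum(worst[:n]) / n)            # avg ms/frame -> fps

frames = [16.7] * 980 + [33.3] * 20   # mostly ~60fps with some ~30fps dips
print(f"average fps: {1000 * len(frames) / sum(frames):.1f}")
print(f"1% low:      {percent_low_fps(frames, 0.01):.1f} fps")
print(f"0.1% low:    {percent_low_fps(frames, 0.001):.1f} fps")
```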

I wonder if GTX 1070s in SLI would help? I've never done SLI before, plus this is an older title.... but I'm just wondering if SLI would help alleviate the GPU bottleneck? Or is it better to just buy a next-gen GPU?

EDIT: also finished a 4-hour, 8GB RAM Realbench test with no issues and 57-67C max temps, so I guess that's all good there. Might run an 8-hour one tonight just to check.
 
My $.02 again.
The 1070 is still a formidable card for what it is. I highly doubt a circa-1999 game would bottleneck it, and I also highly doubt that SLI, or even a newer-gen video card, would be of any help. My opinion is that you are fighting old 1999 code that hasn't been, and probably never will be, updated to use current-era hardware and drivers to their full potential.
 
Agreed, updated or not. I also highly doubt you'll see scaling across two cards with such an old engine. Is the game on the SLI support list? If not, I surely wouldn't bother.

But the high res is more GPU-bound.

He's got like 2 threads (had 3, blew one out of the water :p) that have crossed over into discussion about the game. I'd like to focus this thread on overclocking his CPU, which looks like it was done and has already run its course. :)

PS - the previous 6600k overclocking thread - https://www.overclockers.com/forums/showthread.php/767292-Need-help-OCing-6600K
 
I too wondered about the old engine code of the game actually being the bottleneck. It just has an fps ceiling, I'm betting.
 
Just to be clear, the core code is from 1999, which I'm guessing covers the dynamic campaign, aircraft, and a few other things. Most of the sim has been heavily modified, though, with additions like head tracking support, better graphics, etc. Basically, if it's not hard-coded into the game, the modders have found a way to improve on it. For example, the old game was limited to 1600x1200 whereas the modded BMS can display 4K. However, the 2D UI is hard-coded to 1024x768, so even 4K users either have to deal with a very grainy UI blown up to 4K or display it in windowed mode as a tiny 1024x768 window on a 4K screen. The guts are still 1999-based, but a good portion of the game isn't, including the eye candy. Big difference between the two:

[Image: kjCfi0M.jpg — screenshot comparing the original graphics with the modded BMS]

However, I would need to look at GPU utilization during the test. If the GPU is the bottleneck, wouldn't it be maxed out, or close to 100% usage? And if the game has an fps ceiling, would it matter whether my res is 1920x1080 or 5896x1080? I mean, if the ceiling were, say, 100fps, shouldn't both the 1920 res and the 5896 res be able to max out at 100fps? But from what I'm seeing now, my 1920 res averages about 5fps better.
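
I'll probably log utilization with something like this while the sim runs (a rough sketch; assumes an NVIDIA card with nvidia-smi available, which ships with the driver). It also reminds me that the triple-screen res is pushing roughly 3x the pixels of a single 1080p display:

```python
# Rough sketch: poll GPU utilization once a second during a benchmark run.
# Assumes an NVIDIA card with nvidia-smi on the PATH (it ships with the driver).
import subprocess
import time

for _ in range(60):   # sample for one minute while the sim runs
    sample = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(sample)     # e.g. "98 %, 3512 MiB" -- pegged near 100% = GPU-bound
    time.sleep(1)

# For scale: the bezel-corrected triple-screen res vs. a single 1080p display.
print(f"pixel ratio: {5896 * 1080 / (1920 * 1080):.2f}x")   # -> 3.07x
```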

Wow, that's pretty old! Yeah, I did OC at that point, but IIRC the PC crashed soon after, although unrelated to the OC; the crash reset my BIOS settings back to default and I just used the optimized default BIOS settings after that, so I've been running stock clocks ever since. Apologies for the crossovers; if you tell me where to continue this conversation, I'd be happy to limit my replies to there. Both issues (OC'ing and buying a new CPU) are about improving my gaming, though, which primarily means Falcon BMS, so that's why this topic leeches onto both threads :)
 
GPU use should always be at 100%, really.

If your GPU can produce the frame ceiling... sure. Is there one in that game? (I don't know... look it up and see.)

If you are seeing a 5fps improvement (with the flawed testing methods), then clearly you haven't hit a ceiling. What fps are you getting?
 
LOL, we're talking about testing methods in two threads now :D I think I'll refrain from answering until you advise me which thread is best to continue this discussion in; otherwise we'll be bouncing back and forth between the two!
 
I ran an 8-hour, 8GB RAM Realbench test last night, and this morning I found my PC idle with nothing on the desktop. I assume Realbench crashed, but I was expecting to see a crash report, not a rebooted PC. Or maybe it BSOD'd and rebooted?
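
Before the next run I'll check the Windows System log for signs of a bluescreen or hard reboot — something like this quick sketch using the built-in wevtutil tool (Kernel-Power event 41 flags an unexpected shutdown; an event 1001 entry from the BugCheck source usually means a BSOD was recorded, though 1001 is also used by other sources, so check the provider in the output):

```python
# Quick look for BSOD/hard-reboot evidence in the Windows System event log.
# Uses the built-in wevtutil tool; run from an administrator prompt.
import subprocess

queries = {
    "Kernel-Power 41 (unexpected shutdown)": "*[System[(EventID=41)]]",
    "Event 1001 (check for BugCheck source)": "*[System[(EventID=1001)]]",
}
for label, xpath in queries.items():
    print(f"== {label} ==")
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{xpath}", "/f:text", "/c:3", "/rd:true"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip() or "(no matching events)")
```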
 