
[Anandtech] Intel 11700K Rocket Lake Review

RE: offsets, there will be different options available in Z590 BIOSes (not sure if they will trickle down to Z490... I'd assume so).

You can do that, set a hard cap... just make PL1/PL2 156W. The CPU will power-limit throttle on most multi-core/multi-thread AVX2-and-up loads, I would imagine, and lower clocks to fit within the envelope. I'd imagine the 10900K would also follow this behavior.

I've heard of one Z590 motherboard that runs the CPU at Intel specs (turbo duration/wattage). When it runs a stress test it settles to 4 GHz and just power-limit throttles for the duration. That said, these are early BIOSes, and I'd imagine most will get updates between now and release date or shortly after.
 
Even though I have a CPU capable of AVX, I wouldn't even know what it does or how to explain it. I think for most users, it's probably irrelevant. If I understand it, maybe it helps with my network?

As ED has said, this idea of adding more and more cores, paired with the inability to actually use them, is problematic. The only things I use my CPU for which could actually utilize more than 4 cores are probably Microsoft Excel and Solidworks. Other than that, the only reason for having a multicore processor is that I can throw about 50 open programs at my CPU without it getting sluggish.

I think that optimizations and instruction sets are what will make CPUs faster.
 
I just did some testing with CB R15, CB R20, and Prime95 on my 8086k with 95W TDP.

CB R15 doesn't even reach the CPU's TDP so I'll ignore it to keep things simple.

R20 reached 100W unlimited. Compared to the result when set to a 95W power limit, it was 1.6% faster for 4.5% more power consumption. The clock when unlimited was at the all-core turbo of 4.3 GHz. With the power limit applied it mostly stayed there, but occasionally dropped below it.

For Prime95 I ran 6 tasks of 128K FFT, one per core. Running 12 tasks (one per thread) might or might not be more power hungry, but in the real world users wouldn't use HT here, as all it does is increase power consumption without a tangible throughput increase. This actually hit 120W unlimited. Compared to the 95W power limit, that was 7.7% faster for 21.2% more power consumption. Again, when unlimited it ran at 4.3 GHz, dropping to mostly 4.0 GHz at 95W and occasionally flickering to 3.9 GHz.
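To put those "faster vs. more power" numbers into perf-per-watt terms, here's a rough back-of-the-envelope sketch (a toy calculation using only the percentages above; the raw scores aren't reproduced here):

```c
#include <stdio.h>

/* Perf-per-watt of the unlimited runs relative to the 95W-limited runs,
 * using the percentages quoted above (95W run = 1.00 on both axes). */
int main(void)
{
    double r20_perf = 1.016, r20_power = 1.045;  /* Cinebench R20: +1.6% perf, +4.5% power  */
    double p95_perf = 1.077, p95_power = 1.212;  /* Prime95 128K:  +7.7% perf, +21.2% power */

    printf("R20 unlimited perf/W vs capped: %.1f%%\n", 100.0 * r20_perf / r20_power);  /* ~97.2% */
    printf("P95 unlimited perf/W vs capped: %.1f%%\n", 100.0 * p95_perf / p95_power);  /* ~88.9% */
    return 0;
}
```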

Anyway, I still think an easy exit from Intel's perceived power problem is to use an enforced power limit like AMD does. And also like AMD, that power limit need not be TDP; in AMD's case the PPT is about 35% higher than TDP. I'd suspect lower-end mobos will have to do something similar anyway. I can't see budget boards being designed to push the best part of 300W through the CPU.


AVX was introduced by Intel with Sandy Bridge, and it can be seen as an evolution of earlier SIMD instruction sets like SSE. It got a major uplift to AVX2 with Haswell, which also introduced FMA (fused multiply-add), meaning that instead of doing separate multiply and add operations you can do both in one instruction. AVX-512 takes that even further; it was introduced with Skylake-X and related server CPUs, but hadn't made it to mainstream desktop until Rocket Lake.

What it does is help you do the same work to a lot of data at once. I guess you could even call it making a CPU a bit more GPU-like. For code that scales that way, it speeds things up a lot compared to not having it. And, like with GPUs, that is not all workloads.
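If a toy example helps (my own sketch, nothing from the review): the same multiply-add loop written as plain scalar C and as AVX2/FMA intrinsics, where a single instruction works on eight floats at once. Build with something like gcc -O2 -mavx2 -mfma.

```c
#include <immintrin.h>  /* AVX/AVX2/FMA intrinsics */

/* Scalar version: one multiply and one add per element. */
void madd_scalar(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] * b[i] + c[i];
}

/* AVX2 + FMA version: eight fused multiply-adds per instruction.
 * Assumes n is a multiple of 8 and the buffers are 32-byte aligned. */
void madd_avx2(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_load_ps(a + i);
        __m256 vb = _mm256_load_ps(b + i);
        __m256 vc = _mm256_load_ps(c + i);
        _mm256_store_ps(c + i, _mm256_fmadd_ps(va, vb, vc));  /* c = a*b + c in one op */
    }
}
```

Whether real code gets anywhere near an 8x speedup is another matter, which is also part of why these instructions come with the clock offsets discussed above.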
 
I also wonder how well an Intel system would work with a power limit set. For example, set a hard cap at 156W. Would it behave more like a Ryzen system, in that a heavy load would lower clocks to stay within the same power budget? No need to deal with instruction set offsets then. In case you're wondering, that's taking the TDP (PL1) value of 125W and multiplying it by 1.25 to give the PL2 value, so it's an implied/suggested power limit from Intel absent anything better. I could try this on my main system...

Yes, it lowers the clock/voltage to fit the power limit. So when the application additionally uses AVX it will run even lower (unless you manually force a 0 offset). However, it reacts a bit differently than AMD and adjusts in more "steps".
You can check it on Comet Lake, as I guess you still have it. It's harder to see at 150W+, but when you set, let's say, a 70W power limit, the i5 10600/K will run at ~4.6GHz and maybe ~4.3GHz max with AVX. The 10900K will run a bit higher but will stop at about 4.7GHz.

I'm still waiting for the CPU as there is a shipping delay, but the Z590 mobo is already waiting.


When I tested the i5 10500 and i9 10900K with lowered voltages and power limits, both acted the same way. The only difference was that the 10900K boosted higher, and I assume that's only because it required a lower voltage for the same clock.
 

As per my previous post, I tried it on my "95W" 8086k. A lighter load like Cinebench R20 almost reached the same clock as with unlimited power, but a heavy load like Prime95 small FFT ran around base clock. To my eyes this is pretty much the same as AMD's implementation, apart from Ryzen using 25 MHz steps while Intel is still on 100 MHz steps for adjustment. It would be interesting, I suppose: if I apply a 95W limit to the 10600k, would it behave like the 8086k? Logically it should, unless there are binning differences between the two models, or they made process updates between them given the two generations in between.
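Just to illustrate the step-size point (the 4262 MHz figure is invented; only the 25 MHz vs 100 MHz granularity comes from what we're observing): whatever clock exactly fits the power budget, the coarser steps land you a bit further below it.

```c
#include <stdio.h>

/* Quantize a hypothetical "ideal" clock (the one that exactly meets the
 * power budget) down to each vendor's adjustment granularity. */
static double step_down(double ideal_mhz, double step_mhz)
{
    return step_mhz * (double)(long)(ideal_mhz / step_mhz);  /* round down to a step multiple */
}

int main(void)
{
    double ideal = 4262.0;  /* made-up clock that would exactly fit the cap */

    printf("25 MHz steps (Ryzen-style):  %.0f MHz\n", step_down(ideal, 25.0));   /* 4250 */
    printf("100 MHz steps (Intel-style): %.0f MHz\n", step_down(ideal, 100.0));  /* 4200 */
    return 0;
}
```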

The biggest problem I see with Intel following AMD in using an enforced power limit is that it'll be seen as taking away performance. I think enthusiasts have been used to running unlimited for too long, even if leaving it wholly unconstrained by default may no longer be the most sensible option. And I do think this is an enthusiast problem. The big box shifters like Dell and HP will apply a power limit to their systems.
 

If I'm right, you will see slightly different behavior on the 10600k, as it has improved power management. Not much, but still: there are slightly different voltage and temp ranges.

On AMD I see that in AVX tests the clock instantly goes down, the same as on Intel. However, on Intel the additional factors don't pull the clock down as fast as on AMD. For example, on the 10900K when I run AIDA64, at first I see a 200-300MHz lower clock because of the AVX, and then, as the CPU heats up, I see a lower clock but not a much lower voltage. The 5900X somehow instantly drops to ~4.2GHz (so much lower than its boost clock) and then, when the CPU heats up or demands more power (while still being limited fairly low), goes down to ~4.0-4.05GHz.

Here is the main difference in voltage/clock management: Intel will keep about ~1.20-1.25V and, as long as the temperature is fine, will hold ~4.6-4.7GHz. AMD will go down to ~1.1V and, regardless of the temperature, will keep 4.05-4.2GHz once it reaches the power limit and demands more. The throttling and "high temp" points are also different: Intel will go up to ~95°C and then start to lower the clock, while AMD starts to lower the clock at about ~80°C and is already balancing power and performance before that temp point.

I see that my 5900X tries to balance voltages and clocks to keep to no more than ~80°C. Some other users reported that for them it's ~90°C, but they were using a 5600X/5800X while I have the 5900X. So here is one more difference: more cores = a lower temp limit, and the CPU lowers the clock sooner.

It's hard to compare AMD to Intel. Too many variables affect the final performance. For me, Intel seems more predictable when I use manual settings.
People say Intel heats up so much, but AMD isn't much better, just more limited. AMD drops the clock more to keep within power and temp limits. When you unlock the power limits on Ryzen it will throttle on most popular coolers, so it's not much different from unlocked Intel.
There is also a motherboard factor, as not many popular brands use reference Intel settings. Those that do will cause the CPU to lower its clock some more. Most others will overvolt or run at a higher power limit, causing the CPU to heat up more.
 
I've not looked as closely, but I do get the feeling that AMD does thermally clock down much sooner than Intel. But now, thinking about it some more, I wonder if that apparently early thermal throttle is indirect, as I saw hints of something happening on Intel once I applied the power limit. My testing earlier was relatively short term, but there were hints of the Prime95 test occasionally dropping another 100 MHz as I left it running while I got distracted doing something else.

It is a known fact that as silicon gets hotter, it gets less efficient. That is, the same applied voltage/clock configuration will tend to use more power as the temperature increases, and if not contained this can result in thermal runaway. Use more power, get hotter. Get hotter, use more power. Repeat. If you're running to a power limit, that implies the only thing you can do is reduce performance as it heats up. Since most enthusiast Intel users do not run with a power limit, that effect isn't really seen. But Zen (2+) by default does run to a limit, making it visible.
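To illustrate what I mean, here's a toy model (every constant is invented purely for illustration, none of it measured from my 8086k): at a fixed voltage/clock, package power creeps up with temperature, so a power-capped governor has to keep trading clock for headroom as the part heat-soaks.

```c
#include <stdio.h>

/* Toy model of a power-capped CPU heat-soaking over a long run.
 * All constants are invented for illustration only. */
int main(void)
{
    double limit_w   = 95.0;    /* enforced package power limit */
    double clock_mhz = 4300.0;  /* starting all-core clock */
    double temp_c    = 60.0;    /* starting package temperature */

    for (int minute = 0; minute < 10; minute++) {
        /* invented scaling: power grows with clock and, via leakage, with temperature */
        double power = 90.0 * (clock_mhz / 4000.0) * (1.0 + 0.004 * (temp_c - 60.0));

        printf("t=%2d min  %4.0f MHz  %5.1f W  %3.0f C\n", minute, clock_mhz, power, temp_c);

        if (power > limit_w)
            clock_mhz -= 100.0;  /* step down 100 MHz to get back under the cap */
        temp_c += 2.0;           /* heat soak over the run */
    }
    return 0;
}
```

Running that, the clock drifts from 4300 down to 3900 MHz over the run purely because the (made-up) power figure keeps creeping back up to the cap as the temperature rises, which is roughly the behavior I think I was seeing.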

Anyway, the point remains: I think Intel needs to consider adding an enforced default power cap. Otherwise, less informed people will think the 11700K is a 290W CPU when the reality is the vast majority of users will never get close to that.
 
I was wondering what that gear stuff was when I saw the leaked slide earlier. Is there such a setting on existing Intel systems? I don't think I've heard of anything like that. If I recall correctly, on Zen 2 it was implemented to allow faster ram support, at the possible cost of latency, and the claim here is we will have something similar on Intel.

I'd raise one question on the warranty side. While often only the supported speed is mentioned, I have to wonder if that applies to other settings also. I think Anandtech's policy is to test with JEDEC timings where possible, so that's usually much slower than enthusiast (XMP) ram at the same speed. That is the official ram spec. If you go beyond the official ram spec, then it is a form of overclocking, even if in practice you're tightening timings as opposed to increasing clocks.

The previously linked writer's opinion is that Intel is positioning it as product differentiation, but is there possibly a real technical reason for doing so? I know, for the longest time on Skylake-microarchitecture CPUs we've been running XMP 3200 and beyond. But that it can usually be done doesn't mean it can always be done. As enthusiasts we accept "it appears to work in the one system I tried it on". Intel's standard may be more like "it has to always work under any specified operating conditions with any standards-compliant components".

Older people may remember the BX chipset, where it was popular overclocking just to bump the bus from 100 MHz to 133 MHz; it even got the nickname BX133. I recall enthusiasts complaining that later platforms didn't perform as well, and Intel's response being along the lines that BX133 ran timings outside specification, while newer chipsets supporting that clock officially had to comply with the standards. I wonder if we have a parallel situation here. The backport may have added complications compared to the original design, and we can't assume the memory controller is the same as the Skylake-family one. It may have to be binned to ensure it meets the requirements of the standards. Or it could just be product differentiation after all.

Anyway, that's another set of benchmarks to look forward to. I'm sure someone will do a deep dive on it. If it gets sufficient publicity, I wonder if those with retail samples on hand might look at it before the official release.
 
This is a Z590/Rocket Lake thing.

I've seen boards that will run 1:1 up to 3600... also, XMP DDR4-4000 was 1:2. I'm imagining things will differ by board, but I'm not sure at this moment.

3200 MHz is the maximum rated speed for the memory controller. Recently (the last year or so) there have been new sticks out that carry the 3200 MHz JEDEC spec, so they literally boot to 3200 MHz. But like all JEDEC specs, the timings are for compatibility, and they run at something like 3200 CL22-xx-xx-xx. I don't believe timings are part of the Intel specification... but I'm not sure (I simply haven't seen it). Is XMP 3200 @ CL14 overclocking?
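For anyone wanting the raw numbers behind the gear and timing talk, here's a quick sketch (using the DDR4-3600 gear 1 / DDR4-4000 gear 2 examples above and the JEDEC 3200 CL22 vs XMP 3200 CL14 timings; treat it as illustration only):

```c
#include <stdio.h>

int main(void)
{
    /* Gear 1 runs the memory controller at the memory clock; gear 2 at half
     * of it. For DDR, memory clock = transfer rate / 2. */
    printf("DDR4-3600 gear 1: IMC at %.0f MHz\n", 3600.0 / 2.0);        /* 1800 MHz */
    printf("DDR4-4000 gear 2: IMC at %.0f MHz\n", 4000.0 / 2.0 / 2.0);  /* 1000 MHz */

    /* First-word CAS latency in nanoseconds: CL * 2000 / (MT/s). */
    printf("3200 CL22 (JEDEC): %.2f ns\n", 22.0 * 2000.0 / 3200.0);     /* 13.75 ns */
    printf("3200 CL14 (XMP):   %.2f ns\n", 14.0 * 2000.0 / 3200.0);     /*  8.75 ns */
    return 0;
}
```

So at the same 3200 speed, the XMP kit does the first access in roughly two-thirds of the time of the JEDEC profile, which is why the "is XMP overclocking?" question isn't just academic.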
 
I don't want to give them the click, and I stopped watching them long ago, essentially for their bad attitude. Is there a TL;DW on how they came to that conclusion?

BTW in another thread I had mentioned I ordered one, well, that isn't happening. I think the store mistakenly made the sales go live early but stopped them before shipping. I still intend to pick one up at launch.
 
TL;DW:

- slower in some games than 10700K

- about on par with 5800X in games (lose/win some)

- 5800X meaningfully better in productivity benchmarks

- did not easily overclock in a meaningful manner (OC video coming later)

- at the current retail price not worth recommending

= pretty much a waste of silicon (at the current price)

Taken from a comment on reddit

 
I never liked the wider community's over-focus on price, especially as it can and does vary, making point-in-time comparisons a poor basis. Personally, I'd like to understand performance and any other platform features before even considering value, if the product is a viable purchase at all.

While I had tried to order the 11700K, the one I actually want is the 11700F. It costs 25% less, and I'd argue that for practical purposes it behaves the same. Tiny % variations don't really matter to me, and I'm looking for something that'll make a more tangible difference. AMD still doesn't have lower-end desktop Zen 3 CPUs yet.

IMO, using the phrase "waste of sand" is total nonsense. What else should Intel do with their sand? Make more Comet Lake? Or make nothing at all? Desktop 10nm is on the roadmap, just not right now, and unrealistic whinging from some is not going to change that. Fanboy pandering gets rather tiresome.

And yes, I know what forum I'm on, but I'd consider daily-driver OC to be pretty much dead. About the only reasons I can see for OCing are competitive reasons or performance testing. Both Intel and AMD CPUs are already running in a very inefficient part of their curves, and I don't feel any need to make it even worse.
 
Any review that lists a price is a point-in-time (PIT) comparison. Difficult not to do that, honestly. I'm not aware of any reviews that consider price in isolation before performance and platform features; the result is a culmination of all those things.

11700K vs. F is iGPU vs. not... otherwise that's it, AFAIK. There may be a 100 MHz turbo difference... ARK doesn't have these CPUs yet.

I kind of agree with the waste of sand, but I do understand that they need to put out 'something' before 10nm and truly 'new' parts arrive. Nobody knows the details behind the process, but when I look at Comet Lake and its 10c/20t flagship, you wonder why the next 'newest' chip tops out at 8c/16t (although, working with AVX-512, I can see why... lol). The 'whining', IMO, is spot on... and I didn't even mention price. ;)

... and this is coming from someone who is really turned off by the 'core wars' AMD started. I wouldn't mind if Intel let Comet Lake ride until its 10nm comes out. I don't think Zen 4 will beat it to the punch, so... why?
 
https://ark.intel.com/content/www/u...1700f-processor-16m-cache-up-to-4-90-ghz.html
https://ark.intel.com/content/www/u...1700k-processor-16m-cache-up-to-5-00-ghz.html

Been on ark for a while.

Note I'm comparing the F, not the KF, against the K, and assuming the iGPU is not needed. Differences in clock (assuming the mobo runs unlimited power anyway) will be low single-digit % and largely irrelevant. If I had to guess, differences going from i7 to i9 will be similarly small.

I'll grant that AMD has disrupted market perception and made some users think they need more cores than they really do. For a high-end general-purpose desktop, 8 cores is still going to be great for most things, and even 6 isn't that much different. The small number of people who do heavy tasks and depend on productivity can of course get more cores where genuinely appropriate. For example, I make videos as a hobby, not a job. Even if videos take half the time to encode, it does not significantly affect my workflow, where I am the biggest bottleneck. Now, if you're doing it every day as your main job, I can see that mattering more.
 
Huh... must have missed it. i7 to i9 won't matter much in ST work... in multi it quite obviously would, since they have more cores and threads.

Yeah, that is my concern with AMD's move to more cores/threads. Sure, they are improving IPC and single-threaded performance, but to see 16c/32t in mainstream... they can pound sand (trying to fit that in considering the sand comment above, haha). 80% of people look at the box and think 'moar cores' means it has to be faster... when the reality is that for most users it isn't.
 