
[Anandtech] Intel 11700K Rocket Lake Review

Hmmm, interesting. Now we've got a fight between AMD's flagship Ryzen 59XX parts and this. Let the price battle commence :clap: All this money burning holes in my hands and pockets. You think AMD will hold it down?
 
Guess I won't be swapping my 9900KS any time soon. What a power hungry pig.

I wish they did some benchmarks on the new Xe integrated graphics, though. Maybe they'll come in time. I want to see it compared against the AMD APUs and low end discrete graphics.
 
I have to say, that is mighty impressive. It seems like AMD is being nice to the polar bears where Intel is just sick of their **** and turned up the heat.
 
I have to say, that is mighty impressive. It seems like AMD is being nice to the polar bears where Intel is just sick of their **** and turned up the heat.
You should try that on Ryzen... oh wait... they don't have avx-512... :p

On a more serious note, as was said above, not much uses this instruction set. But if you have a workload/app that can use it, it puts out some serious heat.
 
I didn't mean that in a fanboy type of way... I just thought it was impressive that you could be running a program with the CPU using, say, 150 W... the program finishes, you launch another, and it goes from 150 to 300 W with just an instruction set change.

I understand Ryzen can't run it.
 
Considering I picked up my 10980xe for $520 I won't feel too bad. That and finding a good price on a 5950 was impossible.

Good to see Intel is being competitive with the 5800X and 5600X. If the 11900K is the highest tier of Rocket Lake, then I guess I am safe for at least another 2 generations, or until AMD or Intel come out with something mind-blowing.
 
I didn't mean that in a fanboy type of way... I just thought it was impressive that you could be running a program with the CPU using, say, 150 W... the program finishes, you launch another, and it goes from 150 to 300 W with just an instruction set change.

I understand Ryzen can't run it.
No no.. I know.. was just having fun. :)

It's a big difference, but it is manageable with an offset. Personally, I think they should have left this on HEDT stuff...
 
Show of hands -> who's running AVX512 all day every day?

*raises hand*

Been looking forward to this entering mainstream desktops for a while, given it has been in mobile for some time. At the moment I only have one Skylake-X system with it, and it blows away every other CPU I've ever owned. I'd consider getting more if mobos were cheaper; used CPUs in that area are pretty cheap now.


It's a big difference, but it is manageable with an offset. Personally, I think they should have left this on HEDT stuff...

All Intel has to do to fix this perception problem is copy AMD by enforcing a warranty-voiding power limit. As long as Intel allows system builders to set unlimited power, they're going to do it for performance. Unless you're Dell or HP and don't bother putting in any cooling anyway; that's about the only scenario where I see a lower power limit set on an Intel desktop. On Ryzen you see notable clock drops when running heavy AVX2 code, so they're not immune to this; they just choose to cope with it differently.

Besides, it is coming to mainstream, and none too soon. Rumours on Zen 4 suggest AMD might finally be catching up in this area.
 
I won't hold my breath for 512 to take over anytime soon. If AMD adding 512 is anything like ray tracing on GPUs, nothing is moving forward fast. Another generation in HEDT+ wouldn't have hurt anyone. ;)

Few things use it. Not surprised, mack, you do! :thup:
 
I won't hold my breath for 512 to take over anytime soon. If AMD adding 512 is anything like ray tracing on GPUs, nothing is moving forward fast. Another generation in HEDT+ wouldn't have hurt anyone. ;)

Few things use it. Not surprised, mack, you do! :thup:

That it is not in widespread use is exactly why it needs to be in more CPUs, so software devs have more incentive to code for it. I'm not a programmer by any means, but it seems to be a logical expansion of AVX2, so any code already using AVX could potentially see a significant performance uplift from it.

My concern with AMD is they might do another weak implementation like AVX2 on Zen/Zen+. While those cores supported the instructions, the performance was not backed by full-width hardware execution, so it was only about half that of an Intel core, putting its actual performance around the Sandy Bridge era. They didn't address that hardware shortfall until Zen 2. I have a feeling that AMD's strategy is more focused on more cores than more features per core.
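On the dev-incentive point, runtime feature detection is usually the first step before shipping an AVX-512 code path. A minimal sketch in Python, assuming Linux's /proc/cpuinfo format (the file path and the `avx512f` flag name are standard Linux conventions; the helper function names are mine, just for illustration):

```python
# Hedged sketch: detect AVX-512 Foundation support before dispatching to
# an optimized code path. Parsing /proc/cpuinfo is Linux-specific, so the
# text-parsing helper is separated out and works on any sample text.
def has_cpu_flag(cpuinfo_text, flag):
    """Return True if `flag` appears in any 'flags' line of cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if flag in line.split(":", 1)[1].split():
                return True
    return False

def supports_avx512f():
    """Best-effort probe; returns False on non-Linux systems."""
    try:
        with open("/proc/cpuinfo") as f:
            return has_cpu_flag(f.read(), "avx512f")
    except OSError:
        return False  # would need a different probe (e.g. CPUID) elsewhere

sample = "flags\t\t: fpu sse avx avx2 avx512f"
print(has_cpu_flag(sample, "avx512f"))  # True for this sample line
```

Real SIMD dispatchers (in C/C++) do the equivalent via the CPUID instruction, but the idea is the same: check once, then pick the widest code path the CPU supports.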
 
I have a feeling that AMD's strategy is more focused on more cores than more features per core.
..something I'm not personally a fan of. The core wars for the mainstream are, to me, a pissing contest, one that doesn't have much of an impact on most users. If we consider that in late 2020 consoles came out with 8c/16t parts, I'd imagine in a few years THAT will be the baseline. But......AMD has 16c/32t chips out on the mainstream platform. I honestly feel that is taking advantage of consumers who don't know that for MOST PC uses, 8c/16t is plenty today and will be for the next few years. If software devs have trouble coding for the core counts that have been here for almost a decade, why would this move things forward (consoles don't have it, AMD doesn't have it so far)?

That said, I have no idea why Intel went backwards and now 'only' offers 8c/16t as a flagship on mainstream. I like the better separation of church and state (read: HEDT and mainstream), but that move was curious considering they already had a 10c/20t SKU out.

I mean, you can't even use AVX-512 without a massive offset on these CPUs... I can't say anything else, but from what you've seen from AT, it's not even possible to use it during a stress test.
 
That said, I have no idea why Intel went backwards and now 'only' offers 8c/16t as a flagship on mainstream. I like the better separation of church and state (read: HEDT and mainstream), but that move was curious considering they already had a 10c/20t SKU out.
Take a design made for 10nm, and build it on 14nm. It's going to be big.

I mean, you can't even use AVX-512 without a massive offset on these CPUs... I can't say anything else, but from what you've seen from AT, it's not even possible to use it during a stress test.

AVX offsets are Intel's main way of balancing power between different workloads, although I wonder why they don't do an AMD and just apply a global power limit, letting clocks adapt as necessary. I think the CPUs can do it now, but enthusiasts generally run power-unlimited because they want max performance.

I don't get the last part about it not being possible to use. Can you clarify?


Also, in the AT review there was a large improvement shown in 3DPM AVX performance, 6x relative to AVX2. My first reaction was that this is too big; based on execution potential, I'd expect up to 2x. So I learned something new: AVX-512 also includes features to convert data types, and in that scenario we might see gains in more software, though that might also be more work to implement well.
 
I can't really clarify any details at this time...sorry. Maybe in a couple of weeks. ;) :cool:

I've heard that the ability to run this instruction set varies wildly depending on the motherboard and how it is set. If the board has a form of MCE with power limits above the Intel spec, temps hit 100°C almost instantly during a stress test (even with a 3x120mm AIO). You'll need a significant offset or a board that more closely follows Intel specifications (but then the CPU runs at A LOT lower clocks, like several hundred MHz less) to keep temps reasonable. Remember, the AVX boost isn't really defined/listed on ARK; those Turbo Boost and TVB values are not with AVX loads.

From the article -
Our temperature graph looks quite drastic. Within a second of running AVX-512 code, we are in the high 90ºC, or in some cases, 100ºC. Our temperatures peak at 104ºC...
Now, that is a 10-second load then off... imagine running a stress test with AVX-512; you'll end up thermally throttling almost immediately as there isn't a significant break between loads.

EDIT: That is also with an 11700K, not 11900K. The difference isn't too much, but still worth noting.
 
Considering I picked up my 10980xe for $520 I won't feel too bad. That and finding a good price on a 5950 was impossible.

Good to see Intel is being competitive with the 5800X and 5600X. If the 11900K is the highest tier of Rocket Lake, then I guess I am safe for at least another 2 generations, or until AMD or Intel come out with something mind-blowing.

I picked up my i9-7980XE 2 months ago for $450.00 + S&H, so that is a great price :)

Isn't this also socket 1200? It should slip right into a Z490 MB. Temps at 104°C at 4.5/4.6 GHz means it needs to drop another 200+ MHz to stay cool.
IIRC the standard for Skylake & Skylake-X (also their refreshes) was an AVX offset of 3. This would imply an AVX offset of 4/5 and still not keep it from throttling :-(
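For anyone wanting to sanity-check the offset math above, here is a quick sketch. The 100 MHz BCLK is the usual desktop default; the multiplier and offset values in the example are illustrative, not Intel-documented specs:

```python
# Rough sketch of how a fixed AVX offset maps to effective clocks, per the
# discussion above: the CPU drops its ratio by the offset when AVX code runs.
BCLK_MHZ = 100  # standard desktop base clock

def effective_clock(all_core_multiplier, avx_offset):
    """All-core AVX clock in MHz = (multiplier - offset) * BCLK."""
    return (all_core_multiplier - avx_offset) * BCLK_MHZ

# e.g. a hypothetical 46x all-core ratio with an AVX-512 offset of 5
print(effective_clock(46, 5))  # 4100 (MHz)
```

So an offset of 4/5 against a ~4.6 GHz all-core clock lands in the 4.1-4.2 GHz range, which matches the "drop another 200+ MHz" ballpark above.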
 
I've heard that the ability to run this instruction set varies wildly depending on the motherboard and how it is set. If the board has a form of MCE with power limits above the Intel spec, temps hit 100°C almost instantly during a stress test (even with a 3x120mm AIO). You'll need a significant offset or a board that more closely follows Intel specifications (but then the CPU runs at A LOT lower clocks, like several hundred MHz less) to keep temps reasonable. Remember, the AVX boost isn't really defined/listed on ARK; those Turbo Boost and TVB values are not with AVX loads.

From the article - Now, that is a 10-second load then off... imagine running a stress test with AVX-512; you'll end up thermally throttling almost immediately as there isn't a significant break between loads.

Ok, I missed that detail since I was focused on the results and the temp was in another section. I'll admit, I didn't read all the way through it.

I've only got one system with AVX-512, and how that mobo/BIOS deals with it confuses me. I think it also varies with the specific CPU, as it behaved differently with my old 7800X and current 7920X. Right now, with mobo default settings other than XMP on, the 7920X runs AVX-512 at base clock. That runs really cool, and with AVX-512 there's still a lot of performance even at the lower clock. In the past it had run at higher clocks, and as many have observed, that can really suck power. When I still bothered with hwbot, getting records on y-cruncher 10b with AVX-512 was basically an exercise in balancing my cooling. The CPU might run faster with better cooling, but for a 10b run, which for me lasted over 5 minutes, it appears I used 3.7 GHz and 0.97 V. The shorter 1b run at 24 seconds allowed me to run at 4.0 GHz, and for the tiny 25m at under half a second I could get away with 4.5 GHz.

I have to wonder if there is an official recommendation for AVX offset values. I don't recall where I saw it, but they were even looking at adding additional classes of AVX offset to allow for different types of AVX workload, as not all are as power-hungry as others.

I also wonder how well an Intel system would work with a power limit set. For example, set a hard cap at 156 W. Would it behave more like a Ryzen system, in that a heavy load would lower clocks to stay within the same power budget? No need to deal with instruction-set offsets. In case you're wondering, that's taking the TDP (PL1) value of 125 W and multiplying it by 1.25 to give the PL2 value, so that's an implied/suggested power limit from Intel absent anything better. I could try this on my main system...
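The PL1/PL2 arithmetic from that post, as a tiny sketch. The 1.25 multiplier is the rule of thumb quoted above, not something verified against Intel's Rocket Lake datasheet, and the function name is mine:

```python
# Hypothetical sketch of the suggested turbo power limits discussed above:
# PL1 = TDP, PL2 = 1.25 * PL1. The 1.25 multiplier is the forum rule of
# thumb, not a verified per-SKU spec.
def suggested_power_limits(tdp_watts):
    """Return (PL1, PL2) in watts using the rule of thumb from the post."""
    pl1 = tdp_watts
    pl2 = round(tdp_watts * 1.25)
    return pl1, pl2

print(suggested_power_limits(125))  # 125 W TDP -> (125, 156)
```

That reproduces the 156 W hard cap figure: 125 × 1.25 = 156.25, rounded to 156 W.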
 