Do they use the Puget benchmark or a custom one? So far I've seen a few that use the Puget benchmark, and they all show roughly the same result. Performance repeatability seems to be an issue right now, though, so it's hard to say exactly what the true performance is.
I didn't pay attention to it, as it isn't relevant to me, and it isn't interesting enough to go back and check. I only looked at the chart on the screen because I saw your post at the same time.
Just watched the GN video for the 285. Reviews seem to be unremarkable, as stated, especially for the cost. With pricing at 590-630, it makes virtually no sense compared to current Intel products (the 285 falls in the middle of that range), let alone AMD.
I just received the G.Skill press release, and RAM performance also seems surprisingly bad. This is the 10k CUDIMM OC result from the press release; maybe you will see it around, as it's in the official marketing material. The memory bandwidth is about the same as that of a barely tweaked 8200MT/s CL38 kit, and the latency is even worse.
This is what I'm seeing elsewhere. Latency pretty much sucks no matter what and even bandwidth doesn't benefit like you'd expect. I think Intel has some major issues in their ring/tile interconnect design that is severely hampering memory performance.
Someone mentioned that the cache has some problems too; it brings back HEDT-era issues of low cache bandwidth and high latency. I don't really have the time, or even the will, to read others' reviews. I will be spending enough time on that stuff soon myself, whether I want to or not.
Considering how well the 285K does in multi-core performance, it's a shame all of that power can't be brought to bear in every game. Reverse hyperthreading when? Oh well, probably gonna have to go 9800X3D this time unless something unexpected happens. This reminds me of when I built my first ever PC, right after the AMD Athlon had come out and crushed a complacent Intel. Perhaps this will lead to Intel snapping out of it and releasing Core 2 Duo 2: Electric Boogaloo to do some stomping in return. Competition is good.
I'd love to see how this handles a real memory workload: Prime95. I find it behaves differently from the synthetic tests in AIDA64. Use the built-in benchmark function; suggested settings as follows:
The trick is to pick a big FFT size so the work can't be contained in CPU cache. The 8192K shown above means each task (worker) uses 64MB, as data size is FFT size × 8 bytes. Usually, setting workers = cores with a large FFT will guarantee the run is RAM-limited. Note that the FFT sizes used are not the same for all CPU architectures: if you run 8192K and it does nothing, that size isn't optimal for that CPU. Try setting min 8000, max 8200 and see if it finds another value in that range. Results won't be directly comparable across different FFT sizes, but they're still fine as a general indication, especially for results on the same system.
For cores to benchmark, generally leave it at the number of real cores. I have no idea how this behaves with hybrid cores; it might be interesting to play with E-cores disabled. HT can be disabled where available, since it doubles the test time and doesn't add anything useful. Not a problem for ARL!
Number of workers splits the available cores into integer divisions, each doing separate work. Since we're hammering the RAM interface, it's fine as long as the maximum (all cores) is in the list.
Time to run can be reduced to the minimum of 5 seconds to make the benchmark finish faster.
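To make the sizing concrete, here's a minimal sketch of the footprint arithmetic from the notes above (data size = FFT length × 8 bytes); the 96MB L3 is just an example value for a 7800X3D-class chip, so swap in your own cache size:

```python
# Estimate the Prime95 benchmark working set per worker and in total,
# using the "data size = FFT length * 8 bytes" rule of thumb above.

FFT_LENGTH_K = 8192   # FFT length in K points (the 8192K suggested above)
L3_MB = 96            # example L3 size (7800X3D); adjust for your CPU

per_worker_mb = FFT_LENGTH_K * 8 / 1024   # 8192K points * 8 B = 64 MB

for workers in (1, 2, 4, 8):
    total_mb = workers * per_worker_mb
    regime = "fits in L3" if total_mb <= L3_MB else "spills to RAM"
    print(f"{workers} worker(s): {total_mb:.0f} MB total -> {regime}")
```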
Example output for my 7800X3D:
Timings for 8192K FFT length (8 cores, 1 worker): 2.32 ms. Throughput: 431.61 iter/sec.
Timings for 8192K FFT length (8 cores, 2 workers): 4.88, 4.74 ms. Throughput: 415.63 iter/sec.
Timings for 8192K FFT length (8 cores, 8 workers): 33.52, 33.59, 33.02, 32.99, 33.06, 32.74, 32.67, 33.02 ms. Throughput: 241.89 iter/sec.
Higher "throughput" is better. In this example, 1 worker only needs 64MB so that easily fits inside L3 cache. Note in general, scaling efficiency isn't ideal and more cores per task does leads to a reduction in efficiency. With 2 workers the throughput drops a bit. We don't quite have sufficient CPU cache, but it seems DDR5 6000 is doing a good job keeping it fed regardless. With 8 workers, the CPU cache has no chance and we're definitely in ram limiting territory. That value is the one to observe with different ram configurations.
If I find the time, I will try. Right now, I have to finish the last X870E mobo review. I actually have the 265k right now, but it has to wait at least until tomorrow.
If I'm right, 8800+ RAM works at Gear 4, so it has an additional slowdown that no one mentions. I still have to check it.
It'll be interesting to see if the higher speeds can give an increase in throughput. Note that, to my understanding, Prime95's memory access pattern is a mix of reads and writes, which may be part of why synthetic benchmarks don't match it: they usually do one thing at a time. In my observations, all else being equal: 2R > 1R, faster is better, and primary timings have a small impact compared to the previous factors.
Thinking back to an example: remember those Kingston 4000 modules from some years back? I'm not 100% sure, but I think they might have been B-die. They had two XMP profiles, 3600 and 4000, and in every test I ran, the 3600 profile gave better results, on both Intel and AMD systems. I guess the timings had to get too loose to reach 4000. Someone on another forum kept going on about a certain RAM timing affecting Prime95 performance, but I was never able to replicate that claim. Still, it's likely that something in the secondary or tertiary timings could influence it.
If I had a couple million lying around to do that, I'd consider it! For those who don't follow it: the discoverer believes they spent somewhere shy of 2 million on cloud services to find that prime. I think that was spread over a year, but I might be misremembering that detail.
I'm looking way down in sizes. My software load behaves like Prime95, so it's a good testing parallel. There's a challenge running elsewhere right now where work units take under 2 hours each on 6+ modern cores. A find in this project would be a step toward proving a mathematical conjecture.
There is always a problem with IMC ratios. This is why both AMD and Intel perform better at a worse ratio, but only at a high enough frequency and low enough timings. With some CPUs, it's simply not possible to get better results at a 1:2 ratio than at 1:1. For example, the average Ryzen 3000/5000 couldn't run at 4800+ with low timings; the 4000/5000 APUs could, but who invested in 5066+ DDR4 for APUs?
Now I wonder how high the RAM has to be clocked at Gear 4 to match Gear 2. So far I've only had time for quick synthetic tests: the 8000 CL36 XMP profile has barely worse bandwidth than the 10000 OC from the G.Skill press release, but I get 77ns latency where they had 84ns.
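As a reference point for that Gear 4 vs Gear 2 question, a quick sketch of the gear arithmetic as I understand it; the convention here (memory clock = data rate / 2, and Gear N runs the IMC at memclk / N) is an assumption, so treat the numbers accordingly:

```python
# IMC clock for a DDR5 data rate and gear mode, assuming the usual
# convention: memory clock = data rate / 2, Gear N -> IMC = memclk / N.

def imc_clock_mhz(data_rate_mts: float, gear: int) -> float:
    return (data_rate_mts / 2) / gear

for data_rate, gear in [(8000, 2), (8800, 2), (8800, 4), (10000, 4)]:
    mhz = imc_clock_mhz(data_rate, gear)
    print(f"DDR5-{data_rate} Gear {gear}: IMC ~ {mhz:.0f} MHz")
```

If that convention holds, even DDR5-10000 at Gear 4 leaves the IMC at ~1250 MHz, well under the ~2200 MHz it runs at with 8800 Gear 2, which would fit the latency gap above.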
ASRock Z890 mobos have a CPU Indicator tab, and it says that my 265K CPU gets an 83 score when the average is 81, the top 1% is 89, and the top 10% is 86. It's nothing special, but it still seems to be above average, so it's better than all my 14th-gen CPUs, which were much below the average. I don't fully believe in these numbers, and I'm not thinking about overclocking for 24/7 use, so I don't care much.
What is interesting is that the ASRock mobo now shows the PMIC type, whether it's unlocked, and the actual type/PN. There are also more details about the RAM: its rank, stepping, and more. And there's something called Gear4 Timing Turbo Mode (enabled/disabled); I wonder what it does.
I mentioned both Intel and AMD since AMD had the ratios much earlier. I think that was back in the Coffee Lake era, before Intel added gears with Rocket Lake, or am I misremembering?
Where does it get that CPU Indicator score from? What does it even mean? I think there was something similar in the distant past, though I don't recall if it was for CPUs or GPUs, with claims that low/high scores meant better or worse OC potential.
Since 12th gen, you could see it in the BIOS of higher-end motherboards; I'm not sure about lower-end ones, as I remember some didn't have it. The scale for 13th/14th-gen CPUs went from about 90 to 130 or something close. I had CPUs with scores of 95-100; I tested 15+ CPUs, I don't remember exactly how many. There was also a score for the IMC, but as was confirmed, it said nothing about the IMC's quality.
Now I see that the scale for the new generation only goes up to 90, and I have no idea if it tells you how good the CPU is.
Trying to search for it is proving difficult, since the words used also appear in other contexts. All I've seen are:
1. some people claiming it is related to the VID, indicating overclocking potential
2. some people seeing a lower number and concluding their CPU is bad in some way
What I said is how ASRock describes it in the BIOS; I just took it from there (average, top 10%, top 1%). I have no idea how many CPUs they tested, how they designed the scale, or how they performed their internal testing.
I will check it on the GB Z890 on Monday, as I left it in the office. Regardless of mobo brand, though, the last gen had the same scale (just up to 130, not 90 like now).
"Intel 285K Delidding Fully Escalated - Arrow Lake Direct Die"
1:25 Challenges for Delidding
2:56 Little room to maneuver & First delidded CPU
4:12 Intel warning & The Heater
7:12 Delidding the first CPU (ES)
10:28 The delidded CPU & Cleaning
11:47 One month later...
12:52 Temperatures before delidding
13:43 The almost final delidder
15:43 Delidding the first retail CPU
17:57 The “test system” & Temps after delidding
20:22 Another month later...
21:02 Important note about Direct-Die
22:23 The chip height
24:51 Intel 1851 Mycro Direct-Die PRO RGB V1 assembly
26:12 Cinebench & Power consumption with DD
28:51 About Delidding the CPUs & Alternatives to the Heater
31:55 The final Heater
35:49 Summary/Conclusion
Slow steps because of a lack of time, so a quick RAM comparison in AIDA64 (read / write / copy / latency):
- 6400 CL52 Gear 2: 95 / 85 / 87 GB/s, 95 ns <- JEDEC spec for CUDIMM
- 6400 CL36 Gear 2: 100 / 87 / 90 GB/s, 88 ns
- 7600 CL38 Gear 2: 115 / 99 / 100 GB/s, 80 ns
- 8800 CL42 Gear 4: 124 / 97 / 105 GB/s, 88 ns
- 8800 CL42 Gear 2: 126 / 100 / 110 GB/s, 77 ns
- 9200 CL42 Gear 4: 126 / 100 / 115 GB/s, 88 ns
Gear 2 on my rig works up to 8800. I have to do more tests, but 8800-9000 at G2 may be the maximum worthwhile setting.
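To put those read numbers in context, here's a rough efficiency check against theoretical peak, assuming a standard dual-channel DDR5 setup (2 × 64-bit, i.e. 16 bytes per transfer); the topology is an assumption, so treat the percentages as ballpark:

```python
# Measured AIDA64 read bandwidth vs theoretical dual-channel peak
# (data rate in MT/s * 16 bytes per transfer), from the list above.

results = [   # (label, data rate MT/s, measured read GB/s)
    ("6400 CL52 G2", 6400, 95),
    ("6400 CL36 G2", 6400, 100),
    ("7600 CL38 G2", 7600, 115),
    ("8800 CL42 G4", 8800, 124),
    ("8800 CL42 G2", 8800, 126),
    ("9200 CL42 G4", 9200, 126),
]

for label, rate, read in results:
    peak = rate * 16 / 1000   # theoretical peak in GB/s
    print(f"{label}: {read} / {peak:.1f} GB/s = {100 * read / peak:.0f}% of peak")
```

Efficiency tails off as the data rate climbs, and the Gear 4 rows sit lowest, which would line up with the suspicion that Gear 4 adds its own slowdown.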
Cyberpunk 2077 at 1440p/ultra details +DLSS +RT goes from 106 to 112 FPS when moving from 6400 JEDEC to 8800.
I don't know why people are complaining. These CPUs may not be amazing, but they are much more power-efficient than the last gen and still perform well (gaming could be better). My 265K at auto settings runs at ~80C and ~180W max in Cinebench R23/2024 and other demanding tests, on a 280 AIO set to silent mode in the BIOS. The 14700K, not to mention the 14900K, ran at 250W+ and 95C, on the edge of throttling, in almost all multithreaded tests.