
Can a Ryzen 2600-2700(X) owner run some benches please?

Just done a run on the 8086k, and while the overall result wasn't surprising, the sub results are... more on this later. I just dusted off my 1600 system, and I'm letting Win10 catch up on updates before I can run on that.

Oh, the 8086k at stock (4.3 GHz all-core turbo) scored 1442 in CB15 with HT and 1086 without, for about a 33% increase. Best of 3 runs for each. The Ryzen review had the 8700k at 1303 at 4.0 GHz, which scales to 1401 at 4.3 GHz to match mine, so that's about the same. To show the smaller 22% HT gain from the review, I would have to score somewhere between 1148 and 1182 without HT. So, either I'm doing something wrong, or there is something else going on there. On that note, I'll re-run it some more to double check.
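To make the comparison concrete, here's a quick sketch of the arithmetic above in Python (scores and clocks are the figures from this post; the 22% is the HT gain implied by the review):

[CODE]
# Rough arithmetic behind the comparison above (figures from this post).
ht_score = 1442      # CB15, 8086k at 4.3 GHz all-core, HT on
no_ht_score = 1086   # CB15, 8086k at 4.3 GHz all-core, HT off

ht_gain = ht_score / no_ht_score - 1
print(f"Measured HT gain: {ht_gain:.1%}")                      # ~32.8%

# Scale the review's 8700K result (1303 at 4.0 GHz) to 4.3 GHz for a like-for-like check.
review_scaled = 1303 * 4.3 / 4.0
print(f"Review 8700K scaled to 4.3 GHz: {review_scaled:.0f}")  # ~1401

# If HT were only worth the 22% seen in the review, the HT-off score
# would need to land roughly in this range:
for ht_on in (review_scaled, ht_score):
    print(f"Implied HT-off score: {ht_on / 1.22:.0f}")         # ~1148 and ~1182
[/CODE]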
 
Unless anyone else beats me to it, give me about a week or two and I'll post back results. Right now I'm looking around for parts for Ryzen 2700 and Core i7 8700k setups.
 
[Attached image: 3dpm2.png]

3DPM has subscores, and I thought it interesting to show those as well as the overall score. It might not be clear, but in general the 8086k and 7800X are slightly faster per core per clock with HT off, compared to the 1600. This reverses when HT/SMT is enabled, and the 1600 takes the lead.

[Attached image: 3dpm1.png]

It is clearer to see when shown as HT/SMT improvement. The interesting thing here is, one of the subtests shows over 60% improvement on Intel CPUs from having HT on compared to off. This breaks my assumed 50% limit! It is also clear that Ryzen shows better SMT scaling than Intel HT, almost reaching 80% on one subtest.
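For anyone wanting to reproduce the chart, the improvement figures are just (HT on / HT off − 1) per subtest. A minimal sketch, with made-up placeholder subscores rather than the actual 3DPM data:

[CODE]
# Sketch of how the HT/SMT improvement per subtest is derived.
# These subscore values are placeholders, NOT the real 3DPM numbers.
ht_off = {"subtest 1": 100.0, "subtest 2": 80.0, "subtest 3": 120.0}
ht_on  = {"subtest 1": 140.0, "subtest 2": 130.0, "subtest 3": 195.0}

for name, off_score in ht_off.items():
    gain = ht_on[name] / off_score - 1
    print(f"{name}: {gain:+.1%} from enabling HT/SMT")
[/CODE]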

It was also interesting to see the two Intel CPUs, 8086k and 7800X, were almost identical. There are differences in the cache structure and ram, but presumably these tests do not rely too much on those and are primarily affected by core performance. This is to be followed up later to check for scaling.

Systems were all running Win10, with all updates applied as of 27 July 2018. Mobo BIOSes were also all up to date, with at least the original set of Spectre/Meltdown protections if not subsequent ones. CPU cores were stock, but the 7800X did have its cache OC'd to 3000. Ram on the Intel systems was 3000 on all channels, and 2666 dual channel on the Ryzen system.
 
Why not test with ram all at the same speed and timings out of the gate, so you can test like for like? Ram may or may not affect it, but testing should use the same speeds where possible.
 
I'm doing things kinda backwards here. My eventual intention is to test the cores, not the ram, in conditions where ram doesn't matter or has insignificant effect. As stated, this pre-testing has NOT taken that into consideration yet or proven it.

If ram is at all a limit, I need to negate its influence anyway, which would mean retesting with either fewer cores and/or lower clocks. This is just to get an initial feel for how things work. It is suspected 3DPM isn't ram sensitive anyway, other than potentially through an indirect effect of Ryzen's infinity fabric. CB15 we already know is only weakly influenced by ram: it matters to a competitive overclocker looking for the last few points, but otherwise it's not really significant.
 
Ok..... if that is your plan, speeds and primary timings should be the same to properly isolate the variable you are testing for anyway. A half percent here or there adds up. ;)
 
You're focusing on what will be an irrelevant detail for this testing. Equalising the ram speed will NOT help me get a better test result for what I want to achieve. If it did have an impact, which it won't, it would bias the results towards Ryzen, since it is the slowest clocked CPU of this set so far. The only potential argument would be to ensure the CPUs are working as best they can, which means getting infinity fabric speeds up on Ryzen and thus its LLC. At 2666, I'm operating the 1600 at its rated spec. Anything above that would be a bonus, maybe.

Anyway, this pre-test is about finding how stuff behaves, so I'll know how to optimise it for a later run. As part of that I will intentionally slow down the ram later to investigate its impact, then pick a situation where there is no ram impact. So fiddling around getting ram settings equal would be a total waste of time in achieving that. It might matter for other testing, but I'm not doing that other testing.
 
Check the cache speed on the 8086K; I found on my board it defaulted to 4500, where the 8700K sits at 4000.
 

Wonder if it is a mobo thing. I just had a quick look... under all core loads the cache is 4000, but hwinfo64 logs a high of 4200 at some point outside that.
 
I think it is. All mobos seem to set cache differently.
 
It is. Of the dozen or so Z/H 3-series mainstream boards I tested, cache speed was different using the same CPU (8700K).
 

So you begin by stating ram is irrelevant to your testing, then end it by stating that running the test with Ryzen with ram at the same 3000 MHz would possibly give it a bonus in performance...
 

I said I want to test where ram doesn't matter. If I were to test where ram does matter, then using the same ram on these 3 systems would relatively benefit Ryzen as it is the slowest clocked of the lot by a large margin.


On that note, I just did the first step to investigating that. I turned the 1600 down to 2 cores 4 threads, and re-ran 3DPM. The results were near enough 2.7x slower than 6c12t. This is a work in progress. I don't know if that is because the CPU is running into limitations outside of it, like ram, or maybe the code just doesn't scale linearly with cores. I will add some CB15 numbers to it shortly, then I can make a start on 2c2t testing also.

Edit: scratch that last part, I just remembered that as I'm only running 2 cores, the turbo will be different. It is running at 3.7 with 2 cores, compared to the 3.4 with 6 cores earlier. If I allow for the clock difference, I get... 2.94x scaling, or about 2% off ideal. The question for myself is now, how close is close enough?
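For reference, the clock correction above works out like this (ratio and clocks taken from these two posts):

[CODE]
# Clock-normalised core scaling for the 1600 (figures from the posts above).
raw_ratio = 2.7    # 6c12t score divided by 2c4t score, as measured
clk_2c = 3.7       # GHz, turbo with only 2 cores enabled
clk_6c = 3.4       # GHz, all-core clock with 6 cores

# Normalise out the clock difference to isolate core-count scaling.
scaling = raw_ratio * clk_2c / clk_6c
ideal = 6 / 2      # perfect scaling from 2 to 6 cores
print(f"Clock-adjusted scaling: {scaling:.2f}x, "
      f"{scaling / ideal - 1:+.1%} vs the ideal {ideal:.0f}x")   # ~2.94x, about -2%
[/CODE]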
 
The tests aren't particularly affected by the RAM, but Ryzen benefits from faster RAM overall. At least I think that's the gist of it.

Edit: And mackerel beat me to it.
 
Yup, since Ryzen first came out, it has been widely discussed that infinity fabric and things connected to it run on the same clock as ram, so faster ram = faster other things too. So rather than increase that (I'd have to fiddle with swapping modules around from other systems), I'm lowering the compute performance instead, to take the load off whatever resources are needed. You might question that, which is fair, but I want to look at it at a generic level: how low a CPU resource do I have to go down to? I could then scale that across other CPUs too, rather than try to max out everything else. As discussed in the other thread, IMO more CPU resource needs to be balanced out by more ram bandwidth. For this testing, it seems easier to reduce the CPU load side than to increase the ram bandwidth side when working across many different systems.

Oh, I have a bad habit of knowing what I want to say, then not saying it but assuming everyone else knows what I'm thinking... I had wanted to keep this thread to the original request and leave it at that, but we seem to have gone around a little more... just to reiterate, the previous results posted in the charts weren't the final testing, but an illustration of the sort of data I'm looking for by doing this testing. By removing ram limitations as far as practical, the per core per clock results for any of those CPUs may go up, although I wouldn't want to say how much. It might be 0%, it might be more... the 6 vs 2 core testing is showing 2% less than the ideal 3x scaling, and I'm still undecided if that should be counted as significant or not. Do I need it to be within 0.1%? No... 1% is "good enough" I'd say, but 2% is just on the limit...
 
Just did CB15 runs... guess what, there's the same 2% difference off perfect scaling. BIOS doesn't allow reducing the core count any further (2, 4, 6 only) so next step to reduce CPU load is to lower clocks...
 
So you begin by stating ram is irrelevant to your testing, then end it by stating that running the test with Ryzen with ram at the same 3000 MHz would possibly give it a bonus in performance...
...which is why I'd prefer to test it all at once at the same time and same speed. Once a baseline is established, then make changes to isolate the other things being tested.

...but what do I know. :)
 
Don't you hate it when the dumb guy starts interjecting? Me, too, but I'm going to do it anyway.

I'm not sure that running the Ryzen at lower RAM speeds is "objective" at all. RAM speed is inherent to the chip's performance. It's a feature of the chip, so why is that an invalid component if you're measuring chip performance?
 
Depends on what is being tested, honestly. This is why I'd start with the same on everything. Once a baseline is set, then variables can be changed to find whatever other datasets mack is after. But if we are just testing SMT/HT efficiency, then like systems is the way to do it.
 
I can see where, for testing percentage of change (SMT vs. no SMT), it may be irrelevant. The numbers may change but not the percentages. Or will they? I have no idea, I'm just being a PITA.
 