
It looks like AMD found the missing gaming performance.

Has anyone done much non-gaming testing with this update?

Looking at the Hardwareluxx writeup now (link from EarthDog's earlier post). They say (translated):
our workstation or synthetic benchmarks such as Cinebench, Blender, V-Ray, Corona, Handbrake or even 7-Zip and Geekbench do not benefit from the optimizations.

I also looked at the changes reported for the 7800X3D since I'm debating whether I want to install it yet. The first number I assume is average FPS; the second is probably "lows," but I don't see exactly what they measure. I expressed both as the percentage change from applying the patch.

Starfield: +3.0 +13.2
CP2077: +10.7 +61.5
F1 2024: +6.2 +9.9
Spider-Man: Miles Morales: +4.9 +3.9
Ratchet & Clank: +3.1 -3.7
BG3: +3.1 -2.5
Control: +0.8 -9.1
Anno 1800: +8.2 +6.2
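For reference, the numbers above are plain percent change from the pre-patch result to the post-patch result. A quick sketch of the arithmetic (the FPS figures here are made-up placeholders, not Hardwareluxx's data):

```python
# Percent change from a pre-patch to a post-patch benchmark result.
def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100.0

# Hypothetical example: 100 fps average before the patch, 103 fps after.
print(f"{pct_change(100.0, 103.0):+.1f}")  # +3.0
```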

Starfield was at 1080p; the rest were at 720p and still using DLSS, so the render resolution would be minuscule. Gains of up to 10%, while nice if free, aren't going to radically change anything. And this is at an absurdly low test resolution. I play at 4K, which will squash the differences back toward zero.

Some titles did see bigger gains in the lows which could be more interesting than the average fps uplift. There were some regressions here too, but again I don't know their test methodology.
 
the rest were at 720p
Testing like this bothers me. 720p testing is only relevant at 720p (which 0.23% use). Its results are supposed to isolate the CPU in this case, and they do, but they don't scale to higher resolutions. So... what good is that value for everyone else if that's all they report? For example, a 10% increase is reported at 720p, but at 1080p it's 3% (and potentially even less at higher resolutions). I just don't see the value of that isolated number without sharing the rest of the resolutions.
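A toy bottleneck model makes the scaling argument concrete: delivered FPS is roughly capped by whichever of the CPU limit and GPU limit is lower, so a CPU-side uplift measured at 720p shrinks or vanishes as resolution rises. Every number below is an illustrative assumption, not measured data:

```python
# Toy bottleneck model: delivered FPS is roughly capped by the slower of
# the CPU and GPU limits. All figures are illustrative assumptions.
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

cpu_before, cpu_after = 200.0, 220.0  # hypothetical +10% CPU-side uplift
for res, gpu_limit in [("720p", 500.0), ("1080p", 210.0), ("4K", 90.0)]:
    before = delivered_fps(cpu_before, gpu_limit)
    after = delivered_fps(cpu_after, gpu_limit)
    gain = (after - before) / before * 100.0
    print(f"{res}: {gain:+.1f}%")  # 720p: +10.0%, 1080p: +5.0%, 4K: +0.0%
```

The full +10% shows up only while the GPU limit sits far above the CPU limit; at 4K the GPU cap dominates and the uplift disappears entirely.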

Isn't 10-15% an expected margin for a new CPU release?
It varies wildly.
 
Spoke to several of the senior chip designers at Hot Chips. They're quite pissed at Microsoft for screwing this up.

They also told me more about some of the various regressions and other peculiarities that I wrote about in my blog. Some of their more junior chip architects also showed up which I managed to catch.

I can't go into detail since some of these discussions were understood to be under my NDA. But in short:
  1. They know what's going on with games, but wouldn't comment further or tell me what the issue was.
  2. The core-to-core latency test is hitting a "weird situation". Based on what I learned, I told the Chips and Cheese guy who wrote the benchmark to rewrite it a different way to see if it changes anything without actually saying what the issue is. I can't comment further though.
  3. The 2-cycle SIMD latency regression was a calculated decision. My AMD contact implied months ago that I caught the chip architects by surprise by "finding" the 2-cycle latency. I now have the full context on this story after hearing it from their side. It will not be fixed in future Zen5 chips since it's a design issue. (We all knew that already.) I can't comment on whether it will be fixed/changed in Zen6, though I have a feeling the discussion may influence their decision on what the future (Zen6) will be.
  4. My "guesses" at the structure of the 512-bit store behavior were only half correct. I spoke to the guy who did the RTL for Zen5's store unit and he basically explained to me exactly how it all works. (It was funny because he wouldn't say anything unless one of the lead Zen5 architects was there to authorize it.) Of course I can't say anything publicly. Though my confidence in Zen6 having 2 x 512-bit stores has decreased from when I wrote the blog.
  5. They told me why only Granite Ridge has the 2c FADD while Strix Point does not. Can't comment further though.
  6. They said their FPU guys loved my section on the VP2INTERSECT and got a good laugh out of it. But no comment on whether they will keep that instruction going forward.
  7. One of the lead architects "thinks" he knows why it's so hard to get more than 4-wide decode on a single thread. But made no further comment.
  8. The load store guy said "he'll look into" why some of my load/store tests didn't reach full throughput.
From https://www.mersenneforum.org/node/22559#post1052498

Above post from writer of Y-cruncher, who has done the best technical look at Zen 5 that I'm aware of so far.

I do wonder what behind the scenes stuff is going on between AMD and MS. Was MS expected to have put the update into general release before Zen 5's original launch date?
 
What I'm hoping is that they go back and retest the workstation workloads as well, because that's what I'll be using my 9900X system for the most once I can build it. I hope they don't take too long to release the ASUS X870E ProArt mobo.
 

Some really in-depth technical write-ups on Zen 5 were done by ChipsandCheese and David Huang. I'm linking to C&C's desktop article but they have additional ones covering other Zen 5 variants as well.


Testing like this bothers me. 720p testing is only relevant at 720p (which 0.23% use).


It varies wildly.

So there's the practical viewpoint (nobody games at that low a resolution) and the more academic viewpoint (wanting to be entirely sure the GPU is out of the equation when comparing CPUs, even if it's unrealistic). I see value in both approaches, except when reviewers use low res + low settings. In almost all games at least a few of the settings depend on the CPU, and turning those off/down makes the comparison nearly pointless. It basically turns it into a draw-call test, at which point you might as well just run a synthetic draw-call test.
 
Most models are expected no earlier than mid-September. I assume distributors already have many of them but, because of internal agreements, can't sell them yet. Some stores may sell them earlier.
I don't expect anything exceptional from these motherboards. The main difference is USB4, which higher 600-series motherboards already have.
I guess we will see more new things from the new Intel series. I was counting on various changes in the new AMD lineup, but we have to wait some more for something really new.

@wade7575
Why do you want the ProArt series? It looks pretty good, but I assume it's more like TUF with different covers, so it's a nice-looking mid-shelf board that won't have as good support (BIOS etc.) as the Strix or higher series. I could be wrong, but this is how ASUS has worked for years. On the other hand, there are so few changes that I doubt these motherboards will need many BIOS updates, and I don't think we will see higher RAM support either, as all mobos are specified about the same as the X/B 600 series. If I'm right, AMD also won't support CUDIMM (at least not at the start).
 
You'll see most X870 boards on 9/30... no earlier... at least reviews that show performance of these boards aren't allowed until 9/30. I'd expect most to release that day.

So there's the practical viewpoint (nobody games at that low a resolution) and the more academic viewpoint (wanting to be entirely sure the GPU is out of the equation when comparing CPUs, even if it's unrealistic).
I understand the purpose. :)

In this case, however, it takes an unrealistic situation to show/magnify a difference. So while it's neat to see academically, it doesn't do much in the end, considering it doesn't scale and 'nobody' plays there.

Again, cool to see that data point, but that's as far as I would take it. Offering only that value is downright misleading.
 
Yep, agreed on the 720p piece. It's like measuring how quickly a Ferrari, McLaren, Bugatti, and Lamborghini can get to 100 MPH on a neighborhood street. It tells you how fast they can get there in a never-used use case.

At least HWU uses 1080p, which is still a dominant resolution. But as I game at 4K, I imagine the difference is minimal at best; it would still be nice to see those numbers in case it does make a difference. Although I think we generally agree that having a faster CPU just feels like opening more lanes on the racetrack for your graphics card to be able to stretch its legs.
 
The free performance gain is always good. Free in the sense that no one expected it, even though if it had been done right, we could have seen it long ago. It feels like everyone was happy with the performance before this update, so it's just a cherry on top for all users.
The numbers seem to be mainly for marketing. 10 or 20% counts almost only when we're missing FPS. I doubt that most people at 100+ FPS care whether they have ~15% more or less. Most don't even know how many FPS they have, and as long as it's smooth, it's fine.
It feels weird to see the market pushing 4K and at the same time trying to prove that something is so much better showing 720p/1080p results.
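Putting the "15% at 100+ FPS" point in frame-time terms shows why few people notice it: the absolute per-frame saving is tiny (the numbers here are illustrative, not measured):

```python
# Convert FPS to per-frame time to see the absolute difference a 15%
# uplift makes at already-high frame rates.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

# Hypothetical: 100 FPS vs 115 FPS (~15% faster).
saved = frame_time_ms(100.0) - frame_time_ms(115.0)
print(round(saved, 2))  # 1.3  -> only ~1.3 ms saved per frame
```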
 
It feels weird to see the market pushing 4K and at the same time trying to prove that something is so much better showing 720p/1080p results.
It's marketing/for clicks... flavor of the week. The week before....psssssssssh! :p
 
It feels weird to see the market pushing 4K and at the same time trying to prove that something is so much better showing 720p/1080p results.
I cannot overstate how much work went into tweaking apps and performance to get native 4K/24Hz video playback on a Windows OS.

Hindsight makes it seem easy, though a decade ago...

720p is not a valuable statistic from this perspective.

Retro Gaming?
 
I yanked my 5900X, installed my 5800X3D, ditched 10, installed 11 24H2, and saw no real improvement for the things I do lol :D

Ugh.
 
@wade7575
Why do you want the ProArt series?
The Strix has fewer PCIe slots than the ProArt, and I need at least 2 for sure to run the stuff I want to run.
 
If you guys look at this video at the 2:22 point, you can see where he mentions VBS being turned off when it should be on for best performance.

He mentions VBS being on by default in previous builds of Windows 11, and I'm not sure if it is turned off by default now when just the patch is used, or if it is also turned off when you do a fresh install with an updated ISO.

He says you need to check and make sure VBS is turned on to get the best performance.



In this video he talks about another update for AMD regarding the latency problems that affect the CCX and CCDs. From what I gathered, he says that with Zen 4 the latency was around 18 to 20 ns (nanoseconds), while in Zen 5 the very same measurement is 180 ns.

 