
Work Unit PPD performance data 4070 Supers


the garynator

Chief folding_monkey
Joined
Nov 16, 2002
Location
Neenah, WI
Been gathering / parsing / analyzing data for a bit now in an effort to quantify how much of a difference things like GPU overclock, CPU horsepower, PCIE bandwidth, average gpu boost clocks, gpu memory speeds, etc affect PPD.

Still have a long way to go before I can confidently make any conclusions, but I feel I've gathered enough data now that it's worth sharing with you all. It has been a lot of work to get it this far since I've been doing most everything manually, so I apologize for some data not being included, but I've been refining my process and have significantly reduced the time it takes to parse out the data I'm interested in. I've currently been focusing on getting the average GPU clock speed as high as possible on the single Asus Dual 4070 Super in my main rig and comparing those stats. But the main goal is to quantify the impact of the things listed above so we can squeeze the most PPD out of each card for the least amount of money.
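To cut down on the manual parsing, I've been moving toward scripting it. Here's a rough sketch of the idea in Python: pull the frame-progress lines out of the FAHClient log and compute TPF per slot from the timestamps. The log line format below is what a typical v7-style client log looks like; adjust the regex to whatever your log actually emits.

```python
import re
from datetime import datetime, timedelta

# Frame-progress lines in a FAHClient log look roughly like:
#   06:33:58:WU01:FS00:0x22:Completed 500000 out of 2500000 steps (20%)
# (format assumed from typical client logs; adjust the regex to your log)
FRAME_RE = re.compile(
    r"^(\d{2}:\d{2}:\d{2}):WU\d+:(FS\d+):0x\w+:Completed \d+ out of \d+ steps \((\d+)%\)"
)

def tpf_per_slot(log_lines):
    """Return {slot: [time between frame lines, in seconds]}."""
    last_seen = {}   # slot -> datetime of previous frame line
    deltas = {}      # slot -> list of frame-time deltas
    for line in log_lines:
        m = FRAME_RE.match(line)
        if not m:
            continue
        stamp, slot = m.group(1), m.group(2)
        t = datetime.strptime(stamp, "%H:%M:%S")
        if slot in last_seen:
            dt = t - last_seen[slot]
            if dt.total_seconds() < 0:          # log rolled past midnight
                dt += timedelta(days=1)
            deltas.setdefault(slot, []).append(dt.total_seconds())
        last_seen[slot] = t
    return deltas

log = [
    "06:33:58:WU01:FS00:0x22:Completed 500000 out of 2500000 steps (20%)",
    "06:35:20:WU01:FS00:0x22:Completed 625000 out of 2500000 steps (25%)",
    "06:36:42:WU01:FS00:0x22:Completed 750000 out of 2500000 steps (30%)",
]
print(tpf_per_slot(log))   # {'FS00': [82.0, 82.0]}
```

From there it's easy to dump the deltas into the spreadsheet instead of eyeballing the log.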

The major variations @KeeperOfTheButch and I have observed between my system, his main system, and his dedicated folding rig are really what started this. We were seeing as much as a couple million PPD difference between our two rigs. I recently got hold of one of @KeeperOfTheButch 's log files from his main computer, which has helped tremendously, and I'll be getting one from his folding box tonight. What's nice is that he has 4 of the same 4070 Supers (2 in each of his rigs), so it's very easy to see the impact things like PCIe / CPU / heat have with the cards being consistent.

Link to the spreadsheet: https://docs.google.com/spreadsheet...ouid=115846655810947021577&rtpof=true&sd=true

System specs related to the data in the spreadsheet:
Gary Main:
Ryzen 7 2700X overclocked to 4 GHz
ASRock Taichi Ultimate motherboard, PCIe 3.0, card currently running at x8 (I have to move my RAID cards to a different machine to get x16, but that'll be when I have time)
Asus Dual 4070 Super

Kyle Main:
Ryzen 7 7800x3d
MSI MAG x670e Tomahawk wifi motherboard
2x MSI 4070 Super Gaming X Slims
- Card listed as FS01 under the memory column is in the PCIE 5.0 x16 slot (running at PCIE 4.0)
- Card listed as FS00 under the memory column is in the PCIE 4.0 x4 slot
Both cards were at stock memory speeds for the data I've entered so far. The top slot card (FS01) had some issues with core restarts due to extra heat and that has since been rectified.

I'll update this with our observations thus far and theories later tonight when I have time. Not sure if this data is useful to anyone, but figured I'd share what we have so far. If any of you have any data / insight / observations on this stuff, please chime in. :)
 
It will be interesting to see what you come up with. ;)


I wonder about that... you'd have to look, only, at the same WUs, right? Otherwise....
Yup, and there appear to be differences from WU to WU as well. The best way would be to get a single one and run it multiple times, but IDK if F@H allows that anymore... I know they used to back in the days of sneakernetting WUs to offline computers. Getting a large dataset to work with and comparing average WU PPD from the same projects does seem to work pretty well though.
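That averaging step is simple enough to script too. Here's a minimal sketch of what I mean: group per-WU PPD records by (project, slot) and average within each group to smooth out the WU-to-WU variability. The records below are made-up numbers, just to show the shape of the data.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-WU records: (project, slot, ppd) — e.g. pulled from log parsing
wus = [
    (18221, "FS00", 11.2e6), (18221, "FS00", 11.0e6),
    (18221, "FS01", 12.4e6), (18221, "FS01", 12.6e6),
    (18224, "FS00", 10.1e6), (18224, "FS01", 11.3e6),
]

def avg_ppd_by_project(records):
    """Average PPD per (project, slot), smoothing WU-to-WU variability."""
    buckets = defaultdict(list)
    for project, slot, ppd in records:
        buckets[(project, slot)].append(ppd)
    return {key: mean(vals) for key, vals in buckets.items()}

for (project, slot), avg in sorted(avg_ppd_by_project(wus).items()):
    print(f"project {project} {slot}: {avg / 1e6:.2f}M PPD")
```

With enough WUs per project the averages get stable enough to compare slots and rigs directly.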
 
Updated with data from @KeeperOfTheButch 's Dell Precision 5820 with 2 MSI 4070 Super Gaming X Slims in it. I believe they were running at stock clock speeds. I don't remember what CPU is in it at the moment, but it's an 8-core Xeon that benches about the same as a stock Ryzen 7 2700X. Both cards are in full-speed PCIe 3.0 x16 slots.
 
So........what are your theories/what are you taking away from this data dump? Were you able to run the same WU on all of the systems? With variability even in the same WUs...... I think that's the only way you'll get good data.
 
That's not what I meant, but you are right. (y)
Then say what you mean....

...how does this affect me? Is that what you are asking?

It's, hopefully, going to show the differences between these two machines using the same video card. It will show differences (if any) between PCIe, CPU, etc. (it's all in the first post). Some of this we know already / the info is out there, but not for these cards. That said, I don't think the cards make too much of a difference in a lot of that, but...
 
Never mind I have the avgs of my 2 gpus and my system is all at stock.
 
So, from what we're seeing, CPU and PCIe do make quite a big difference on these cards, and as more and more data comes in, the numbers are lining up surprisingly well even with the WU variability. I've been mostly focused on the 1822x projects as they have hardcoded checkpoints that hit extremely frequently on fast cards, since the checkpoint interval was set based on an i5 folding them: around every 80 seconds on 18221, 18223, and the other ~600k-point WUs, and every ~140-150 seconds on 18224. During these checkpoints the GPU goes idle, and I noticed PPD fluctuating by about 1 million on my machine every other percentage complete. The checkpoints only add about 1-2 seconds to TPF on the ~600k WUs and a few seconds on the 18224 WUs, which doesn't sound like much, but due to the loss of bonus points it drops PPD by about 1 million on checkpoint frames, which averages out to around half of that once you include the non-checkpoint percentages.
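For anyone curious how a couple seconds of TPF turns into that kind of PPD swing: credit scales with the quick-return bonus, so PPD falls faster than linearly as frame time grows. Here's a rough sketch using the published QRB formula; the base credit, deadline, and k value below are made up for illustration, not the real 18224 project constants.

```python
from math import sqrt

def ppd(tpf_s, base_credit, deadline_days, k=0.75):
    """Estimated PPD under the quick-return bonus (QRB) formula:
       credit = base * max(1, sqrt(k * deadline / wu_time)); PPD = credit * WUs/day.
       The constants here are illustrative, not real project values."""
    wu_time = tpf_s * 100                       # seconds for 100 frames
    bonus = max(1.0, sqrt(k * deadline_days * 86400 / wu_time))
    credit = base_credit * bonus
    return credit * 86400 / wu_time             # credit per WU * WUs per day

base, deadline = 120_000, 2.0                   # made-up project constants
fast = ppd(140, base, deadline)                 # normal frame pace
slow = ppd(145, base, deadline)                 # +5 s/frame from checkpoint stalls
print(f"{fast / 1e6:.2f}M vs {slow / 1e6:.2f}M PPD ({(fast - slow) / 1e6:.2f}M lost)")
```

Since credit goes as 1/sqrt(time) and throughput as 1/time, PPD scales roughly as time^-1.5, which is why small TPF hits cost a disproportionate amount of points.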

On my Ryzen 2700X @ 4 GHz, during checkpoints on 18224 WUs, my CPU spikes to 100% across all cores, and around 60% on the ~600k-point WUs. Kyle's main rig was doing significantly higher PPD and he was noticing less of a fluctuation, and his 7800X3D wasn't spiking as high. On the Dell Precision 5820 with the old 4-core CPU, the CPU usage spiked to 100% for much longer on 18224 WUs and hit 100% even on the smaller WUs. So we upgraded the CPU to an 8-core Xeon W-2145 and PPD increased quite a bit. The spikes on 18224 WUs during checkpoints are still more significant than on my Ryzen 2700X, but are about half as long as with the old CPU. For reference, here's the CPU spike on the 5820 now:
[Attachment: CPU usage graph on the 5820 during a checkpoint]

The PPD difference, from what I remember, was about 4 million vs Kyle's main machine. Now it's around a 2 million difference.

This is what the CPU spikes look like on my machine for comparison:
[Attachment: CPU usage spikes on my Ryzen 2700X machine]

Another theory I have is that GPU clock speed matters most for PPD, whereas GPU memory clock doesn't. And with the way newer cards boost, TDP and temperature are the limiting factors on average boost clock. So I've been trying to maximize my card's average GPU clock speed. Thanks to @HayesK making a post about how his cards all have the memory underclocked, I decided to test my theory. Seeing as my card was running at max TDP pretty consistently, I first tested whether underclocking the memory would lower the average TDP, and it did. It also increased average GPU clock speed. So I've been slowly increasing the GPU overclock to find max stable, and I believe I got pretty close: at +190 GPU and -300 memory, when I turned my case fans down to quiet and GPU fans to stock overnight the other day, I got one core restart in folding. I've since cranked the case fans back up and GPU fans to 90% and haven't had any core restarts for a few days now.

I bumped the memory back up to -250 and am currently running a batch for a few days to see if my average GPU clock speed decreases. It probably doesn't seem like this would make much difference in the grand scheme of things, but if we can net a few hundred thousand PPD across 5+ cards, it adds up pretty quick.
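For tracking average GPU clock, I've just been sampling the card over a full WU and averaging afterward. Here's a sketch of the averaging step, assuming you've logged CSV rows in the shape that something like `nvidia-smi --query-gpu=clocks.sm,power.draw --format=csv,noheader,nounits -l 5` produces (the values below are made up; log your own card):

```python
import csv
import io
from statistics import mean

# Sampled rows: SM clock (MHz), power draw (W) — made-up example data
sample = """\
2760, 218.4
2775, 219.1
2745, 220.0
2790, 217.2
"""

rows = list(csv.reader(io.StringIO(sample)))
clocks = [int(r[0]) for r in rows]
power = [float(r[1]) for r in rows]
print(f"avg clock {mean(clocks):.0f} MHz, avg power {mean(power):.1f} W")
```

Comparing these averages between overclock/memory settings over a few days of WUs is what tells you whether a tweak actually moved the average boost clock.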


I've also noticed, based on preliminary TPF analysis, that CPU / PCIe affects TPF on non-checkpoint frames quite a bit too. I originally expected it would only matter at checkpoints, but that appears not to be the case. I still have to do some work to quantify the differences vs checkpoints and whatnot, but it seems significant enough to warrant more review:
[Attachment: TPF comparison chart across the three cards/slots]
FS01 is @KeeperOfTheButch 's 4070 super in his PCIe 4.0 x16 slot, FS00 is in his x4 slot. Asus is my card in a PCIe 3.0 x8 slot at various overclocks.
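To separate checkpoint frames from normal frames in the TPF data, a simple threshold off the median works well enough, since on the 1822x projects the checkpoint frames run a couple seconds long. A minimal sketch, with made-up frame times:

```python
from statistics import mean, median

# Hypothetical per-frame times (seconds) for one WU; every other frame or so
# hits a hardcoded checkpoint and runs a couple seconds longer.
frame_times = [80, 82, 80, 83, 81, 82, 80, 83, 80, 82]

def split_checkpoint_frames(times, margin=1.0):
    """Classify frames as checkpoint vs. normal relative to the median TPF."""
    cutoff = median(times) + margin
    normal = [t for t in times if t <= cutoff]
    checkpoint = [t for t in times if t > cutoff]
    return normal, checkpoint

normal, checkpoint = split_checkpoint_frames(frame_times)
print(f"normal TPF {mean(normal):.1f}s, checkpoint TPF {mean(checkpoint):.1f}s")
```

Once the frames are split, you can compare the non-checkpoint averages across slots directly, which is what the chart above is getting at.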

Where does my F@H rig fall into all this?
I don't think it will matter for your cards as the PPD isn't impacted nearly as much.

Yeah, unless you have 4070s (what this data is about), this really doesn't matter to anyone else. Truth be told, I'm not sure what they're going to be able to walk away with from the data... but tagging along for the ride. :)

I think this will matter to anyone running current gen cards / cards that do like 10mil ppd+ as well as next gen cards. Here's my supporting data:
[Attachment: spreadsheet screenshot of PPD averages per system and project]

Results are still early, but if you notice, there's a 2 million PPD difference between Kyle's 7800x3d system and the Dell 5820 with the same cards. The interesting thing is, the 2 million ppd difference is reflected in the all units average as well as project specific (still a small dataset but will be interesting to see how it holds up with more data).

In Kyle's 7800x3d system, there is a 1-1.5m ppd difference between the card in his PCIe x16 slot and the one in his x4 slot. The Dell 5820 averages between both cards in full PCIe 3.0 x16 slots are much closer together. So this seems to imply that PCIe has a significant impact when running a fast CPU and fast card. My system seems to be right in the middle even though my card is currently running at PCIe 3.0 x8 (equivalent to Kyle's main system running PCIe 4.0 x4 on his lower performing card). So I'm thinking that CPU is more important than PCIe speed, but with the new cards on the horizon, I think you'll need at least PCIe 3.0 x16 slot / PCIe 4.0 x8 slot to avoid big hits to PPD.
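The slot equivalence above is just bandwidth math: PCIe doubles per-lane throughput each generation, so 3.0 x8 and 4.0 x4 land at essentially the same link speed. A quick sketch using the approximate usable per-lane bandwidth after encoding overhead:

```python
# Approximate usable bandwidth per lane (GB/s), after encoding overhead
per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bw(gen, lanes):
    """Rough usable bandwidth of a PCIe link in GB/s."""
    return per_lane[gen] * lanes

for gen, lanes in [("3.0", 16), ("3.0", 8), ("4.0", 4), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bw(gen, lanes):.1f} GB/s")
```

So my 3.0 x8 (~7.9 GB/s) really is the same pipe as Kyle's 4.0 x4, which is why those two configurations are comparable, and why 3.0 x16 / 4.0 x8 (~15.8 GB/s) is the next meaningful step up.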

The main takeaway from this is that there is a significant amount of PPD left on the table when running current gen cards in older systems. As we prepare for RTX 5xxx cards, this matters quite a bit when acquiring the base systems that will eventually be upgraded to them.

Hopefully I summarized everything well enough to convey what we're trying to accomplish. There are a lot of things I feel like I'm leaving out, but this is already becoming a novel lol.
 