
Faster game loading times with faster CPU?


mackerel
Member since Mar 7, 2008
Below is some testing I just did, prompted by a discussion elsewhere about how much influence storage performance has on game load times. That got me thinking: is loading time currently influenced by CPU speed, and will it remain so until we get GPU offloading with DirectStorage 1.1? Before this point, I was of the opinion the CPU only needed to be fast enough to give the target fps in game. Now I'm thinking, if you really want to min/max your load times as well, a faster CPU might help. Has anyone else tested how CPUs affect game loading time?



Testing with the standalone FFXIV: Endwalker benchmark, as it provides a load time indication and lets you toggle the cache reset.

CPU: 11700K, either stock (up to 5 GHz turbo on 1 core) or turbo off (max clock of 3.6 GHz on all cores)
GPU: 2070
RAM: 4x16GB DDR4-2133
Settings: 1360x768, Max (this isn't a high fps test, but I'm temporarily using a very old TV on this system!)

CPU stock (125W long term power limit)

Samsung 980 Pro load: 10.875s, 166.1 fps
Cached load: 6.186s, 166.8 fps

CPU base (3.6 GHz)
Samsung 980 Pro load: 11.767s, 153.2 fps
Cached load: 7.105s, 155.5 fps

I think we can agree the Samsung 980 Pro is representative of a high-end PCIe 4.0 SSD, right? In either CPU state, the cached load (essentially a ramdisk) is ballpark 40% faster than from SSD. Comparing across the CPU states, loading off SSD was 8% slower with the downclocked CPU, and 14% slower from cache. So there are hints that even high-end SSD speed here is not practically unlimited like a ramdisk.
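To make the comparisons explicit, here's a quick sketch (just arithmetic over the timings posted above) of how those ballpark percentages fall out:

```python
# Load times in seconds, from the runs above
stock_ssd, stock_cache = 10.875, 6.186   # CPU stock
base_ssd, base_cache = 11.767, 7.105     # CPU at 3.6 GHz base

def pct_faster(fast, slow):
    """How much faster `fast` is than `slow`, as a percent of `slow`."""
    return (slow - fast) / slow * 100

def pct_slower(slow, fast):
    """How much slower `slow` is than `fast`, relative to `fast`."""
    return (slow / fast - 1) * 100

print(f"cache vs SSD, stock CPU:    {pct_faster(stock_cache, stock_ssd):.1f}% faster")
print(f"cache vs SSD, base CPU:     {pct_faster(base_cache, base_ssd):.1f}% faster")
print(f"SSD load, base vs stock:    {pct_slower(base_ssd, stock_ssd):.1f}% slower")
print(f"cached load, base vs stock: {pct_slower(base_cache, stock_cache):.1f}% slower")
```

This gives roughly 43% and 40% for the cache advantage, and roughly 8% and 15% for the downclock penalty, matching the ballpark figures above.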
 
I wouldn't think a CPU would affect game loading times much at all... even with a high-end NVMe M.2 drive like your 980 Pro, a stock CPU should easily manage the bandwidth needed for a simple game load. Does this hold true with any other games or just the FFXIV benchmark? I don't know of any other games that actually report load times offhand, so I'm not sure what else there is to test.

That aside, I wonder if it's a thing for just that title or more. I think stock vs. overclocked load times will vary dramatically by game, so I'm not sure we can paint with a broad brush that the CPU affects loading times quite yet.

So there are hints that even high-end SSD speed here is not practically unlimited like a ramdisk.
Will you say this a different way? I'm not sure I understand what you're saying here...

What do you mean? NVMe drives are limited, if by nothing else, by their specs. A stock 11900K should yield the same results as a 9900K or a 13900K (using the same everything else) for bandwidth and IOPS.



EDIT: I have a 980 Pro myself and a stock 12900K... when I get some time, I'll run the default preset to compare. I know it isn't apples to apples, but... it's something.
 
Does this hold true with any other games or just the FFXIV benchmark?
This is the only test I have done personally. In a quick look I did find https://www.techspot.com/review/2117-intel-vs-amd-game-loading-time/ which is a couple of years old now, or a generation or two back in CPUs. With "fast" CPUs they didn't see a huge difference, but a "slow" CPU can negatively impact things.

Will you say this a different way? I'm not sure I understand what you're saying here...
Load times are presumed to be affected by a combination of storage speeds and CPU speeds. If you have a ramdisk, your storage speed is practically unlimited compared to an SSD. That I saw a significant difference between SSD and ramdisk suggests faster storage can give a benefit. I have no idea if that is because of latency, bandwidth or both.
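As a rough illustration of what a "cached load" is measuring (a generic sketch, not the benchmark's actual mechanism): the first read of a file may come off the drive, while a repeat read is normally served from the OS page cache in RAM, which behaves like a ramdisk:

```python
import os
import tempfile
import time

# Write a 64 MB scratch file, then time two back-to-back reads.
# The second read is normally served from the OS page cache (RAM),
# which is effectively what a "cached load" or ramdisk measures.
SIZE = 64 * 1024 * 1024
path = os.path.join(tempfile.gettempdir(), "loadtime_scratch.bin")
with open(path, "wb") as f:
    f.write(os.urandom(SIZE))

def timed_read(p):
    start = time.perf_counter()
    with open(p, "rb") as f:
        n = len(f.read())
    return n, time.perf_counter() - start

n1, t1 = timed_read(path)  # may hit the drive (or cache, if just written)
n2, t2 = timed_read(path)  # almost certainly RAM (page cache)
print(f"read 1: {n1 / t1 / 1e9:.2f} GB/s, read 2: {n2 / t2 / 1e9:.2f} GB/s")
os.remove(path)
```

On most systems you'd have to drop the page cache (or reboot) between runs to get a truly cold first read, which is why the FFXIV benchmark's cache-reset toggle is handy.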

If DirectStorage 1.1 adoption largely bypasses the CPU limitation, and we're instead hit by a (faster) GPU limitation, then storage speed may matter even more.

What do you mean? NVMe drives are limited if by nothing else, it's specs. A stock 11900K should yield the same results as a 9900K as a 13900K (using the same everything else) for bandwidth and IOPS.
It was some years ago when I first got Optane to play with. Random read speeds on it scaled noticeably with CPU speed. My test system at the time was a Coffee Lake at 4.0 GHz stock vs 5.0 GHz OC.
 
So, only an AMD APU (and presumably Intel mobile SKUs) slowed this game's load times, it seems. A 'normal' processor does not? I wonder what a 5xxxG APU would do (two generations newer than the APU TechSpot tested).

Load times are presumed to be affected by a combination of storage speeds and CPU speeds.
I can agree with this. After seeing TechSpot's results, I'd have to wonder what other operations affect it. For example, the move to RAM, the loading of graphics assets, etc. The storage is only so fast, almost regardless, and then the CPU has to process the data before handing it off to everything else (does that make sense?). Game loading time is the culmination of all of that. So is it the storage that's limiting things, or the CPU moving things, that causes the differences? With a 'glass ceiling' on storage performance (for anything that isn't a RAMdisk), I'd bet the latter...

We've already seen some tests of game loading with different storage on the same CPUs and, IIRC, among 'fast' storage (read: PCIe/NVMe vs. SATA SSDs) there wasn't a significant difference in loading time across most devices.

EDIT: https://www.techspot.com/review/2116-storage-speed-game-loading/
Today's game engines simply aren't built to make full use of fast storage, and so far there's been little incentive to optimize for PCIe SSDs. Instead, the main limitation seems to be things like how quickly the CPU can decompress assets, and how quickly it can process a level before it's ready for action, rather than how fast it can read data off storage.
So with that, it seems like it's a CPU thing (the differences), for most titles...?
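The decompression point is easy to demonstrate in miniature. The sketch below (zlib standing in for whatever codec a given engine actually uses) inflates a compressible blob; the work is pure CPU and scales with clock speed, independent of how fast the "disk" is:

```python
import time
import zlib

# Fake "asset": ~32 MB of repetitive data, so it compresses well
raw = b"game asset data, textures, geometry... " * (32 * 1024 * 1024 // 39)
blob = zlib.compress(raw, level=6)  # what would sit on disk

start = time.perf_counter()
out = zlib.decompress(blob)         # pure CPU work, one core
elapsed = time.perf_counter() - start

assert out == raw
print(f"{len(blob) / 1e6:.1f} MB on 'disk' -> {len(out) / 1e6:.1f} MB in RAM "
      f"in {elapsed:.3f}s ({len(out) / elapsed / 1e9:.2f} GB/s decompressed)")
```

DirectStorage 1.1's pitch is exactly this step: move asset decompression onto the GPU so the CPU stops being the bottleneck.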

It was some years ago when I first got Optane to play with. Random read speeds on that scaled non-insignificantly with CPU speed. My test system at the time was a Coffee Lake at 4.0 GHz stock vs 5.0 GHz OC.
Optane is a bit different from a flash M.2 module for storage, is it not? That socket is also hardwired to the CPU too (right?). If that's true, I wonder if CPU-attached storage is better/worse than PCH-attached options, regardless of whether their bandwidth can be maxed on either.

So it feels safe to say that as long as you don't have an APU/mobile CPU, and have something halfway modern, everything else remaining the same, game load time differences wouldn't be significant(?). So while the thread title statement can be true, it takes a slow APU/mobile CPU to see any differences compared to modern desktop processors (so long as you aren't RAMdisking or Optaning (why isn't that dead yet?), which few seem to do). Yes?
 
Optane is a bit different from a flash M.2 module for storage, is it not? That socket is also hardwired to the CPU too (right?). If that's true, I wonder if CPU-attached storage is better/worse than PCH-attached options, regardless of whether their bandwidth can be maxed on either.
Optane SSDs are NVMe also, so no different from flash SSDs in that respect. I also noticed a measurable difference in performance depending if I connected it to CPU or chipset PCIe lanes.

Optaning (why isn't that dead yet?)
It is dead. I can't remember exactly when, but it was made official within the last year or so.


Back on my original question: I do feel that with a modern-ish CPU that's at least mid-range or higher, you're probably not going to see that much variation in loading times, but it doesn't seem to be a widely tested area. CPUs continue to get faster, though, so that leaves the door open to them having more impact, even disregarding DirectStorage 1.1 effects.
 
I have to be 100% honest... the only games that annoy me with load times are MS Flight Simulator (on its own tiny NVMe drive) and Super People. Otherwise, off a 980 Pro NVMe, I barely get to take a drink and get my mousepad settled. :)
 