
I feel like my build isn't performing at an optimal level - Advice appreciated!


EliteACEz · Registered · Joined Sep 10, 2014 · Sydney, Australia

Hi All,

First time posting here. Long time lurker.

I'll cut straight to it. I don't feel like my build is performing as well as it should be when it comes to gaming. First I'll share specs, some of my settings (please let me know if I can provide more detail), and then some of my benchmark results.

My build's specs:

Monitors:
  • ASUS VG278Q 27" FHD 144Hz G-Sync Compatible Gaming Monitor(1920x1080@144Hz) [Primary Monitor]
  • ASUS VG248 24" FHD 144Hz (not G-Sync Compatible) (1920x1080@144Hz) [Secondary Monitor, note I primarily game on my primary monitor in exclusive fullscreen for most games]


  • CPU: Ryzen 7 3700X + Wraith Prism Cooler
  • RAM: Kingston HyperX Predator RGB 16GB (2x 8GB) DDR4 4000MHz Memory
  • Motherboard: MSI B450 TOMAHAWK MAX AM4 ATX
  • GPU: ASUS GeForce RTX 2080 Ti ROG Strix 11GB Video Card
  • PSU: EVGA SuperNOVA 750W G2 80+ Gold Modular Power Supply
  • SSD: Samsung 850 EVO 250GB 2.5" SATA III 6Gb/s 3D V-NAND SSD MZ-75E250 (OS installed here, not much else)
  • SSD: 2 more SSDs I've picked up since I originally did the build, similar to the Samsung one; I have some games installed on these
  • HDD: Seagate ST2000DM006 2TB BarraCuda 3.5" 7200RPM SATA3 (most of my games installed here)
  • HDD: WD WD4005FZBX 4TB Black 3.5" 7200RPM SATA3 (got this more recently, has some games installed on it)
  • Case: NZXT Phantom 630 Windowed Edition Ultra Tower Case - Matte Black


Some of my settings

1. BIOS and RAM
Firstly, I was running BIOS version 3.50 on my TOMAHAWK MAX until very recently. I had my RAM set to use the 3600MHz XMP profile (running it on the other XMP profile of 4000MHz caused Battlefield V and Need For Speed Heat to crash, but apparently that's common with high-frequency RAM and those games).

A few days ago I updated my BIOS to version 3.60 and the 3600MHz XMP profile appears to have been replaced: both of my XMP profiles now show the same 4000MHz settings. So I just selected one of the 4000MHz XMP profiles and haven't had any issues; even Need For Speed Heat, which usually crashes within 5-10 minutes, ran for over an hour (for good measure) with no crashes.


2. Nvidia Control Panel & Drivers
Also, after my BIOS update the other day I gave the "Display Driver Uninstaller" tool a go: completely removed my Nvidia drivers and software, then downloaded and installed them fresh. It hasn't made a noticeable difference, but I thought it was worth a try.

A month or so back I came across this article and followed it: https://blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/14/ I didn't know about how the whole G-Sync stuff works, or Vsync, or what my settings should be. I definitely noticed a huge difference in performance in games when following this and applying the settings in Nvidia Control Panel as well as using RTSS.

My Nvidia Control Panel settings are as follows:
high-end-Nvidia-Gsync-settings.png


3. Windows 10 Settings & Software
I run Windows 10 Professional x64, I have disabled "Superfetch" in the services and I have Windows 10's Gaming Mode turned off.
I have Process Lasso Pro and use the default "Bitsum Highest Performance" power plan. I also use its RAM SmartTrim feature to clear the standby memory past a certain point.
I have MSI Afterburner running with my own Fan speed curve so it ramps up a bit faster than the default, although temperature with my GPU has never been an issue (I've had the GPU since January).
I also have RivaTunerStatisticsServer running with a global framerate limit of 136. For some games I have created a specific profile where I framerate limit them further (e.g. Ghost Recon Wildlands, which runs nicely capped at 80 FPS running @1920x1080 all settings maxed, and 1.20x resolution scaling)
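
If it helps to see where those numbers come from, here's a rough back-of-envelope sketch of the frame-time budget behind the caps (my own working, not anything official; the offset of 8 FPS below refresh is just the value I happen to use, the usual G-Sync advice being only "a few FPS below refresh"):

```python
# Rough frame-time maths behind the RTSS caps (my own back-of-envelope sketch,
# not anything official). Capping a few FPS below the refresh rate keeps the
# frame rate from hitting the V-Sync ceiling; the offset of 8 is just my choice.

REFRESH_HZ = 144
CAP_OFFSET = 8  # 144 - 8 = 136, my global RTSS cap

def frame_time_ms(fps: float) -> float:
    """Frame-time budget in milliseconds for a given FPS cap."""
    return 1000.0 / fps

global_cap = REFRESH_HZ - CAP_OFFSET
print(f"Global cap:    {global_cap} fps -> {frame_time_ms(global_cap):.2f} ms per frame")
print(f"Wildlands cap: 80 fps -> {frame_time_ms(80):.2f} ms per frame")
# Global cap:    136 fps -> 7.35 ms per frame
# Wildlands cap: 80 fps -> 12.50 ms per frame
```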


4. DxDiag
Attached to this post.

I can't think of anything else right now that might be relevant to determining any performance impacts. If there are other details I can provide, please let me know (and how I can retrieve/find them).


Benchmarks & Performance Generally

1. Cinebench Release 20
  • CPU (Multi-core) score: 4470
  • CPU (Single core) score: 490
  • MP Ratio: 9.13

2. Grand Theft Auto V
I've experimented with a mix of all the regular display settings maxed except MSAA set to 4x, and the Advanced Graphics all off (extended shadows, draw distance, etc.).
With those settings I get around 100+ fps most of the time, with drops as low as 70-80 during intense scenes in the city, explosions, etc.

Recently I've tried playing with the Nvidia GeForce optimised settings, which basically max everything. I can maintain around 100+ fps generally, but I've noticed it can drop as low as 57 fps (haven't seen it drop any lower than that), particularly during foggy weather where the FPS really takes a beating. Otherwise it tends to average around 70-110 fps.


3. Dota 2
I'm throwing in Dota 2 for good measure as it's the opposite end of the spectrum from GTA V. Dota 2 sits at my RTSS framerate limit of 136 FPS 99% of the time, with maxed-out settings at 1080p/144Hz. Some big team fights might drop it to around 130, but that's super rare and it takes A LOT of particle effects for there to be any drop at all.


4. Ghost Recon Wildlands
This is one of the games whose in-game benchmark tool I've probably run the most. Like I mentioned previously, I'm pretty happy with how this game performs after a lot of testing and tweaking, particularly with a framerate cap set in RTSS for a consistently stable and high FPS.

My last benchmark results (with framerate limit in RTSS of 80 FPS, and all settings maxed):
FPS:
  • Average: 73.79
  • Min: 64.87
  • Max: 78.14

GPU peak temperature: 62 degrees Celsius

GPU usage:
  • Average: 91.4%
  • Min: 82.0%
  • Max: 95.0%

CPU usage:
  • Average: 31.3%
  • Min: 24.5%
  • Max: 46.0%

RAM usage:
  • Average: 3.3 GB
  • Min: 3.2 GB
  • Max: 3.4 GB


In-Game Settings
  • Resolution: 1920 x 1080
  • Resolution Scaling: 1.20
  • Window Mode: FullScreen
  • Refresh Rate: 144
  • VSYNC: Off
  • Framerate Limit: 120
  • Graphics Preset: Ultra
  • Antialiasing Mode: TEMPORAL AA
  • Ambient Occlusion: HBAO+
  • Draw Distance: Very High
  • Level of Detail: Ultra
  • Texture Quality: Ultra
  • Anisotropic Filtering: 16
  • Shadow Quality: Ultra
  • Terrain Quality: Ultra
  • Vegetation Quality: Ultra
  • Turf Effects: On
  • Motion Blur: On
  • Iron Sights DOF: On
  • High Quality DOF: On
  • Bloom: On
  • Godrays: Enhanced
  • Sub Surface Scattering: On
  • Lens Flare: On
  • Long Range Shadows: Ultra


5. Far Cry 5
This benchmark result is from just after I got the GPU. At the time I didn't know about the optimal G-Sync settings etc., so in-game I had V-Sync off and the framerate lock off. All settings on Ultra, 1080p, 144Hz.

FPS:
  • Average: 107
  • Min: 82
  • Max: 144
  • Frames rendered: 6316


So I guess in conclusion, I'm just after feedback, advice, and suggestions on whether my PC is performing as it should be or whether there is room for improvement. And if there is room for improvement, whether that means changes to RAM timings or something else, what do I do next?

Thank you very much in advance.
 

Attachments

  • DxDiag.txt
    108.8 KB
You do seem to be underperforming. I've linked an article from KitGuru below which covers a couple of the games you mentioned, and you are way below the FPS they got. The main difference appears to be the CPU: they used an 8700K, but I would have said the 3700X is better than that, so I don't know why it would bottleneck.

https://www.kitguru.net/components/...a-rtx-2080-ti-founders-edition-11gb-review/9/

They got an average of 130 FPS in Wildlands and an average of 140 in Far Cry 5, both at 1080p. They actually get better FPS than you at 1440p.

Are you running the latest drivers for the card etc?






 
Thanks for the reply! Yes, I have the latest drivers. Part of the reason I gave the Display Driver Uninstaller tool a go was that I thought maybe I was having a driver issue, but there was no noticeable difference after I wiped the drivers and did a fresh install of the latest ones.

One difference I noticed from the article you linked: they're using the "Very High" preset, whereas in Wildlands I'm using the "Ultra" preset. Maybe that accounts at least a little for the difference in frames.
 
Hrm, a 2080 Ti at 1080p. I'm not sure, but I would guess a lot of situations are going to be CPU limited. You will see things like 30% CPU usage in a lot of games because they do not fully utilize multi-threading, so even if they can split some things off to other threads, performance will ultimately come down to the maximum speed of a handful of threads.

Memory: above 3600, by default (some CPUs can push this a bit), the memory and Infinity Fabric frequencies become decoupled. This slows down the passing of information between the different cores and cache on the CPU itself. Not sure what happened with the BIOS; most memory only has one XMP profile as far as I know, so I'm guessing it was a default profile in the BIOS itself. Anyhow, I would suggest trying to run your memory at 3600 instead. Do you remember what the primary timings were at 3600? (You can usually leave the rest on auto.)
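
Roughly speaking, the coupling looks like this (a ballpark sketch; the ~1800MHz fabric ceiling is typical for these chips, but every sample differs):

```python
# Ballpark sketch of how memory clock and Infinity Fabric relate on Ryzen 3000.
# The ~1800 MHz fabric ceiling below is typical, not guaranteed; some chips do ~1900.

TYPICAL_FCLK_CEILING = 1800  # MHz

def fabric_clocks(data_rate_mts: int) -> dict:
    """Return approximate MCLK/FCLK/UCLK (MHz) for a DDR4 data rate in MT/s."""
    mclk = data_rate_mts // 2
    if mclk <= TYPICAL_FCLK_CEILING:
        # Coupled (1:1) mode: fabric and memory controller run in step with memory.
        return {"MCLK": mclk, "FCLK": mclk, "UCLK": mclk, "mode": "1:1"}
    # Past the ceiling the board typically falls back to 2:1 (UCLK = MCLK / 2),
    # which adds latency even though raw bandwidth goes up.
    return {"MCLK": mclk, "FCLK": TYPICAL_FCLK_CEILING, "UCLK": mclk // 2, "mode": "2:1"}

print(fabric_clocks(3600))  # {'MCLK': 1800, 'FCLK': 1800, 'UCLK': 1800, 'mode': '1:1'}
print(fabric_clocks(4000))  # {'MCLK': 2000, 'FCLK': 1800, 'UCLK': 1000, 'mode': '2:1'}
```

So at 4000 the fabric would typically fall out of step with the memory, which is why 3600 tends to be the recommendation.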

What clock speeds is the CPU reaching while gaming? Using the stock cooler may hold single-core boost clocks back a bit, but usually that is more associated with multi-core loads. Either way, it's worth seeing what you're boosting to, and at what temperatures, while gaming, to see if there is any headroom there.
 
Memory: above 3600, by default (some CPUs can push this a bit), the memory and Infinity Fabric frequencies become decoupled. This slows down the passing of information between the different cores and cache on the CPU itself. Not sure what happened with the BIOS; most memory only has one XMP profile as far as I know, so I'm guessing it was a default profile in the BIOS itself. Anyhow, I would suggest trying to run your memory at 3600 instead. Do you remember what the primary timings were at 3600? (You can usually leave the rest on auto.)
I believe this is my exact RAM: https://www.kingston.com/dataSheets/HX440C19PB3AK2_16.pdf
I had the XMP Profile #2 running prior to my BIOS update. I'll put those timings in and see how I go.


What clock speeds is the CPU reaching while gaming? Using the stock cooler may hold single-core boost clocks back a bit, but usually that is more associated with multi-core loads. Either way, it's worth seeing what you're boosting to, and at what temperatures, while gaming, to see if there is any headroom there.
I'll do some tests and report back.

Burr45 said:
EliteACEz said: "Maybe that accounts at least a little for the difference in frames."
The last sentence puzzled me.

I really don't know how much it would account for. Is it really that big a jump from Very High to Ultra in terms of stress on the hardware?
 
Memory: over 3600 by default (some CPUs can push this a a bit) memory and infinity fabric frequencies become decoupled. This slows down the passing of information between different cores and cache on the CPU itself. Not sure what happened with the BIOS, most memory only has one XMP profile as far as I know, I'm guessing it was a default profile on the bios itself. Anyhow, I would suggest trying to run your memory at 3600 instead. Do you remember what the primary timings were at 3600 (you can usually leave the rest on auto).

I completely forgot about high-frequency memory and Ryzen causing issues. I know Gamers Nexus did a good article on it a while back.

https://www.gamersnexus.net/guides/3508-ryzen-3000-memory-benchmark-best-ram-fclk-uclock-mclock

Bit of a long read, but at the end they recommend using 3600MHz unless you are going to manually change timings and what not. However I’m not sure how much of a performance hit you get by running 4000MHz.





 
I completely forgot about high-frequency memory and Ryzen causing issues. I know Gamers Nexus did a good article on it a while back.

https://www.gamersnexus.net/guides/3508-ryzen-3000-memory-benchmark-best-ram-fclk-uclock-mclock

Bit of a long read, but at the end they recommend using 3600MHz unless you are going to manually change timings and what not. However I’m not sure how much of a performance hit you get by running 4000MHz.

I saw a Linus Tech Tips video on this earlier that said the same: 3600MHz seems to be the sweet spot for Ryzen 3000s.
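
Out of curiosity I did the quick first-word latency maths on my kit's two profiles (just the standard CL-to-nanoseconds conversion, so treat it as ballpark):

```python
# Quick first-word latency comparison of my kit's two XMP profiles.
# Standard back-of-envelope formula: latency_ns = CAS * 2000 / data_rate (MT/s).

def cas_latency_ns(cas: int, data_rate_mts: int) -> float:
    return cas * 2000 / data_rate_mts

profiles = {
    "DDR4-4000 CL19 (XMP #1)": (19, 4000),
    "DDR4-3600 CL17 (XMP #2)": (17, 3600),
}

for name, (cas, rate) in profiles.items():
    print(f"{name}: {cas_latency_ns(cas, rate):.2f} ns")
# DDR4-4000 CL19 (XMP #1): 9.50 ns
# DDR4-3600 CL17 (XMP #2): 9.44 ns
```

So on paper, dropping back to 3600 costs next to nothing in raw latency, and it keeps the Infinity Fabric in 1:1 mode.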

I tried to replicate the XMP Profile #2: DDR4-3600 CL17-18-18 @1.35V that I ran prior to my BIOS update but it failed to POST.

I used the mobo's Mem Test feature and selected a different set of 3600 timings, but I'm a bit out of my depth here. (I did manage to POST, though.)
CPU and RAM settings.png


Are these timings suitable for my PC? Just quickly calculated the timings in DRAM Calc.
DRAM-Calc-3600.png

UPDATE: I've entered the DRAM Calc ram timings and been able to POST. I hope I've done it right. (also set to recommended voltages)
ram_1.jpg
ram_2.jpg
ram_3.jpg
 
The calculator can work for some people, but it's not cut and dried. Most posters here view it as overrated. You're using the settings under "Samsung B-die". The first step to correctly using the calculator is determining the IC that your memory uses, which can be done with the program Thaiphoon Burner.

It seems odd that the PC would post on the tighter timings but not the XMP timings listed in the link. This isn't really my area of expertise in terms of suggesting specific timings, but I would suggest first restoring BIOS defaults, then plugging in only the XMP timings. You might try 18-18-18 or 16-18-18 as well.

You'll also need to verify that infinity fabric is in 1:1 mode.
 
The calculator can work for some people, but it's not cut and dried. Most posters here view it as overrated. You're using the settings under "Samsung B-die". The first step to correctly using the calculator is determining the IC that your memory uses, which can be done with the program Thaiphoon Burner.

It seems odd that the PC would post on the tighter timings but not the XMP timings listed in the link. This isn't really my area of expertise in terms of suggesting specific timings, but I would suggest first restoring BIOS defaults, then plugging in only the XMP timings. You might try 18-18-18 or 16-18-18 as well.

You'll also need to verify that infinity fabric is in 1:1 mode.

I confirmed with Thaiphoon Burner that my RAM is Samsung B-die before I used the DRAM Calculator. Noted, though; I thought the DRAM Calculator was more widely used/accepted than it perhaps is.

If I have any issues with the current timings I'll try some different ones. My BIOS was reset when I did the update from 3.50 to 3.60 so the RAM is the only thing I've really tweaked other than disabling "eco-friendly" modes.

I do believe I have set the infinity fabric in BIOS to 1:1, there was a specific sub-menu in the BIOS for that when I was tweaking.
 
I've just run a benchmark of Ghost Recon: Breakpoint with the timings from DRAM Calc. That looks pretty good overall on "Ultimate"; good frames. The CPU was peaking at around 4100MHz but mostly hovering between 3900 and 4000. The CPU EDC was close to 90 A; I have no idea what that means or whether it's an issue.

Ghost-Recon-Breakpoint-Benchmark.png
 
I've just run a benchmark of Ghost Recon: Breakpoint with the timings from DRAM Calc. That looks pretty good overall on "Ultimate"; good frames. The CPU was peaking at around 4100MHz but mostly hovering between 3900 and 4000. The CPU EDC was close to 90 A; I have no idea what that means or whether it's an issue.

View attachment 210114

I think it was mentioned earlier, and what you have just said may have confirmed it: the stock cooler may be limiting the CPU speed. The max boost speed for that CPU is 4400MHz, and I would expect it to boost near that when gaming, as games don't tax all of the cores at once. I wouldn't expect it to boost that high when running a CPU benchmark.
 
Your CB R20 score of 4470 seems a little low to me. I have the same CPU as you, and at stock settings I get a little over 5000. I note that you are using the stock Wraith Prism cooler. Ryzen performance is very sensitive to temps, and turbo speed will drop to stay within the power envelope. I am cooling my 3700X with a 240mm AIO water cooler; with Ryzens, when the cooling is better, the cores can turbo higher for longer. Once your Tctl/Tdie temps exceed about 70C, the turbo speed begins to decline. You can watch this happening during stress testing; I would suggest installing HWiNFO64, where you can track it visually. Tctl/Tdie temps are not the only factor, but they are the one we have control over. Also, what are your MOSFET temps hitting? IntelBurnTest will get your temps up there pretty quickly so you can check this out.

Edit: I misspoke. My Cinebench R20 score is about 4766 at stock, but about 300 points higher when OC'd to 4.25GHz.
 
Yeah I'm not clear if that Ryzen Master pic is taken under load or not. We need to know what the CPU is boosting to under the actual gaming load. HWiNFO64 is good for that, as trents suggested, because it provides min, max and average values. With a lightly threaded load like that you really want to be boosting to nearly 4.4GHz.

I'm surprised that the stock cooler is doing so poorly, so I think we should evaluate some basics before you upgrade your cooler. I would expect it to choke on a fully multi-threaded load like Cinebench, but not so much on a game that seems to be loading 4 or so threads. What is your case airflow situation like? How many fans do you have, in what locations and orientations? You have a high-wattage GPU that can really heat the air up. It sounds like you have a lot of 2.5" drives in the front; those and the drive cages are not helping either (good news is it looks like they are modular/removable). The easiest test is to compare temperatures with and without the side panel in place to see if they improve dramatically. And of course this sounds silly, but are you sure that the cooler is mounted properly, all of the screws tightened evenly (using a cross pattern, like lug nuts)? It may benefit to re-paste the cooler as well, since the factory paste application can dry out over time on the shelf, but the cooler itself will likely need to be replaced to get the most out of your system.

EDC being close to 90 A is fine; here is an explanation of what all of those numbers mean: https://www.gamersnexus.net/guides/3491-explaining-precision-boost-overdrive-benchmarks-auto-oc
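
For reference, here's a quick sanity check against the stock package limits commonly quoted for the 3700X (treat the exact figures as ballpark; boards can report and enforce them slightly differently). Sitting at or near the EDC limit in a lightly threaded load is normal and not a problem by itself:

```python
# Sanity check against the stock package limits commonly quoted for the 3700X.
# Treat these as ballpark figures; boards can report and enforce them differently.

STOCK_LIMITS_3700X = {"PPT_W": 88, "TDC_A": 60, "EDC_A": 90}

def edc_used_pct(measured_edc_a: float) -> float:
    """How much of the stock EDC limit the reported current represents."""
    return 100.0 * measured_edc_a / STOCK_LIMITS_3700X["EDC_A"]

print(f"~90 A reported -> ~{edc_used_pct(90):.0f}% of the stock EDC limit")
# ~90 A reported -> ~100% of the stock EDC limit
```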

Seeing the GPU at 89% in the Ghost Recon benchmark indicates to me that we're CPU limited. With a 3700X and a 2080 Ti that's probably going to be the case in any 1080p situation, but you can try to max out your settings to improve it.
 
I would recommend getting some baseline numbers.

BIOS all auto, memory at 3600MHz.
Nvidia Control Panel all default.

Run Cinebench and record its temps and max boost in single-core mode and in multi-core mode.
Then run whatever canned GPU benchmark you like and see how it fares.

After that, switch to whatever overclocks and settings you like and repeat. That will give us three simple-to-track data points to see how things change.

My hunch is that 1080p is bottlenecking on your CPU, and your CPU is bottlenecked by the stock cooler.
 
Hi All,

Many thanks for the replies; I've been doing a bunch of testing and tweaking today. I eventually settled on restoring optimised defaults and applying the suggestions made here.
That has given me the best results so far of all the various things I've tried.

I've started using the Hitman 2 benchmark as well as Cinebench. Hitman 2 is quite demanding, and after benchmarking it several times today the best result was an average of 65 FPS on Ultra settings, DX12, 1080p. It was actually smooth as butter: in one particular scene of the benchmark there's a large car crash, and with all my previous settings the PC would lock up for a moment as that happened. With the latest settings from the URL above, the FPS didn't drop at all during the car crash scene. The CPU now peaks at around 4,200MHz during the Hitman 2 benchmark (with the temperature never exceeding 65 degrees Celsius). Previously it would barely reach 4,100MHz and then drop to 4,000MHz or below.

As a few of you have mentioned, temperature could be an issue. From all the testing and benchmarking today, though, neither the CPU nor the GPU has been running hot at all. My case generally seems to have okay airflow, and I have double-checked that the Wraith Prism cooler is properly mounted, so that all looks good. As far as I can tell, temperature isn't an issue here.

Regarding the comments about bottlenecking or performance loss because of the 1080p resolution: is it possible for something as simple as using Nvidia's resolution downscaling in games to reduce the bottlenecking? If I'm understanding correctly (please correct me if I'm wrong), an 'increased' resolution would force the CPU to handle more of the processing?

Short of buying a liquid cooling solution to potentially push the CPU further, are there any other tweaks that might yield improved performance? What's the stance on SMT; is there any benefit to having it off? And in terms of CPU affinity, would having games run on select cores rather than all cores make any difference here?

My other question is: if everything is currently running fairly okay, is it worth reducing the voltages a bit, even though my temps look okay? My current settings are as follows:
current-Ryzen-master-settings-16-5-2020.png
 
Yeah I'm not clear if that Ryzen Master pic is taken under load or not. We need to know what the CPU is boosting to under the actual gaming load. HWiNFO64 is good for that, as trents suggested, because it provides min, max and average values. With a lightly threaded load like that you really want to be boosting to nearly 4.4GHz.

Running Cinebench multi-core currently returns a score of around 4565.

Temps during load are as follows:
cinebench-results-HWiNFO64.png
cinebench-results-HWiNFO64-2.png


Here's the Temps & Cores during Hitman 2 benchmark:
hitman-benchmark-1.png
hitman-benchmark-2.png
hitman-benchmark-3.png

What is your case airflow situation like? How many fans do you have, in what locations and orientations? You have a high-wattage GPU that can really heat the air up. It sounds like you have a lot of 2.5" drives in the front; those and the drive cages are not helping either (good news is it looks like they are modular/removable).
I have a single large fan at the front of the case pulling air in and a large fan at the top of the case pulling air out. Surprisingly that's it (my previous PC had a few more fans than that, plus liquid cooling (Hi 120 or something?) for the CPU), yet it seems to remain fairly cool even during a long gaming session.

And of course this sounds silly, but are you sure that the cooler is mounted properly, all of the screws tightened evenly (using a cross pattern, like lug nuts)? It may benefit to re-paste the cooler as well, since the factory paste application can dry out over time on the shelf, but the cooler itself will likely need to be replaced to get the most out of your system.
I have double-checked that the Wraith Prism is firmly mounted; the screws are nice and tight, done in a cross pattern. How long would it typically take for paste to dry out? I built this PC in early January this year, so I'm not sure if it's necessary to re-paste just yet.
 

Regarding the comments about bottlenecking or performance loss because of the 1080p resolution: is it possible for something as simple as using Nvidia's resolution downscaling in games to reduce the bottlenecking? If I'm understanding correctly (please correct me if I'm wrong), an 'increased' resolution would force the CPU to handle more of the processing?
A higher resolution will offload more work to the GPU. I was able to maintain similar frame rates on my current rig when I upgraded from 1080p to 1440p, since my GPU still had headroom.
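
To put rough numbers on it (pixel count is only a crude proxy for GPU load, so treat this as illustrative):

```python
# Ballpark pixel counts, as a crude proxy for how much work gets pushed onto the GPU.
# This ignores everything else that scales with resolution, so it's only illustrative.

def pixels(width: int, height: int, scale: float = 1.0) -> int:
    return int(width * scale) * int(height * scale)

base   = pixels(1920, 1080)              # native 1080p
scaled = pixels(1920, 1080, scale=1.20)  # 1.20x resolution scale (as used in Wildlands)
qhd    = pixels(2560, 1440)              # 1440p for comparison

print(f"1080p:        {base:>9,d} px")
print(f"1080p @1.20x: {scaled:>9,d} px ({scaled / base:.2f}x the pixel work)")
print(f"1440p:        {qhd:>9,d} px ({qhd / base:.2f}x)")
# 1080p:        2,073,600 px
# 1080p @1.20x: 2,985,984 px (1.44x the pixel work)
# 1440p:        3,686,400 px (1.78x)
```

So 1.20x scaling at 1080p is already roughly 80% of the way to native 1440p in terms of pixels pushed.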

Short of buying a liquid cooling solution to potentially push the CPU further. Are there any other tweaks that might yield improved performance? what's the stance on SMT? is there any benefit to having it off? in terms of CPU affinity, having games run on select cores would that make any difference here rather than all cores?
If you are happy with performance currently and temps are fine, no need to get a water cooler. AFAIK there isn't a benefit to turning SMT off, it would likely depend on what title you are playing.

My other question is, if everything is currently running fairly okay. Is it worth reducing the voltages a bit? even if my temps are looking okay? My current settings are as follows:
If you are happy with the current performance and temps are fine, no need to undervolt IMO.
 
A higher resolution will offload more work to the GPU. I was able to maintain similar frame rates on my current rig when I upgraded from 1080p to 1440p, since my GPU still had headroom.


If you are happy with performance currently and temps are fine, no need to get a water cooler. AFAIK there isn't a benefit to turning SMT off, it would likely depend on what title you are playing.


If you are happy with the current performance and temps are fine, no need to undervolt IMO.

Ah I see, interesting. I think I'm happy to stick with how things are currently, then. In some of the triple-A titles I can reliably hold 80-100 fps (Ghost Recon Wildlands, Ghost Recon Breakpoint, GTA V, etc.), so that's fine for me.

I greatly appreciate all the help and advice!
 
Looks like temps are great. It's really not a worry which component is the limiting factor as long as you are getting acceptable performance. If you actually wanted to get more out of the GPU, you could enable supersampling instead, although I'm not sure how much of an effect it would have.

Do you know what changed other than resetting the memory?
 