
OC needs more voltage manually than with auto

I did some extensive in-game testing with my GTX 970 and my Vega 56. What I did was run the in-game benchmark on the Ultra and Very High presets five times each.
On the Ultra preset I get 15-20 fps more on average with my Vega 56; on Very High it's about 10-15 fps more.
I then also did some actual gameplay testing where I went to specific locations at specific times of day.

Even though I was able to achieve higher maximum fps with my Vega 56, I wasn't quite able to achieve as good an average/minimum fps. For example, I would get 48-55 fps with my 970 but only 44-55 fps with my Vega.
Which is really odd, because it should at least give me the same minimum fps as my 970, shouldn't it?

Generally speaking, I did not feel much of a difference between those cards in actual gameplay, other than the Vega dropping lower in fps than my 970.

I also set the power target to +50 and locked the boost clock of my vega so that it would not throttle. Unfortunately this had no impact...

I am really at a loss here, because when benching this card you can clearly see that it is performing much better than my 970, but when it actually comes to gameplay it performs about the same, if not a little bit worse.
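For anyone who wants to double-check numbers like these, here's a rough sketch (Python) of how average and minimum fps can be pulled out of per-frame frame times from a logger. The frame-time values below are placeholders, not my actual data.

# Rough sketch: aggregate benchmark runs into average and minimum fps.
# Each run is a list of per-frame frame times in milliseconds (placeholder values).

def run_stats(frame_times_ms):
    """Return (average fps, minimum fps) for one run."""
    fps_values = [1000.0 / ft for ft in frame_times_ms]
    avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)  # total frames / total seconds
    return avg_fps, min(fps_values)

runs = [
    [18.2, 19.0, 22.5, 17.8, 25.1],   # made-up data for run 1
    [18.9, 20.3, 21.0, 19.4, 23.7],   # made-up data for run 2
]

for i, run in enumerate(runs, start=1):
    avg_fps, min_fps = run_stats(run)
    print(f"run {i}: avg {avg_fps:.1f} fps, min {min_fps:.1f} fps")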
 
Same boost clock on the GPU, and yes, it has an integrated benchmark. Maybe the OS has an impact on this as well? He is still using Windows 7 while I'm using Windows 10.

On a side note: I think the in-game benchmark for this game is completely horrible because it doesn't represent the actual demand during gameplay.

While my GPU usage is pretty much at 100% during the whole benchmark, with my CPU usage only hitting 70% at times, playing the game is a completely different story.
When I'm in a village the game seems to require huge amounts of CPU power, so my GPU usage can drop to 60% at times while my CPU usage is at 100%.

Only when I'm in open areas do my GPU usage and my fps go up.
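Just to make explicit how I'm reading those utilization numbers, here's a rough rule-of-thumb sketch in Python (not a tool I actually ran, and the 90% threshold is arbitrary):

# Sketch: classify whether a scene is CPU- or GPU-limited from utilization readings.
# The 90% threshold is an arbitrary rule of thumb, not a standard.

def bottleneck(gpu_util_pct, cpu_util_pct, threshold=90):
    if gpu_util_pct >= threshold:
        return "GPU-limited"
    if cpu_util_pct >= threshold:
        return "CPU-limited (the GPU is waiting on the CPU)"
    return "neither fully loaded (engine limit, frame cap, or something else)"

print(bottleneck(gpu_util_pct=60, cpu_util_pct=100))   # the village scenario
print(bottleneck(gpu_util_pct=100, cpu_util_pct=70))   # the built-in benchmark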

Well, if your GPU utilization is dropping to 60% and your CPU utilization is climbing to 100% in a demanding area of the map, you need a CPU upgrade.


Actual gameplay is more demanding on the processor, so it's time for a CPU upgrade.
 

True. But how is it that I get lower minimum fps with my Vega? At the very least it should not be lower, since this is obviously a much more powerful GPU.
So let's say I'm at 100% CPU usage and I get 40 fps with my 970; shouldn't I get the same fps with my Vega in that scenario? Because currently I may get something like 36 fps with my Vega at 100% CPU usage.
This isn't happening all the time, but it was definitely happening often enough to notice it.
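To put the arithmetic on that gap (the 40 fps and 36 fps figures are the ones above; the interpretation is only a rough one):

# Frame-time arithmetic for the CPU-bound case above.
for fps in (40, 36):
    print(f"{fps} fps -> {1000.0 / fps:.1f} ms per frame")
# 40 fps is 25.0 ms per frame and 36 fps is about 27.8 ms, so the gap is roughly
# 2.8 ms of extra CPU-side work per frame, which is the kind of difference that
# per-frame driver or software overhead could plausibly account for when the CPU
# is already maxed out.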
 

It may be that differences in how the AMD hardware or software handles the game outside of the benchmark result in a lower minimum FPS than the GTX 970.

From what you have said, everything with your PC sounds fine. If you want to try and rule out a software problem, do a clean install of Windows 10, since the hardware is working fine with no errors.

Here are some benchmarks to compare:
https://www.anandtech.com/bench/product/2373?vs=2148
https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_56/18.html
 
Wish I had seen this sooner.

Higher levels of loadline calibration (less vdroop) will require a higher VMIN (absolute minimum on-die VCC_sense load voltage) for stability than lower levels of loadline calibration (more vdroop).

Higher levels of LLC cause huge penalties in transient response, which makes the voltage oscillate more between load states; even what seems like a continuous load, such as Prime95, is not a truly continuous load. This can cause the voltage to repeatedly drop below your VMIN point if you were aiming for a certain on-die sense load voltage (let's say 1.25 V), and it becomes worse the higher the current (amps) draw. It gets very bad with a 0 mOhm loadline (LLC8, Ultra Extreme, whatever your board calls it): the worst case can be a 200 mV voltage spike (microseconds in duration, which cannot be picked up on a multimeter, only an oscilloscope) as well as a >150 mV voltage DROP, which is an instant BSOD.
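To put rough numbers on the steady-state side of this (a simplified sketch; the set voltage and current are example values only, and it deliberately ignores transients, which are the real problem with flat loadlines):

# Simplified steady-state loadline (vdroop) model. Transients are NOT captured
# here; this only shows how much the die voltage sags under a sustained load.
# The set voltage and current are examples, not measured values.

def load_voltage(v_set, loadline_mohm, current_a):
    """On-die voltage under a steady load: V = Vset - I * R_loadline."""
    return v_set - current_a * (loadline_mohm / 1000.0)

v_set = 1.30      # example set/requested voltage in volts
current = 150.0   # example load current in amps

# 1.6 mOhm = Intel default loadline for 8 cores (per the post),
# 0.8 mOhm = an intermediate LLC level (example), 0.0 mOhm = flat loadline.
for ll in (1.6, 0.8, 0.0):
    print(f"{ll:.1f} mOhm loadline -> {load_voltage(v_set, ll, current):.3f} V at the die")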

Here's a comparison of LLC6 vs LLC8 by Elmor done on a Maximus XI Gene, using 15K FMA3 prime95.

https://rog.asus.com/forum/showthre...bout-Transient-Response-(to-Shamino-and-Raja)

Worst-case transients with a flat loadline can wind up looking like the attached overshoot.jpg (absolute worst case).

Obviously, better-engineered boards will handle this better, but no board will have perfect transient response with a 0 mOhm loadline. And again, you need an oscilloscope to test this; you can NOT use onboard sensors!
I do not know how boards like an EVGA Z390 Dark, Rampage XI Apex, or other overkill boards will handle these. These VRMs were designed to run with a loadline (1.6 mOhms is the Intel default for 8 cores).

If you let the internal VR (AC loadline) handle the automatic voltage boosting above the default VID (this only happens with adaptive/offset voltages) and keep the VRM loadline (loadline calibration) at default, you get much better transients and therefore a lower VMIN load voltage.
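As a simplified model of what that means (my own back-of-the-envelope sketch, not Intel's actual SVID math; the VID and current are example numbers):

# Simplified model: AC loadline raises the voltage request under load (adaptive
# mode), and the VRM loadline then droops the output back down. Example values only.

def die_voltage(vid, ac_ll_mohm, vrm_ll_mohm, current_a):
    v_request = vid + current_a * (ac_ll_mohm / 1000.0)    # CPU asks for more under load
    return v_request - current_a * (vrm_ll_mohm / 1000.0)  # VRM droop on the way back down

vid = 1.20        # example stock VID at the target frequency
current = 150.0   # example load current in amps

# Adaptive voltage: AC loadline boost plus the default ~1.6 mOhm VRM loadline.
auto = die_voltage(vid, ac_ll_mohm=1.6, vrm_ll_mohm=1.6, current_a=current)
# Manual override at the same voltage with a flat (0 mOhm) VRM loadline: no boost, no droop.
manual = die_voltage(vid, ac_ll_mohm=0.0, vrm_ll_mohm=0.0, current_a=current)

print(f"adaptive + default loadline: {auto:.3f} V at the die under load")
print(f"override + flat loadline:    {manual:.3f} V at the die under load")

On paper both land at the same load voltage, but the flat-loadline case comes with the transient spikes and drops described above, so in practice it needs a higher set voltage (a higher VMIN) to stay stable, which is exactly the "needs more voltage manually than with auto" situation in the thread title.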
 