
Ashes of the Singularity - benchmark

If Ashes is anything to go by, this will revitalize strategy and FPS games simply for the sheer amount of crap you can put on the screen, but I'm not seeing much use outside of it (in gaming) unless you have very heavy workloads/textures. Haven't played Farmville in years, maybe I'll go back to it :rofl:
 
Farmville maybe :rofl:
If you EVER say that word (Farmville) again on this forum..........

:mad::argue::fight::snipe:

:forecast:

LOLOLOL! I hate that 'game'. Specifically, I hate getting invites from people from when it was popular on FB (is it still?). My wife played it for a bit, but her mom, Oy... :rofl:
 
Did some multi-GPU testing with my 290s, and the conclusion is that the 8350 isn't up to the task. There is pretty much no benefit to running two cards vs. one, other than having the ability to max out AA at 8x.
I get better performance with one 290.
 
AnandTech also had a look at Ashes of the Singularity performance. Even with an Intel Core i7-4960X @ 4.2GHz benchmarking at 4K, they were still CPU limited when testing multiple GPUs.

But the real spoiler is this:
[Chart: frame-rate change from DX11 to DX12 by GPU, 1080p]

BAM! The Fury X goes from losing to the 980 Ti to absolutely dominating it.
 
AnandTech also had a look at Ashes of the Singularity performance. Even with an Intel Core i7-4960X @ 4.2GHz benchmarking at 4K, they were still CPU limited when testing multiple GPUs.

But the real spoiler is this:
[Chart: frame-rate change from DX11 to DX12 by GPU, 1080p]

BAM! The Fury X goes from losing to the 980 Ti to absolutely dominating it.


Anybody else find it interesting that the GTX 680 also spanks the 980Ti? And that chart is for 1080P. So my next card upgrade is going to stick me with AMD's driver-go-round again? LOL
 
Yay, my R9 290 OC'ed to a 290X equivalent aaaaaaaalmost matches a 980 Ti :)
Not bad for a few years old and less than half the price.
 
Anybody else find it interesting that the GTX 680 also spanks the 980Ti? And that chart is for 1080P. So my next card upgrade is going to stick me with AMD's driver-go-round again? LOL
Read the graph again. All it is showing is the difference between DX11 and DX12. It is not comparing cards directly. In other words: the 680 gained 21 fps while the 780 Ti lost 3 fps. It doesn't state the actual fps of the cards. ;)
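A tiny sketch of the distinction. The +21/-3 fps deltas are from the post above; the absolute DX11/DX12 figures here are made up purely to show how a card can "gain" the most fps on a delta chart while still being the slowest in absolute terms:

```python
# Hypothetical absolute fps numbers (NOT real benchmark results);
# only the deltas (+21 and -3) match what the chart shows.
cards = {
    # name: (dx11_fps, dx12_fps)
    "GTX 680":    (30, 51),  # +21 fps gain, yet still the slower card
    "GTX 780 Ti": (60, 57),  # -3 fps "loss", yet faster in absolute terms
}

for name, (dx11, dx12) in cards.items():
    delta = dx12 - dx11
    print(f"{name}: DX11 {dx11} fps, DX12 {dx12} fps, delta {delta:+d} fps")
```

So a delta chart and an absolute-fps chart can rank the same cards in opposite orders.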
 
Read the graph again. All it is showing is the difference between DX11 and DX12. It is not comparing cards directly. In other words: the 680 gained 21 fps while the 780 Ti lost 3 fps. It doesn't state the actual fps of the cards. ;)

+1. Since DX12 was announced, it was said that it would benefit older systems the most. There are still a lot of peeps running around with 5- to 10-year-old GPUs that will get a kick out of it :)
 
Yes and no... Fermi has better compute and asynchronous shaders than maxwell...

... please don't quote me on that.. it is regurgitation....and I've started drinking, lol.
 
Yes and no... Fermi has better compute and asynchronous shaders than maxwell...

... please don't quote me on that.. it is regurgitation....and I've started drinking, lol.

So that's why the GTX 680 didn't post much better framerates than the GTX 780 Ti, but at least saw a significant increase in its DX12 scores over its DX11 scores?

It almost seems like buying AMD would be a more future proof option than buying anything by Nvidia at this point.
 
Read the graph again. All it is showing is the difference between DX11 and DX12. It is not comparing cards directly. In other words: the 680 gained 21 fps while the 780 Ti lost 3 fps. It doesn't state the actual fps of the cards. ;)

DOH! I really shouldn't have missed that. So the Fury X is only that much better than itself, due to DX12, not necessarily the 980 Ti. OK, every comparison I've read didn't get turned on its ear.

+1. Since DX12 was announced, it was said that it would benefit older systems the most.

It would be nice to see an advancement in tech that actually makes my older, budget card even more bang per buck. Don't see that often!
 
Just to clarify the graph I posted, for those too lazy to read the AnandTech article. :p It shows the percentage improvement in frame rates when switching from DX11 to DX12. The general pattern is an improvement for AMD (with a huge improvement for Fury) and no difference for Nvidia, except for the 680. A result generally borne out by benches posted earlier in the thread. The 50% improvement the Fury X sees is enough to boost it to first place over the 980 Ti. At 1440p the lead is actually a nice 18%.

As for the 680: they speculate that it is related to the VRAM available, since the 680 has the least memory of the cards tested.
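The percentage math is simple but worth spelling out, since a big relative gain can flip an absolute ranking. The ~50% Fury X uplift is from the chart; the DX11 baseline fps values below are made up for illustration:

```python
# Percentage frame-rate change going from DX11 to DX12.
def pct_gain(dx11_fps, dx12_fps):
    return (dx12_fps - dx11_fps) / dx11_fps * 100.0

# Hypothetical DX11 baselines (NOT the article's numbers),
# chosen so the Nvidia card starts ahead.
fury_dx11, ti_dx11 = 40.0, 55.0
fury_dx12 = fury_dx11 * 1.50   # the ~50% uplift from the chart
ti_dx12 = ti_dx11              # roughly flat under DX12 for Nvidia

print(pct_gain(fury_dx11, fury_dx12))  # 50.0
print(fury_dx12 > ti_dx12)             # True: the 50% gain flips the ranking
```

With these numbers the Fury X trails 40 to 55 under DX11 but leads 60 to 55 under DX12, which is the shape of the result the chart shows.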
 
Thanks mrA.. it's great to see you back!

(That's what I get for posting while halfway sauced!)
 
From what I have been reading (Async Compute Support Is Missing From GeForce Drivers), there should not be any improvement yet for Nvidia.

Anandtech.com
NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.

Read more: http://wccftech.com/directx-12-async-compute-nvidia-amd/#ixzz41QQZbjWY

Video
 
Great read at reddit regarding NVIDIA and Async...


Conclusion / TL;DR
Maxwell is capable of Async compute (and Async Shaders), and is actually faster when it can stay within its work order limit (1+31 queues). Though, it evens out with GCN parts toward 96-128 simultaneous command lists (3-4 work order loads). Additionally, it exposes how differently Async Shaders can perform on either architecture due to how they're compiled.
 
After reading the whole column, I didn't get the idea that the issue is settled yet.

Many users submitted their results.

That may give Nvidia users hope, and may indeed be accurate, but it still seems somewhat anecdotal.
 
Interesting. If Nvidia has hardware support for it, why haven't they put it into the drivers yet?

I'm all for competition, and if Nvidia really does have a disadvantage with DX12 in the long run with their current designs, oh well. I'm sure it will be fixed in future designs, though that might be a year or two, which in the grand scheme of things with DX12 is pretty young. Props to AMD and their team for finally pulling ahead on something.
 