
Want to jump ship to amd


ozan

Member | Joined: Oct 10, 2012 | Location: Istanbul
Yo,

So I've been meaning to sell the 980 while it's still worth a lot and get an AMD card, which I prefer to deal with.

I kind of feel like I've seen enough graphics galore already and want to settle on a card that will run Witcher 3 properly without huge compromises.

I've had a 7970 before and would prefer a card well above that one's power. I'm looking for a sweet spot in this matter so tips on that would be well appreciated.

Cheers
 
I'd look for a 290X on the cheap. There isn't much difference between it and the new ones aside from more RAM.
 
The only thing that'll perform close to the 980 is a Fury X though.
 
The 290/390X compares more to the GTX 970.
My brother had no issues at all playing Witcher 3 on a 980, so I'm not sure what the issue is. I can say that I had many more issues on a 290X in most other games, and that was one of the reasons I sold it quickly.
 
Ah, my bad if that was unclear. I have no problem at all with the 980. It's silent, cool, and awesome, and there's nothing it doesn't play.

It just feels like second-hand prices for Nvidia cards fall hard when next-gen cards pop up on the horizon. It also has to do with the DX12 business.

Also, I'm kind of not interested in eye-popping graphics anymore, with no more actually interesting games on the horizon (350 games on my Steam account atm).

Thanks for the replies about the 290X. Looks like a good card, and it shouldn't be too expensive.

Although I mentioned 7970+, the R9 285 also looks yummy. And I don't think it will have a problem playing TW3.
 
What DX12 business?
The one game, in alpha stage, that utilizes AMD better than Nvidia?
 
What DX12 business?
The one game, in alpha stage, that utilizes AMD better than Nvidia?

It has to do with DX12 or Vulkan allowing cards that are not alike to work together, which would make AMD hella economical for that reason. Also, the greater async communication with the card will probably make obsolete the optimizations NV takes pride in making for narrow buses, which IMO puts AMD a step ahead. That's my logic, and I like AMD more too ^^
 
You talking SLI/CFx 'not alike working together'? That will come to both camps eventually.

Large async comms? Not sure what you are saying there but I can tell you that 384bit bus is fine for 2560x1440 on down. If you are going 4K, you would be better served by Fury X and its HBM.

I sell things all the time. To be honest, I don't see NVIDIA cards being 'cheaper' on the used market. If anything, AMD cards are super cheap.

Not sure what you are saying either about the DX12 business...


I honestly don't see the point in moving to an AMD card at this time.
 
AMD being cheap is exactly the point. That will probably allow cheaper AMD cards to become Optimus Prime with their superior buses. I'm not going on about resolutions at all; I'm more interested in the raw power of the card(s).

DX11 has limited communication with the card, making it inflexible compared to DX12. This doesn't have to do with the current gen but with the foreseeable future.
 
I'm sorry... I still don't understand the point, or whether what you are saying even matters for what you are using it for. You are on about things that aren't factually correct or are just a non-factor.
 
I'm sorry... I still don't understand the point, or whether what you are saying even matters for what you are using it for. You are on about things that aren't factually correct or are just a non-factor.

NV puts out inferior hardware (based on raw power) for the price, which "just works" thanks to superior drivers and lots of manipulation of the game market that lets them cripple the competition repeatedly, which strengthens the "just works" image. Tessellation and PhysX are the examples from recent history.

Now I will directly quote the DirectX wiki. I'm sorry, I can't go through scientific papers for a simple forum post.

Pipeline state objects have evolved from Direct3D 11, and the new concise pipeline states mean that the process has been simplified. DirectX 11 offered flexibility in how its states could be altered, to the detriment of performance. Simplifying the process and unifying the pipelines (e.g. pixel shader states) leads to a more streamlined process, significantly reducing the overheads and allowing the graphics card to draw more calls for each frame.

Direct3D 12 also learned from AMD Mantle in command lists and bundles, aiming to ensure the CPU and GPU working together in a more balanced manner.
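The overhead reduction that quote describes can be caricatured in a few lines. This is a toy model, not real Direct3D: the field list, `validate_state`, and `Pso` are all invented for illustration, and the "checks" just stand in for driver validation work.

```python
# Toy model of the pipeline-state-object idea (NOT real Direct3D; all names
# are made up). D3D11-style drivers re-validate the combined pipeline state
# on every draw; D3D12 validates it once when the PSO is created, so the
# per-draw validation cost stops scaling with the number of draw calls.

PIPELINE_FIELDS = ["vertex_shader", "pixel_shader", "blend", "raster", "depth"]

def validate_state(state):
    """Stand-in for driver validation work: one 'check' per pipeline field."""
    return sum(1 for field in PIPELINE_FIELDS if field in state)

def draw_d3d11_style(state, num_draws):
    # Validation cost is paid on every single draw call.
    return sum(validate_state(state) for _ in range(num_draws))

class Pso:
    # Validation cost is paid once, at creation time.
    def __init__(self, state):
        self.checks = validate_state(state)

def draw_d3d12_style(pso, num_draws):
    # Draws just reference the immutable, pre-validated object.
    return pso.checks

state = {f: None for f in PIPELINE_FIELDS}
print(draw_d3d11_style(state, 1000))       # 5000 checks, grows with draw count
print(draw_d3d12_style(Pso(state), 1000))  # 5 checks, constant
```

The point is only the scaling: in the first style the fixed validation cost multiplies with draw calls per frame; in the second it is amortized once.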

Now we can assume that AMD cards with larger bandwidth can go nuts on DX11 and communicate to their hearts' content. Wrong. With the constricted and limited pipeline, the larger bandwidth becomes uncompetitive, and AMD, which relies on fewer driver tweaks and optimizations, has to build larger pipelines for its data to go through in time.

As DX12 and Vulkan emerge, the pipeline is expected to be largely liberated, greatly reducing the need for pipeline optimizations.

A better chip is better, that's for sure, but AMD's and NV's chip production doesn't vary greatly in terms of process node and manufacturer. It's rather what they put in the chips and what they optimize them for.

I think NV is still all in on the DX11 train and has research to do before the jump. AMD looks like a surer horse to me in this matter, and despite their poor financial state, I don't expect them to go under anytime soon.

Cheers
 
NV puts out inferior hardware (based on raw power) for the price, which "just works" thanks to superior drivers and lots of manipulation of the game market that lets them cripple the competition repeatedly, which strengthens the "just works" image. Tessellation and PhysX are the examples from recent history.

So faster GPUs that use less power with better driver support are inferior? Cool story.

Now I will directly quote the DirectX wiki. I'm sorry, I can't go through scientific papers for a simple forum post.

Okay, cool, just because AMD can do something doesn't mean that nVidia can't.

Now we can assume that AMD cards with larger bandwidth can go nuts on DX11 and communicate to their hearts' content. Wrong. With the constricted and limited pipeline, the larger bandwidth becomes uncompetitive, and AMD, which relies on fewer driver tweaks and optimizations, has to build larger pipelines for its data to go through in time.

As DX12 and Vulkan emerge, the pipeline is expected to be largely liberated, greatly reducing the need for pipeline optimizations.

The pipeline optimizations, also known as data compression, allow nVidia to pass more data across a smaller bus than AMD can.
Neither of these companies is limited by current bus capabilities though, so I don't see how your point is valid in any way, shape, or form.
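For what it's worth, the compression idea can be sketched in a few lines. The encoding below is a toy invented for illustration (real framebuffer compression, like Maxwell's, is far more sophisticated); it just shows how compressible pixel data stretches a narrow bus.

```python
# Toy sketch of lossless delta compression, the family of tricks that lets
# more pixel data fit through a narrower memory bus. The byte costs here
# (4-byte base pixel, 1 byte per small delta, 4 per large) are invented.

def compressed_bytes(row):
    """Size of a row encoded as base pixel + per-pixel deltas."""
    size = 4  # base pixel
    for prev, cur in zip(row, row[1:]):
        delta = cur - prev
        size += 1 if -128 <= delta <= 127 else 4
    return size

# A smooth gradient, typical of much real framebuffer content:
gradient = list(range(100, 356))     # 256 pixel values, step 1
raw = 4 * len(gradient)              # 1024 bytes uncompressed
packed = compressed_bytes(gradient)  # 4 + 255 = 259 bytes
print(f"effective bandwidth gain: {raw / packed:.1f}x")  # ~4.0x on ideal data
```

On noisy data the deltas are large and the gain vanishes, which is why such schemes are lossless and opportunistic: they help on typical content without ever being relied on.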


A better chip is better, that's for sure, but AMD's and NV's chip production doesn't vary greatly in terms of process node and manufacturer. It's rather what they put in the chips and what they optimize them for.

You're right, better silicon design is better. And nVidia has the better silicon.
See above "more performance, less power" comment.


I think NV is still all in on the DX11 train and has research to do before the jump. AMD looks like a surer horse to me in this matter, and despite their poor financial state, I don't expect them to go under anytime soon.

If they only supported DX11 then how do you explain them having DX12 results?
And, as I said before, how does a single game in pre-alpha correlate to any sort of final results?


Cheers

Replies in red :)
 
IIRC, NVIDIA supports DX12_1 while AMD's current Fury cards only support DX12 (not 12_1).

As was mentioned above, memory bandwidth doesn't matter until you run very high resolutions. For example, look at how the Fury X performs in THIS REVIEW. Notice how at 4K the performance gap is 2%, while at 2560x1440 it's 9%? The HBM really comes into play at higher resolutions. Memory bandwidth isn't an issue.
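The resolution scaling is easy to put rough numbers on. A back-of-envelope sketch (counting only one 32-bit color write per pixel, which understates real traffic from overdraw, depth, and textures, but shows how the pressure scales):

```python
# Back-of-envelope framebuffer traffic per frame at common resolutions.
# Real workloads multiply these totals many times over; the point is only
# that traffic scales with pixel count, so bandwidth bites at high res.

def mib_per_frame(width, height, bytes_per_pixel=4):
    """MiB written for one 32-bit color pass over every pixel."""
    return width * height * bytes_per_pixel / 2**20

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name}: {mib_per_frame(w, h):.1f} MiB/frame")
# 4K pushes exactly 4x the pixels of 1080p, so per-frame traffic grows the
# same way, which is why HBM only starts to pay off at the top resolutions.
```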

Just make sure your choices are based on facts and an understanding of what you are talking about. I am afraid that in some cases, your decision is based off emotion and misinformation, leading you in the wrong direction.
 
ATMINSIDE,

Does anyone really care about power consumption in the high end? Sure, it's for the future and nature and all that stuff, but we are talking about a niche market. There is no point in the power efficiency of the card; as long as the cooler is strong enough, it doesn't matter.

Of course, I must have said Nvidia is completely ignoring DX12... NOT. I think they aren't prioritizing it as highly as they should. And I never said there were any benchmarks relating to this subject; it was rather abstract, which you have confirmed. There is no real-world proof or product yet, and all we are doing is expecting something we perceive will happen.

And ED, as I said in a previous post, I am NOT talking about the current generation. I just played Rise of the Tomb Raider; that game won't run Very High textures on 4 GB cards. The memory required by games is increasing at an ever-rising rate. Bigger memory will need faster memory and a faster bus.

NV may say they are doing DX12_1. There are a ton of things they've said. Like a certain 3.5 GB card that was sold as 4 GB this gen, if my memory serves me right. In the end, it's Nvidia that screws over its customers who don't upgrade to the current gen with new drivers.
 
From your own article:
Nvidia: A
Nvidia’s Maxwell GPU is power efficient and fantastically overclockable, with a hard-to-beat option in each pricing tier.

AMD: B
The new Fury X is strong, but the rest of AMD’s aging, rebranded cards are outmatched against the more efficient Nvidia GPUs, though priced well. AMD’s Fury cards coming in the fall may help level the field.

Have fun with the AMD cards, guy. It's a solid choice ('better' is up in the air... cheaper, indeed), don't misunderstand me, but your supporting arguments are nonsensical.


Good luck. :)
 
Oh well, I wasn't talking about current gen, again :D ... yet, anyway. Thanks for the chat, it was fun ^^
 
ATMINSIDE,

Does anyone really care about power consumption in the high end? Sure, it's for the future and nature and all that stuff, but we are talking about a niche market. There is no point in the power efficiency of the card; as long as the cooler is strong enough, it doesn't matter.

Yes, people actually do. It makes building in small form factors while keeping high performance a reality.

Of course, I must have said Nvidia is completely ignoring DX12... NOT. I think they aren't prioritizing it as highly as they should. And I never said there were any benchmarks relating to this subject; it was rather abstract, which you have confirmed. There is no real-world proof or product yet, and all we are doing is expecting something we perceive will happen.

And ED, as I said in a previous post, I am NOT talking about the current generation. I just played Rise of the Tomb Raider; that game won't run Very High textures on 4 GB cards. The memory required by games is increasing at an ever-rising rate. Bigger memory will need faster memory and a faster bus.

I'm running Very High at 5760x1080 on a 4 GB GPU, and I have no issues with textures popping in/out.
There's a reason we suggest a 980 Ti at minimum for 4K and a 980 at minimum for 1440p.


NV may say they are doing DX12_1. There are a ton of things they've said. Like a certain 3.5 GB card that was sold as 4 GB this gen, if my memory serves me right. In the end, it's Nvidia that screws over its customers who don't upgrade to the current gen with new drivers.

You mean the one that still has 4GB of VRAM and, even though the last 0.5GB is slower, is still much faster than falling back to system RAM?
Also, there are absolute slews of people out there who own 970s and are incredibly happy with them. There's nothing in the price range from AMD that competes with it.
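Rough numbers make the point. These are approximate spec-sheet/press figures, not measurements, so treat the exact values as assumptions:

```python
# Approximate public bandwidth figures for the GTX 970's split memory
# versus going out to system RAM over PCIe. The takeaway: even the "slow"
# 0.5 GB segment is well ahead of spilling to system memory.

fast_segment_gbs = 196.0  # ~3.5 GB partition (7 of 8 memory controllers)
slow_segment_gbs = 28.0   # ~0.5 GB partition (the remaining controller)
pcie3_x16_gbs = 15.8      # PCIe 3.0 x16, one direction, to system RAM

print(f"slow segment vs PCIe: {slow_segment_gbs / pcie3_x16_gbs:.1f}x faster")
```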

Also, +1 to what EarthDog said.

And why would we have a discussion about anything besides current gen?
You're bringing up DX12, but then saying you're not talking about current gen, make up your mind.
 