
Question about mantle vs. DirectX 12


trents

Senior Member
Joined
Dec 27, 2008
Help me understand this debate about Mantle vs. DirectX 12. As I understand it from the reading I have done, AMD's Mantle is a low-level API that runs on current Windows versions as an alternative to DirectX 11 and gives a significant performance boost in apps that support it. It is particularly beneficial in situations where the CPU is bottlenecking 3D gaming performance.

I also understand that AMD has offered this technology for anyone to use, but Nvidia is not jumping on the bandwagon because . . . Nvidia claims that most of what Mantle does will be included in Windows 10 with DirectX 12 anyway. Is this true? Or is Nvidia just blowing smoke? Does Mantle really deliver, and if so, why would people still be buying Nvidia cards?

Do I have this straight?
 
Upon further research, it looks like AMD itself has abandoned Mantle as a mainstream technology and relegated it to customers with "custom needs".
 
What I'm hoping for is tech that would use the combined VRAM of all cards in X-fire or SLI, not just the equivalent of one card's VRAM.
 
DX12 will allow for better use of the CPU cores to help things out (40,000-foot view), so NVIDIA is more or less right. There are many other details, though.

As far as memory combining in SLI/CFX goes, that will supposedly come with HBM2 memory on the next set of cards, but I could be mistaken. It's all rumor for now.
 
I hope this is on topic, but I was hoping someone could explain APIs at a high-level in terms of gaming.

I know a lot of games use DirectX. But whether someone has an Nvidia or AMD card, they can play it. Is the game designed around one API or another (DirectX, Mantle, etc.) and then both GPU companies have to make sure their cards can use it? Or is the game designed to use any of the major APIs, and whichever one your GPU has is the one that's used? That's kinda where my (lack of) understanding concerning APIs sits.

Also, are OpenGL and the other ones that are check-marked on something like GPU-Z completely separate APIs or subsets of the main ones?
 
The better the API, the more performance you can squeeze out of your hardware. Game devs work with it to make a game that runs crisply and quickly. Most games have used DirectX, and it seems they will continue to. From what I remember reading about the Mantle article, it was attempting to simplify access to low-level programming and design. AMD and Nvidia GPUs are two different designs, so each needs a different optimization solution.
 
What excites me about DX12 is the possibility of using AMD and Nvidia products at the same time. Hopefully we will also be able to use the combined VRAM, i.e. 4GB + 4GB = 8GB instead of being restricted to 4GB.
 
Ugh, if AMD/AMD and NV/NV don't work well together, I sure as heck don't want to see what a mish mash of the two will do!!!! Forget it! :)

I am excited about pooling the VRAM in SLI/CFX though. That gives those 1/2GB 1080p-and-up people more options to upgrade, and gives the 4GB people at 3x 2560x1440/4K an upgrade path that doesn't cost an arm and a leg as well.

...... but I'm not a fan of multiple GPUs where a single GPU can do the job anyway.
 
True... but those cards are doing things completely independent of each other, not working together to form one image on a screen, and their hardware gets it there in dramatically different ways. ;)
 