
DX12: Multiadapter functionality


magellan

Member
Joined
Jul 20, 2002
http://www.pcper.com/reviews/Graphics-Cards/BUILD-2015-Final-DirectX-12-Reveal

There are two modes of multiadapter functionality: implicit and explicit (and explicit itself comes in linked and unlinked forms).

So would this feature of DX12 mean SLI/Crossfire would become obsolete, at least for DX12 games?

"The Unlinked Explicit Multiadapter, is interesting because it is agnostic to vendor, performance, and capabilities -- beyond supporting DirectX 12 at all. This is where you will get benefits even when installing an AMD GPU alongside one from NVIDIA."
 
From what I read, the combining of iGPU and discrete GPU is the big point for DX12. Also:
You could already do that at some point with AMD iGPU and AMD discrete GPU... I believe it was part of the Lucid Virtu technology?

The big point in DX12 is the combining of vRAM pools into one, as well as being able to mix and match GPU brands. While it has proven to work (benchmarks are around) I'm not sold and would prefer not to mix oil and water (NV and AMD). Call me a skeptic. ;)
 
I would not want to run a single vRAM pool that has to cross over the buses of the video cards; that sounds bad.
 
Now that, to me, seems like the EASY part (sharing ram with like cards). The hard part is getting apples (AMD) to communicate with oranges (NVIDIA) for multi-GPU.
 
I have not even seen a demo of AMD + Nvidia Alternate Frame Rendering. If you think about it, the reason they don't pool memory is that both cards are working equally; if each card doesn't have the exact same data, you get a delay while it transfers to the other video card, a stall, like when the video card pulls from system memory. Each card has its own memory controller; if you had one controller for two cards I could see it working well.

I don't see software saving anything for Alternate frame rendering.
 
I thought the plan was to use SFR? I don't recall from my reading of the Anandtech article...
 
Split Frame Rendering takes accurate timing; the video cards have to be timed exactly or you get tearing and stutter. Nvidia has not used Split Frame Rendering since DX9.

With DX12 Implicit Multi-Adapter, the hardware vendors have to work together to combine video cards or the iGPU.

DX12 Explicit Multi-Adapter is not going to happen for a long time, if ever; the hardware is not capable. Maybe in the future they will figure out how to make the hardware work.

If this sounds very vague that’s because it is, and that in turn is because the explicit API outstrips what today’s hardware is capable of. Compared to on-board memory, any operations taking place over PCI Express are relatively slow and high latency. Some GPUs in turn handle this better than others, but at the end of the day the PCIe bus is still a bottleneck at a fraction of the speed of local memory. That means while GPUs can work together, they must do so intelligently, as we’re not yet at the point where GPUs can quickly transfer large amounts of data from each other. http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview/2
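Some rough numbers show why. A back-of-the-envelope sketch (the bandwidth figures below are ballpark assumptions, not measurements): moving one uncompressed 4K RGBA8 frame over PCIe 3.0 x16 versus reading it from local GDDR5.

```cpp
#include <cstdio>

// Ballpark figures (assumptions): PCIe 3.0 x16 gives roughly 16 GB/s usable,
// while local GDDR5 on a mid-range 2015 card is in the 200-300 GB/s range.
int main()
{
    const double frame_bytes = 3840.0 * 2160.0 * 4.0;  // one 4K RGBA8 frame, ~33 MB
    const double pcie_bps    = 16.0e9;                  // PCIe 3.0 x16, approx.
    const double vram_bps    = 250.0e9;                 // local GDDR5, approx.

    std::printf("over PCIe : %.2f ms\n", frame_bytes / pcie_bps * 1e3); // ~2.1 ms
    std::printf("local VRAM: %.2f ms\n", frame_bytes / vram_bps * 1e3); // ~0.13 ms
    return 0;
}
```

At 60 fps the whole frame budget is about 16.7 ms, so even a couple of full-frame hops across PCIe per frame eat a noticeable slice of it, and that is before latency is counted.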

With the hardware now they can do post-processing; it is not SFR or AFR, and the gain is very minimal.

Post-processing demo:

EpicDX12.jpg
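For what it's worth, the plumbing a demo like that sits on top of is exposed directly in the API: a heap flagged for cross-adapter sharing is created on one device and opened on the other. A heavily simplified sketch, assuming two already-created devices (deviceA for the discrete card, deviceB for the iGPU; the function name is illustrative and error handling is omitted):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a heap on device A that is flagged for cross-adapter sharing, then
// open the same heap on device B through an NT handle.
void ShareHeapAcrossAdapters(ID3D12Device* deviceA,           // e.g. discrete GPU
                             ID3D12Device* deviceB,           // e.g. iGPU
                             UINT64 sizeInBytes,
                             ComPtr<ID3D12Heap>& heapOnA,
                             ComPtr<ID3D12Heap>& heapOnB)
{
    D3D12_HEAP_DESC desc = {};
    desc.SizeInBytes     = sizeInBytes;
    desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    desc.Alignment       = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
    desc.Flags           = D3D12_HEAP_FLAG_SHARED |
                           D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    deviceA->CreateHeap(&desc, IID_PPV_ARGS(&heapOnA));

    // Export from device A as an NT handle, import on device B.
    HANDLE handle = nullptr;
    deviceA->CreateSharedHandle(heapOnA.Get(), nullptr, GENERIC_ALL, nullptr, &handle);
    deviceB->OpenSharedHandle(handle, IID_PPV_ARGS(&heapOnB));
    CloseHandle(handle);

    // Placed resources created in this heap (with
    // D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER) are visible to both GPUs, but
    // every access still crosses PCIe, so it only pays off for small hand-offs
    // like passing a finished frame to the iGPU for post-processing.
}
```

Shared is not the same as fast; the heap just gives both devices a view of the same memory, and all the scheduling and copying is still up to the engine.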
 
Lol, that's the article I was referring to when I was talking SFR.

Well aware of memory over the PCIe bus being slower and having to do it more intelligently. I still formed my opinion that I am not worried about pooling of memory. ;)
 
EarthDog, you should read what I posted above; the hardware out there is not even capable of multiadapter, let alone memory pooling with a performance improvement, unless they use post-processing only. Where do you pool the memory anyway? From what I have been reading, it will add latency and only slow performance. I read the Anandtech article about DirectX 12 multi-GPU many times; he even got confused about the term SFR (Split Frame Rendering). I had to look it up to make sure it was the same as the new DX12 Explicit Multi-Adapter he was talking about.

Split Frame Rendering diagram. What a mess to implement, because of timing: things change from frame to frame in games.

SFR.jpg
 
You could already do that at some point with AMD iGPU and AMD discrete GPU... I believe it was part of the Lucid Virtu technology?

I vaguely remember some tests showing that trying to crossfire an AMD iGPU and AMD discrete GPU only worked when the discrete GPU was nearly as bad as the iGPU. I also remember a half-way decent discrete GPU was the much better performance option.
 
Wingman has this correct; the PCIe bus is very, very bad for this kind of technology. In very simple terms, a lot of devices are talking over the PCIe bus, and the buffers on these devices are nearly full (especially with the heavy transactions made by GPUs). If you try to add this layer on top, then you are pushing the PCIe bus to its limits. Not to say this kind of tech couldn't come to light; it will just take a couple of years before we see it working without fault.

I guarantee PCIe won't go away, not with what Intel is doing with it.
 
The key, Dolk, is doing it "intelligently" (from the article). It's not the best, but it can be done. ;)
 
Split Frame Rendering takes accurate timing; the video cards have to be timed exactly or you get tearing and stutter. Nvidia has not used Split Frame Rendering since DX9.

With DX12 Implicit Multi-Adapter, the hardware vendors have to work together to combine video cards or the iGPU.

DX12 Explicit Multi-Adapter is not going to happen for a long time, if ever; the hardware is not capable. Maybe in the future they will figure out how to make the hardware work.

With the hardware now they can do post-processing; it is not SFR or AFR, and the gain is very minimal.

Post-processing demo:

View attachment 176580
This is with the integrated graphics chip in the CPU and the GPU? It might not be a huge gain but if the computer has the integrated graphics chip already it's a nice bonus I guess. :D
 
Wingman has this correct; the PCIe bus is very, very bad for this kind of technology. In very simple terms, a lot of devices are talking over the PCIe bus, and the buffers on these devices are nearly full (especially with the heavy transactions made by GPUs). If you try to add this layer on top, then you are pushing the PCIe bus to its limits. Not to say this kind of tech couldn't come to light; it will just take a couple of years before we see it working without fault.

I guarantee PCIe won't go away, not with what Intel is doing with it.

This makes sense. The strange thing is that Intel makes add-in PCIe boards (w/multiple CPUs installed on them) for HPC apps. Then there's the whole issue of HPC apps that run on GPUs. Then again, I'd imagine the latency of GPUs communicating directly over a PCIe bus has to be a lot less than HPC nodes communicating over even 10 GbE.

Doesn't Nvidia still use special SLI connectors for inter-GPU communication? Does this special SLI bus have greater bandwidth or less latency than PCIe?
 
@ed, That's exactly what Intel is doing. PCIe is like I2C gone Super Saiyan. It's faster, with a bounded limit of TX and RX lines, along with channels, multi-master support, and packet reorganization; it's one of the most universal buses the modern computer has. However, there is a performance issue, as this all relies on queuing in a buffer. The bigger the buffer, the more commands, and the faster it is, the better (burst 0101). Now, you could work around that and create a smarter controller, but that performance increase can only last for so long before you are forced to look at a new area. New technology can keep a system alive for many years when it should have been dead a long time ago (silicon-based CPUs), but every once in a while something new comes along and stops all production. I hope that happens with GPU bus links, but by the time that happens AMD will have figured out how to put an entire GPU onto a CPU.
 
@ED, That's exactly what Intel is doing. PCIe is like I2C gone Super Saiyan. It's faster, with a bounded limit of TX and RX lines, along with channels, multi-master support, and packet reorganization; it's one of the most universal buses the modern computer has. However, there is a performance issue, as this all relies on queuing in a buffer. The bigger the buffer, the more commands, and the faster it is, the better (burst 0101). Now, you could work around that and create a smarter controller, but that performance increase can only last for so long before you are forced to look at a new area. New technology can keep a system alive for many years when it should have been dead a long time ago (silicon-based CPUs), but every once in a while something new comes along and stops all production. I hope that happens with GPU bus links, but by the time that happens AMD will have figured out how to put an entire GPU onto a CPU.

I had thought GPUs on CPUs have been done for a while? Unless you mean high-performance GPUs.
 