
Question about SMP


Culbrelai

Member
Joined
Oct 25, 2012
So I was thinking about this...

Dual-chip VGA cards (590, 690, 7990, etc.) split their VRAM; that is, a 4GB 690 is really only a 2GB card, because both GPUs need their own VRAM.

So why doesn't my dual Xeon work this way? There is 12GB in each processor's memory bank, totaling 24GB, but it doesn't behave the same way. Why are CPUs able to access their sister CPU's RAM, but GPUs are not?
 
Well, part of that is how SLI/CFx renders... AFR. One takes one frame, the other, another frame. You seem to be trying to make a correlation between two vastly unlike things as they sit. With changes to architecture and rendering methods, perhaps that thinking could be more valid.
 

Why are they vastly different? CPUs and GPUs, from my understanding, are very similar, but CPUs are better at one type of task and GPUs are better at another (I forget which is which; floating point is GPUs, I believe...)

Is only one CPU active until you do something heavily multithreaded? What is the link between them called? What is the point of separating the memory banks if the processors can use each other's RAM?

It's just interesting to me, I'm spitballing.
 
The architecture and how each works with its memory are vastly different. To drop a terrible analogy in, it's like a standard internal combustion engine versus an electric one. They both, in a car, put power to the ground, but in vastly different ways. They are both still motors, though (/end fail analogy, LOL!).

It's really above my head, so no clue really. But there is a reason that your SR-2 has two DIMM banks, one for each CPU. I would take a guess and say that there may be a slight latency increase if CPU A accesses CPU B's memory? Again, no clue.
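That latency guess is the idea behind NUMA (Non-Uniform Memory Access): each CPU reaches its own DIMM bank directly but crosses the inter-CPU link for the other bank. A toy Python model of that idea, with invented latency numbers purely for illustration (not measurements from any real board):

```python
# Toy model of NUMA on a dual-socket board.
# The nanosecond figures below are made up for illustration only.

LOCAL_NS = 70    # hypothetical latency to a CPU's own DIMM bank (ns)
REMOTE_NS = 130  # hypothetical latency across the inter-CPU link (ns)

def access_latency(cpu_node, memory_node):
    """Modeled latency for cpu_node reading memory that lives on memory_node."""
    return LOCAL_NS if cpu_node == memory_node else REMOTE_NS

print(access_latency(0, 0))  # local access: 70
print(access_latency(0, 1))  # remote access over the link: 130
```

The point of the sketch: both accesses work, so the OS can hand either CPU any address, but the remote path is slower, which is why each socket gets its own nearby bank.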
 
CPUs are on the same board.

As for GPUs, they can only use the resources that are on the graphics card they sit on. It's like adding another computer into your desktop that does graphics processing (just so you know, they even have their own BIOS).

If there were two GPUs on the same graphics card, like a Radeon 7990 or a GeForce 690, then yes, the two GPUs would probably share the same board with each other.

Let me put it to you this way: a daughter board (such as a graphics card) that has its own BIOS will be unable to use resources from the motherboard or from other daughter boards in the computer it sits in.

So to sum it up for you: SMP allows you to insert another processor into the board to lower CPU load and handle more programs (more RAM also helps), while SLI and CrossFire allow you to get more frames, since the first card renders the odd-numbered frames and the second card renders the even-numbered frames, as shown below:

GPU 1 > 1 3 5 7 9 11
GPU 2 > 2 4 6 8 10 12
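That odd/even split is Alternate Frame Rendering (AFR). A minimal Python sketch of the round-robin assignment (the function name is mine, just for illustration):

```python
# Sketch of AFR: whole frames are dealt out round-robin across the GPUs.

def afr_schedule(num_frames, num_gpus=2):
    """Assign frame numbers (1-based) to GPUs in round-robin order."""
    schedule = {gpu: [] for gpu in range(num_gpus)}
    for frame in range(1, num_frames + 1):
        schedule[(frame - 1) % num_gpus].append(frame)
    return schedule

s = afr_schedule(10)
print(s[0])  # GPU 1 gets the odd frames:  [1, 3, 5, 7, 9]
print(s[1])  # GPU 2 gets the even frames: [2, 4, 6, 8, 10]
```

Note that each frame goes to exactly one GPU in full, which is why each GPU needs its own complete copy of the scene data in its own VRAM.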

Basically, it's like the two GPUs are making a flip book in front of your face, while the CPUs are working together trying to solve your math problem.

EDIT: As for the CPUs (or their cores, to be specific), they just stay idle until they are needed, depending on how many cores the program asks for. At least that's what I learned.
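The "working together on your math problem" part can be sketched too: unlike AFR's indivisible frames, CPUs can carve one problem into arbitrary chunks. A small Python illustration (the split point and worker count are arbitrary choices, not anything from the thread):

```python
# Sketch: two workers (think: two sockets) each take half of one problem.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    """Sum the integers in [lo, hi) -- one worker's share of the problem."""
    return sum(range(lo, hi))

# Split summing 0..999,999 between two workers at an arbitrary midpoint.
with ThreadPoolExecutor(max_workers=2) as pool:
    a = pool.submit(partial_sum, 0, 500_000)
    b = pool.submit(partial_sum, 500_000, 1_000_000)
    total = a.result() + b.result()

print(total == sum(range(1_000_000)))  # True: the split is invisible in the answer
```

The answer is identical no matter where the work is split, which is the flexibility a frame-locked AFR pair doesn't have.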
 
The architecture and how each works with its memory are vastly different. To drop a terrible analogy in, it's like a standard internal combustion engine versus an electric one. They both, in a car, put power to the ground, but in vastly different ways. They are both still motors, though (/end fail analogy, LOL!).

It's really above my head, so no clue really. But there is a reason that your SR-2 has two DIMM banks, one for each CPU. I would take a guess and say that there may be a slight latency increase if CPU A accesses CPU B's memory? Again, no clue.

Well, it does have crappy memory latency...

[attached screenshot: memory latency benchmark]

But that may just be because of its age (and I can't notice any slowdown compared to my Sandy Bridge laptop with 8GB of RAM...)

Very interesting stuff, I hope someone who knows more than either of us chimes in soon =P

EDIT:

Basically, it's like the two GPUs are making a flip book in front of your face, while the CPUs are working together trying to solve your math problem.

I guess that makes sense: each GPU needs a whole frame to do its work, and these individual frames cannot be split, but the CPUs can split the math problems any way they like between them? Thus a more effective union?

I still would like to know why there are two RAM banks, and why/how the CPUs can use each other's RAM while dual GPUs cannot. (Relating to my first question: if they can use each other's RAM, then why have two memory banks, one near each processor?)
 