The clocks on each vid card are stock speed, 400/800. The CPU doesn't come into play much. As you can see, it actually gave less than a 400-point difference between 1.8GHz and 3.2GHz in the SLI benchmarks.
In real 3D applications the more powerful CPU makes a huge difference; however, the newer 3DMark benchmarks are designed to take the CPU out of it as much as possible. Since it would likely make even less of a difference in single-card mode, that tells me the gap has to come from your card having 100MHz more on the core and memory. Keep in mind I'm hardly even pushing this setup. The CPU doesn't even go over 38°C under load.
I am kind of idly wondering in the back of my mind whether 10k could be broken with some serious tweaking. I mean, say I tightened up the memory timings and added some memory voltage, overclocked the video cards, and gave the CPU some more voltage and speed.