Analyze the relationship between shaders, raw clock frequency and the CPU bottleneck.
Folks,
I happened to discuss 3DM scores with somebody over at DFI street and I came across this peculiar behaviour.
3DM05
No ambiguity here. I score a bit more in all the GPU tests, and neither of us ran the CPU tests, so there is no mystery: that is why my score is higher.
It could be higher despite my card being 8 pipes short because a) my clocks are way higher than his, and b) my CPU does not bottleneck the SLI system as much as his does.
3DM06
He beat me in 3 of the four tests, yet my score is higher than his because of the CPU. I have an Opteron 165 @ 2700MHz with 2MB of on-die cache. Since 3DM06 can (?) take advantage of multi-threaded, multi-core CPUs, I score double what he does. Since there is no way to disable the CPU tests (the two remaining GPU tests follow them), I cannot say what would happen to the scores without them. In this case, I'd say those 8 extra pipes of his are proving themselves.
Now, don't you see the discrepancy between 05 and 06? An interesting question is how the pipe/MHz ratio has changed between 3DM05 and 06. In other words: "How many MHz is equivalent to unlocking one pipe?" Clearly, since our calculations are not based on a fixed standard but on a dynamic quantity, i.e. 3DM scores, which depend heavily on the particular test algorithm used, there is no clear-cut answer.
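Just to make the "MHz per pipe" question concrete, here is a back-of-the-envelope sketch. It assumes score scales linearly with theoretical pixel fill rate (pipes × core clock), which is exactly the assumption 3DMark's weighted tests break — and the pipe counts and clocks below are made-up example numbers, not our actual cards:

```python
def fill_rate(pipes, clock_mhz):
    """Theoretical pixel fill rate in Mpixels/s (pipes x core clock)."""
    return pipes * clock_mhz

def mhz_per_pipe(pipes, clock_mhz):
    """Extra core MHz a (pipes)-pipe card needs to match (pipes + 1)
    pipes at the same clock, under the linear fill-rate assumption."""
    target = fill_rate(pipes + 1, clock_mhz)
    return target / pipes - clock_mhz

# Hypothetical example: a 16-pipe card at 450 MHz vs a 24-pipe card at 430 MHz
mine = fill_rate(16, 450)   # 7200 Mpixels/s
his = fill_rate(24, 430)    # 10320 Mpixels/s
print(mine, his)

# Under this naive model, one extra pipe on the 16-pipe card is worth:
print(mhz_per_pipe(16, 450))  # 28.125 MHz
```

The point of the sketch is the caveat: this only answers the question if the benchmark is pure-fill-rate limited, which neither 05 nor 06 is.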
Now, is it right to assume that 3DM05 was more CPU dependent than 3DM06, or that 3DM05 had an implicit CPU dependence whereas 06 does not? How do you go about separating GPU frequency effects from CPU frequency effects and the number of active pixel pipelines? Is there any experiment that can isolate these three?
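One experiment that would isolate the three effects is a one-factor-at-a-time sweep: hold two variables at a baseline, sweep the third, and compare the score deltas per factor. A sketch of the run plan (the clocks and pipe counts are placeholder settings, not anyone's measured configs):

```python
# One-factor-at-a-time run plan: sweep each factor alone while the
# other two stay pinned at the baseline. Each yielded config is one
# benchmark run; comparing deltas within a sweep isolates that factor.

baseline = {"cpu_mhz": 2400, "gpu_mhz": 400, "pipes": 16}

sweeps = {
    "cpu_mhz": [1800, 2100, 2400, 2700],
    "gpu_mhz": [350, 400, 450, 500],
    "pipes":   [12, 16, 20, 24],
}

def runs():
    """Yield (swept_factor, config) pairs for every planned run."""
    for factor, values in sweeps.items():
        for v in values:
            cfg = dict(baseline)
            cfg[factor] = v
            yield factor, cfg

for factor, cfg in runs():
    print(factor, cfg)
```

In practice only the pipe sweep is awkward (you'd need a BIOS mod to mask pipes), but CPU and GPU clock sweeps are just multiplier and overclock changes, so at least two of the three effects are cheap to separate this way.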
I still can't understand how he scored more than me in the 06 graphics tests.
Of course, the simplest explanation is that I had a bad run, but let's not settle for the obvious.
Maybe 3DM06 favours rendering ability over raw clock speed? I'm not sure what to say.