
The Effects of Benchmarking Bias

Okay, I'll try one last time to explain this. First, you keep confusing differences in micro-architecture with the instruction sets themselves. As I've stated before, there isn't an "SSE2-Intel" or an "SSE2-AMD"; there is just "SSE2". The instruction sets are standardized, and for good reason. Think about how bloated software would be if, every time there was a micro-architecture change, developers had to add code for that specific CPU model. It could never possibly work. As Hitman tried explaining, an instruction set is exactly that: a set of instructions. How quickly or efficiently the instructions are executed depends entirely on the micro-architecture, NOT on the instruction sets themselves.

http://en.wikipedia.org/wiki/Instruction_set



I.e., the micro-architecture is designed to run the instruction sets; the instruction sets are not changed to run on the micro-architecture.



I don't know if that's what Aida is doing or not. I just created a "hypothetical" to show how inefficient and restrictive programming for multiple CPU "vendor IDs" would be, as well as the inherent problems it would create vs. coding for instruction sets.
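To make that hypothetical concrete, here is a rough Python sketch (the function names and the feature/model tables are made up for illustration): dispatching on feature flags needs one branch per instruction set, while dispatching on vendor/model needs a new table entry for every CPU ever released.

```python
def run_kernel_by_flags(features):
    """Pick the best code path from advertised feature flags (CPUID-style).
    One branch per instruction set, regardless of who made the CPU."""
    if "sse2" in features:
        return "sse2 path"
    return "x87 path"

def run_kernel_by_model(vendor, model):
    """Per-vendor/per-model dispatch: every new CPU needs a new entry,
    and anything unknown silently falls to the slow path."""
    table = {
        ("GenuineIntel", "P4"): "sse2 path",
        ("AuthenticAMD", "K8"): "sse2 path",
        # ...one entry per CPU model, forever...
    }
    return table.get((vendor, model), "x87 path")
```

With flag-based dispatch, a future CPU that advertises SSE2 still gets the fast path; with model-based dispatch it gets the slow path until someone updates the table.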

You know I tried to back off of this, Bubba, and honestly we are talking about two different sides of the same coin here.

How many virtual processors have you made? It is very relevant to this. I ask you a question or address you, and I get Agner's blog or a wiki. I want you to answer me. I tried to drop out of this part of it, you know. We are talking about different things here. You seem to think that if a CPU has registers for an instruction set, its implementation is the same as every other CPU's; the exact same functions may be run, but the speeds are not the same. That tells us there is much more to it than just running the code because the CPU says it can run it. Hell, the CPU may be faster running standard int or float than it is using the (insert extension) registers. I really have no idea, and evidently you have less of an idea about this than I do.

Like I said, we are talking about different things, but as to your point, I agree that Intel should not intentionally cripple code. I am not talking about that, though, and that is the issue here. It is completely on me; I moved outside the scope of this thread, and for that I am sorry.
 
Ok Archer, I get more of what you're saying now. The timing of your comments made things confusing: everyone was talking about one thing, then you brought up a different point, and the transition never really happened in the conversation, so there was a lot of confusion.

I agree with your point to an extent. What I believe you are saying is: even if both architectures are executing SSE2 code, one may do it far more efficiently than the other, or there may be other places in the code that are slowing one CPU more than another. So, if you're just looking at a pure performance percentage, it may seem like something questionable is happening, when really you can't know for sure without looking into the actual code.

If that is what your point is, I think it's valid, but only to a certain extent, like I said. If you take the same CPU, mark it as an AMD, then mark it as an Intel and see a 25% improvement, obviously something is wrong in the code not allowing an AMD CPU to run as it should, intentional or not, and that software cannot be trusted if used as a benchmarking tool. However, if the difference is only, say, 5% or so, there are far too many variables in what could be happening to say whether there is a problem or not. For instance, perhaps a portion of code is executed differently on an AMD CPU due to compatibility or stability issues, which causes a bit of a slowdown for that portion; we can't say. However, I feel that if that delta ever rises above 10%, there is most likely something very wrong/shady going on. At the very least, the developer should be contacted to see why the discrepancy is happening, and that program should not be used for benchmarking until an acceptable answer is received.
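For what it's worth, the thresholds above are easy to state as a tiny check (the 5%/10% cutoffs are the judgment calls from this post, nothing official):

```python
def vendor_id_delta(score_real, score_spoofed):
    """Percent change when the same CPU is re-run under a spoofed vendor ID."""
    return (score_spoofed - score_real) / score_real * 100.0

def verdict(delta_pct):
    """Rough rule of thumb: >10% is suspect, 5-10% is a gray area."""
    if abs(delta_pct) > 10.0:
        return "suspect: contact the developer"
    if abs(delta_pct) > 5.0:
        return "gray area: too many variables to call it"
    return "within normal variation"
```

So a CPU that scores 100 as "AMD" and 125 as "Intel" is a 25% delta and lands squarely in "suspect".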
 

Yeah, that is what I was saying for the most part, and I do think these tests are a good thing, but Bubba and I were just talking about different things.

I really would like to see a code breakdown done professionally, because people who are not paid to do it do not have the time.
 

@ Hitman, the problem is he's been arguing since the first page that Intel did nothing wrong at all and never should have settled, and he has been trying to poke holes in all of this the whole way. I've been trying to explain why his first statement (and many subsequent statements) is wrong, but it's been like :bang head

I'm beginning to think that no matter how we explain it or how much information we give him, it won't really matter, as he's showing a refusal to believe that Intel did something shady and underhanded with its compiler despite all the evidence and information to the contrary.

Now, if I tell a program that it is working with one CPU instead of the other, and I am using the same CPU for all the runs, it does make a difference as it pertains to registers and system calls, does it not?

Perhaps I am missing something here, but telling a program that an Intel is an AMD and an AMD is an Intel is not a legitimate test, because they are different. The registers are not the same, and how the program approaches functions would differ, would it not? I see what you are doing, but I just do not see it working out the way you plan.

And let us not forget the AVX instructions, on which AMD FX CPUs have tested slower. The fact is Intel made the compiler for Intel chips. So unless your chip is exactly the same as an Intel chip, you will not bench the same as an Intel chip. As far as I know, the clones stopped with the i486.

I think the thing to do, if you really want to see what is going on, is get benchmarks from the days when AMD was ruling the roost. If the Intels come out with a lead that matches the percentages from modern testing software, then the testing being done here is just an exercise and nothing more.

You know, I actually went through all of this in a GD thread a few years back. Intel compilers are for Intel processors. It is like saying that PhysX should run on AMD cards. Nobody is forcing software vendors to use the Intel compiler.

Well gee how do they get to be 100% compatible? Clone!

Like I said, just go back a bit and run some older benches from the days when AMD ruled the roost. That will give you the answer. If it pans out that the AMDs do compete well with the Intels, then continue your quest.

EDIT: By the way, do you know what a compiler is? Yes, I went to college for programming.

SSE2, SSE3... :blah: are the same thing on whatever CPU they happen to be resident on, are they not?

It's all software code and operating systems... when they ask for code, math, and whatnot, they are expecting one answer, not a possible two or three for the same thing.

They serve the same function but they are not the same.

Hey it seems like a smear thing and I am interested in your tests. I could be wrong and I am glad to be because it keeps me honest.

As to your question? They did it to stop the whiners who got big brother to step in, and they are now serving other manufacturers with the best compiler out there.

Well, it came out wrong on my end. But the fact is, no matter anyone's opinion, it has been ruled on, and that is it. So my view is just that, a view, and I disagree with Intel's capitulation.

Why? Well it has been beaten like a dead horse and it is now revived.

Now on with the tests!

@ Archer -

To say that the instruction sets are the micro-architecture is like saying that the instructions that come in the box with a model kit are the model itself. A CPU does not HAVE SSE or AVX; a CPU is capable of RUNNING SSE or AVX. As Hitman tried pointing out to you, and you've skipped right over, the instruction sets are exactly that: a set of instructions. Either a CPU is capable of running those instructions or it is not. HOW Intel or AMD designs their CPU to run those instructions is completely different. What you've been trying to say is that, for example, the instruction Z = (A + B + C) / D somehow gets changed depending on the CPU it's run on, which is completely false. What the instruction sets have done is combine several instructions into one. So, hypothetically, the x86 instructions would be X = A + B, Y = X + C, Z = Y / D, and it takes 3 steps to come to the value for Z. Intel created SSE way back so that it would combine those 3 steps into ONE instruction, Z = (A + B + C) / D. How many registers etc. Intel or AMD design into the circuit to execute the instruction faster DOES NOT CHANGE THE INSTRUCTION.
Not even Intel itself argued that standpoint when they got caught. They tried to argue, "well, our compiler runs the instruction sets for our CPUs and reverts to x86/x87 so as to maintain maximum compatibility"; in other words, they were saying, "don't blame us... we don't know which CPUs are compatible with the instruction sets and which are not." And that "excuse" failed because Intel itself was the company that created the industry standard of the CPUID containing the "flags" that tell software which instruction sets a CPU is compatible with and which ones it is not.
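A toy model of the two dispatch policies being argued about (illustrative Python, not Intel's actual compiler code): one picks the code path from the CPUID-style feature flags alone, the other falls back to x87 whenever the vendor string is not "GenuineIntel", even though the flags say SSE2 is supported.

```python
def fair_dispatch(vendor, features):
    """Trust the feature flags: any CPU advertising SSE2 gets the SSE2 path."""
    return "sse2" if "sse2" in features else "x87"

def biased_dispatch(vendor, features):
    """Gate the fast path on the vendor string as well as the flags,
    so a non-Intel CPU falls to x87 no matter what it advertises."""
    if vendor == "GenuineIntel" and "sse2" in features:
        return "sse2"
    return "x87"
```

Run the same hypothetical SSE2-capable CPU through both and flip only the vendor string: the fair dispatcher gives the same answer either way, while the biased one downgrades the "AuthenticAMD" run, which is exactly the effect the vendor-ID-swap tests in this thread are probing for.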
 
I think we all need to step away from the computer for a few minutes and take deep breaths.
 

This is one last comment on this, and then I am dropping it. I tried to be fair and back away from it.


Bubba, drop it; we are talking about two different things. I admitted bringing up things that were out of the scope of this thread, but you continue to apply what I was talking about to your little mission.

I never said the instruction sets and micro-architecture were the same, and I do not appreciate you spreading that. The fact is, the processors are not the same, and nobody is going to argue against that. My issue is that because they are not the same, they will not run the code the same way. I got into actual design and backed down, allowing you the chance to do the same. You, again, after I backed out, continue.

Archer said:
Like I said, we are talking about different things, but as to your point, I agree that Intel should not intentionally cripple code. I am not talking about that, though, and that is the issue here. It is completely on me; I moved outside the scope of this thread, and for that I am sorry.
 
Keep 'em coming...

When you have a chance.. a summary of completed results in the first post would be most cool. :)

Bench / Result
Bench2 / Result

etc..
 
I'll work on that tonight.

Some have questioned MaxxMem, so I just ran it. The confusing thing about that one: running it with all the regular IDs, the results all appear to be within normal deviations. But when I ran it under the Bubba Hotepp vendor ID, all the results were within normal deviation except latency, which suddenly dropped. That has me scratching my head.
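If it helps anyone reproduce this, here is one crude way to decide whether a run is "within normal deviation" (the latency numbers and the 2-sigma cutoff below are made up for illustration, not my actual MaxxMem results):

```python
import statistics

def outside_normal_deviation(baseline_runs, new_result, k=2.0):
    """Flag a result more than k standard deviations from the mean of
    repeated baseline runs. k=2 is an arbitrary but common cutoff."""
    mean = statistics.mean(baseline_runs)
    sd = statistics.stdev(baseline_runs)
    return abs(new_result - mean) > k * sd

# Hypothetical latency runs (ns) under the regular vendor IDs:
baseline = [50.1, 49.8, 50.3, 50.0]
# A sudden drop like the one described would trip the check:
print(outside_normal_deviation(baseline, 42.0))
```

Anything that trips this on only one metric, only under a spoofed vendor ID, is worth a second look.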
 