
Is there a single metric I can use to compare CPU performance?


harryboy

New Member
Joined
Dec 22, 2013
Hi, I am building a Windows tool in C# that checks a user's PC hardware and tells the user whether their PC is capable of running certain software (our in-house proprietary program).

I want to compare CPU performance. I can obtain all the information about the CPU (e.g. number of cores, clock speed, L2/L3 cache, etc).

Is there a metric I can use to judge whether one CPU has better performance than another?

For example, can I simply multiply the number of cores by the clock speed and use the result as a metric for comparison?
Do I need to include cache size in the calculation as well?

I want a calculation that gives each CPU a ranking, and if a certain CPU scores below a certain threshold, I can indicate that it is below the required performance.

Thanks for any help on this.
 
The comparisons you're after are usually made by running software on a CPU. There are different benchmarks for the different effects a CPU may display, but ultimately the end game is work done per unit time. With all the variables involved (motherboard, cache, BIOS, other PC components, the behavior of the software itself), there's no one-size-fits-all comparison test. But some come closer than others.

Take a look at some of the info gathered already, go from there.

http://www.cpubenchmark.net/

http://hwbot.org/
 
Harryboy, please only create one thread per topic. I removed the duplicate thread for you.
 
Thanks for your quick reply, Robert17. I have looked at the links you listed, thanks. What I need is a general rule I can go by for the purposes of a test. I know it will not be a catch-all test for every application, just a general rule of thumb.
Is simply multiplying
clock speed * no. of cores * cache
wrong, for example?
 
It wouldn't be that simple, sadly. More variables come into play, architecture above all: if you have a Core 2 Quad and a Haswell running at the same speed, they wouldn't be equal. You'd have to test them all (at least that would be my best guess) or use existing databases like Robert listed earlier.
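To make the point above concrete, here is a minimal sketch (in Python for brevity; the OP's tool is in C#, and the clock figures are illustrative) of the cores-times-clock score. A Core 2 Quad and a Haswell downclocked to match it score identically, even though their per-clock throughput differs enormously, which is exactly why the naive formula can't rank them:

```python
# Naive metric proposed in the thread: cores * clock speed.
# Chip names and clocks below are illustrative only.
def naive_score(cores, clock_ghz):
    return cores * clock_ghz

core2_quad = naive_score(4, 2.4)  # a Core 2 Quad class chip
haswell = naive_score(4, 2.4)     # a Haswell quad downclocked to 2.4 GHz

# Identical scores, yet the Haswell does far more work per clock.
assert core2_quad == haswell
```

The metric has no term for architecture (instructions per clock), so any two chips with the same core count and frequency collapse to the same rank.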
 
Is simply multiplying clock speed * no. of cores * cache wrong, for example?

Really wrong. It isn't remotely that easy; as was said, way too many variables involved.

Christ, slightly old thread here, lol. OK, the OP hasn't been back since his last reply.
 
I get that this is an old thread, but a good starting point is looking at Core 2 CPUs vs. AMD CPUs of the same era, Core 2 being more efficient per clock based on the amount of L2 it had and the length of the pipeline. A fun old fact: when Core 2 replaced Intel's P4 line, it took a P4 running at 4 GHz to match a Core 2 running at 2.4 GHz. The P4 had a really long pipeline in order to reach higher clock speeds, but the way that pipeline was designed made it really inefficient at doing work per clock compared to Core 2.

Cache does have a role to play, but L3 doesn't really seem to matter much. L1 and L2 play a bigger role, yet Intel chose to go with a smaller L2 in its i series compared to Core 2. If you played the SuperPi 1M game, high clock speed plus a large L2 meant the fastest time, though I'm willing to bet that with the higher clock speeds of the current i series, a large L2 has become moot.
 
I really wish that you guys who know what's what inside a processor would keep this thread going.
I know very little about what is actually going on under the lids of these things.
 
Well, I'm not as active as I used to be, and those who do know tend to stay quiet. The main reason is that these discussions bring out the fanboys from both sides, and those who stick to facts don't want to get involved in the mix. Needless to say, if you have a question or something, I will do my best to answer it. Most of what I know is from first-gen i-series and older; guess you could say I'm more a Core 2 kind of guy on the knowledge side.
 
I clicked on this thread simply to see how many different ways people came up with of answering 'no'. :)

It's really not possible with any degree of accuracy, imo. To take a simple and obvious example, AMD Bulldozer-onwards CPUs have fewer floating-point units per core than equivalent Intel chips (unless there are exceptions I'm not aware of, which is possible). Whilst this is actually a good idea for a number of reasons and won't affect most use cases, if the OP's software were very maths-intensive, then it might. And this is just one blunt-instrument example.

Another example: I upgraded my CPU from an 1100T to an FX-8350. One would think the performance gain would be quite small, and it is for the most part, but the new chip has hardware-accelerated AES, meaning read and write times to my encrypted SSD are an order of magnitude faster (maybe more).

However, having now read the OP's post and found they're testing for a specific piece of software, the answer is probably quite obvious: write a dummy version of the software as a test suite. The potential customer runs a "Test if your PC meets the requirements for this software" program, which runs through some test cases and checks whether they complete in X seconds.

The OP ought to have almost everything needed to build such a test program already, since the piece of software they're testing for is already written.

It's a piece of work, but if this matters to the OP, then that's the way to do it.
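The dummy-benchmark idea can be sketched like this (Python for brevity; the OP would port it to C# and swap in routines from the real software — the workload and the time budget below are placeholders, not anything from the thread):

```python
import time

def representative_workload():
    # Stand-in for the real software's hot path; in practice the OP
    # would call the same routines the shipping program uses,
    # fed with fixed dummy inputs.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

def meets_requirements(limit_seconds=5.0):
    # Time the dummy workload; if it finishes within the budget,
    # the machine should be fast enough for the real program.
    start = time.perf_counter()
    representative_workload()
    elapsed = time.perf_counter() - start
    return elapsed <= limit_seconds
```

In practice you would run the workload a few times and take the median to reduce noise from other processes, and pick the time budget by measuring a machine known to be just fast enough.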
 
Write a dummy version of the software as a test-suite ... have it run through some test cases and see if they complete in X seconds.

Well, to add onto this: if making said program, he would need all kinds of hardware to test on, and I would say run SiSoftware Sandra and use the ALU and FPU scores.

Another thought: find the minimum AMD CPU that would be needed, be it an Athlon on Socket 939 or whatever, and anything that meets or exceeds it would be a yes; likewise for Intel. The program could just come back and say yes or no.

Though I think using the ALU and FPU scores from Sandra would be a safer bet. There are some setups for comparison in there, but the problem is the countless number of Intel and AMD CPUs to test. He might just poll the people here and at other forums to run their CPUs at stock speed, assuming he/she could find all the models to put into the list for the program to cross-check.

Really, I would just like to know what kind of program we are talking about. It would be easier to at least give a minimum speed, be it AMD or Intel, below which a CPU wouldn't be able to handle it.
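The yes/no model-list approach suggested above could look roughly like this (a Python sketch; the model strings, verdicts, and cutoffs are made up purely for illustration):

```python
# Hypothetical minimum-model lookup: map CPU family name prefixes
# (as reported by the OS) to a pass/fail verdict.
MINIMUM_OK = {
    "Intel(R) Core(TM)2 Quad": True,   # assumed at or above the floor
    "AMD Athlon(tm) 64": False,        # assumed below the floor
}

def cpu_supported(cpu_name):
    # Return True/False for known families, or None for unknown
    # models, where a timed benchmark would be the fallback.
    for prefix, verdict in MINIMUM_OK.items():
        if cpu_name.startswith(prefix):
            return verdict
    return None
```

The obvious weakness, as noted above, is keeping the table current: every new CPU model needs an entry, which is why the timed dummy benchmark is arguably the more maintainable route.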
 
Well, to add onto this: if making said program he would need all kinds of hardware, and I would say run SiSoftware Sandra and use the ALU and FPU scores.

I think you misunderstand. If I write a program that must do operations on inputs X, Y and Z (let's say it's a scientific program and these are real-world values it has to crunch) and I want to know if someone's hardware is up to running my software, I create a dummy version of the software which does some of the same operations but on fixed values, say 1, 2 and 3. That's all the program does; it then measures how long it takes to run with the dummy inputs. I just choose an appropriate time it must complete within. If it doesn't complete the dummy operations in X seconds, I know the real version of the software would be too slow.

No external software or knowledge of CPUs or other hardware is necessary. The test just needs the same routines built into it that the final software uses, which the OP has already written. The check can run as part of the install process, or as a separate program if it needs to happen before purchase.
 
I really wish that you guys who know what's what inside a processor would keep this thread going.
I know very little about what is actually going on under the lids of these things.

I learned a lot from lurking on the Beyond3D forums.
http://forum.beyond3d.com/

Thanks to RGone for pointing me in the right direction a while back!
This thread is the one I started reading, and between that and quick checks of Wikipedia, it was a godsend of information.

http://forum.beyond3d.com/showthread.php?t=54018
 