I don't think I understand. To run multiple instances of Super Pi at the same time, all you have to do is open the program as many times as you want. Unless you set the CPU affinity yourself, they'd all end up fighting over the same core and your times would be horrible.
Now, if you're talking about benchmarks that use all available cores/threads, you'll want to look at wPrime.
If you're talking about assigning two cores to do the work of one in Super Pi, we discussed this a few pages back and how it's limited by the hardware. It's physically impossible (as of now) to split the instructions so that two cores can do the work of one at the same time. While that would be the absolute best, right now we're stuck with the law Dolk brought up: http://en.wikipedia.org/wiki/Amdahl's_law.
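To put a number on what that law actually costs you, here's a quick sketch of Amdahl's formula (the function name and example fractions are just mine for illustration):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Maximum speedup when only part of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even if 90% of the work parallelizes perfectly, 4 cores give nowhere
# near a 4x speedup, and no number of cores can ever beat 10x.
print(round(amdahl_speedup(0.90, 4), 2))     # 3.08
print(round(amdahl_speedup(0.90, 1000), 2))  # 9.91
```

That's why throwing more cores at a mostly-serial workload like Super Pi doesn't get you much.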
Right now, multithreaded work doesn't end until the slowest thread finishes. So if you have four threads, 1, 2, 3, and 4, and 1 finishes in 25s, 2 in 35s, 3 in 20s, and 4 in 30s, the whole task takes 35s, since the finished threads physically can't go back and help the slowest one. When we can start doing that, the times will drop insanely.
Yeah... that's a law every scientist hates, and it's a legit one at that. The way cores are designed, they simply read the voltages taken in through the motherboard and BIOS I/O and calculate everything from there. If those voltages were to somehow cross, you'd risk shorting out the CPU or simply ending the process then and there.
If the software were smart enough to detect when the other cores have finished their work, it could subdivide the remaining work on the final core and hand it to the others for parallel processing. The flaw is that this can actually extend the total time: because of the hardware limitation that the slowest core always ends the process, the subdivision would keep happening until you either hit a return of zero or the computer crashes from the infinite loop. It could drag on for ages, so hopefully the software would be designed to terminate such a process before it kills the system. Either way, you still have that hardware limit...
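One way that "redistribute the leftover work" idea gets done in practice is to cut the job into many small chunks up front and let idle threads keep pulling chunks from a shared queue, so nobody sits around waiting while one core grinds on a big piece. This is just a toy sketch (the chunking and the stand-in computation are mine, and it's not how Super Pi actually works), but it shows the shape of it, including a clean exit condition instead of an infinite loop:

```python
import queue
import threading

def run_chunked(chunks, n_workers):
    """Workers pull small chunks from a shared queue until it's empty,
    so a fast worker automatically picks up more of the remaining work."""
    work = queue.Queue()
    for chunk in chunks:
        work.put(chunk)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                chunk = work.get_nowait()
            except queue.Empty:
                return  # nothing left: exit cleanly, no infinite loop
            partial = sum(range(chunk))  # stand-in for real computation
            with lock:
                results.append(partial)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# 16 small chunks across 4 workers; all 16 get processed.
print(len(run_chunked([1000] * 16, 4)))  # 16
```

The catch the post above points at is real, though: if the chunks get too small, the overhead of queueing and synchronizing swamps the actual work, so you can't subdivide forever.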
It would be an incredible advance in computer science for someone to destroy that limit.