Maybe the Next Big Thing will be software truly optimized for the available hardware.
The approach still seems to be adding new instruction groups for features that can be sped up.
@DaveB
Look into the RISC-V boards that will be coming out. Want to advance? Overclock those guys.
@Alaric, you missed my point.
If the current speed wall lasts long enough, we may start getting more software actually coded for multiple cores.
More of the same. Just speculating on how the industry may try to keep bumping perceived performance for the end user against the current hardware wall. If all we end up with are terminals, OCF will have a forum for overclocking displays and knocking down response time, I'm sure. "I only paid $8400 for a 3 ms monitor, then got it down to 2.1 ms with a chiller!"

Dolk, what do you think about multiple low-power, multi-core SoCs on motherboards and everything moving to the "cloud"?
This was just a reference to Intel's uncanny ability to watch AMD predict the future, then step in with a dominant product when said future arrives. It looks like AMD laid the groundwork a couple of times (quad core, 64-bit) while Intel quietly geared up The Machine until AMD's ideas were about to go mainstream. Then Team Blue's superior capital allowed them to hit the ground running while AMD never managed to capitalize on their own breakthroughs.

Or AMD's advantageous SMT utilization will get good code from most sources a week before Intel announces a 40 GHz single core that screws up everything.
Sadly, that is where the industry as a whole is heading. There is a lot of talk of PCs moving to the cloud, leaving you with nothing but a terminal. There is still a lot of money in the PC business, though, so it won't go away entirely. But that doesn't mean it won't get cheap in the future, either.
Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours using a single processor core, and a particular part of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20 times (1/(1 − p) = 20). For this reason, parallel computing with many processors is useful only for highly parallelizable programs. https://en.wikipedia.org/wiki/Amdahl's_law
With multiple cores we are limited by Amdahl's law.
High single-core speed will lose to multiple cores on highly parallelizable work, and it will win over multiple cores when the work is not highly parallelizable.
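A quick sketch of that limit in Python (the function name and the core counts are just mine for illustration; the formula is straight from the Wikipedia passage above, speedup = 1/((1 − p) + p/n)):

```python
# Amdahl's law: theoretical speedup on n cores when a fraction p
# of the work can be parallelized.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Wikipedia's 20-hour example: 1 hour serial, 19 hours parallel (p = 0.95).
for n in (2, 4, 16, 256, 1_000_000):
    print(f"{n:>9} cores -> {amdahl_speedup(0.95, n):6.2f}x speedup")

# Even a million cores tops out just under 1 / (1 - 0.95) = 20x,
# because that one serial hour never gets any faster.
```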
The real problem is that memory speed has progressed very slowly, and that is why x86 leans so heavily on branch prediction and prefetch. If we had much faster memory, the processor would crunch numbers faster than a Prime95 small-FFT run shows today. Processor speeds are great now; the real obstacle to pushing them higher is heat with the materials we have (carbon nanotubes are stable up to 750 °C). Software compiling for Intel and AMD is done well now and is not the problem; it is the lack of hardware material progress.
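To see the memory wall for yourself, here's a rough NumPy sketch (my own helper and sizes, not from anything above): the same sum runs at a much higher effective bandwidth when the data fits in cache than when it has to stream from RAM.

```python
import time
import numpy as np

def effective_gb_s(a: np.ndarray, repeats: int) -> float:
    """Rough effective read bandwidth for repeatedly summing the array."""
    start = time.perf_counter()
    for _ in range(repeats):
        a.sum()  # streams the whole array from RAM (or from cache)
    elapsed = time.perf_counter() - start
    return a.nbytes * repeats / elapsed / 1e9

big = np.ones(64_000_000, dtype=np.float64)  # ~512 MB, far bigger than any cache
small = np.ones(16_000, dtype=np.float64)    # ~128 KB, fits in L2 cache

print(f"RAM-resident:   {effective_gb_s(big, 10):6.1f} GB/s")
print(f"cache-resident: {effective_gb_s(small, 10_000):6.1f} GB/s")
```

The exact numbers depend on the machine, but the gap between the two lines is exactly the gap that prefetchers and branch predictors exist to paper over.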