
GHz vs Core count

It was MPEG-2 rendering in Vegas, and it isn't perfect to begin with; not all cores are loaded like you would see in HandBrake. That might have been the culprit.
 
Alright so I'll give you guys an inside scoop on what is going to happen within the next few generations with CPUs, and talk about why we are a bit stuck. There are a few companies that will continue to push physical core count and increase the number of threads within a core in order to meet "marketing demands". Whether or not this is true or just a means to sell will depend on what we have when we get there. The speed of the cores will largely not change, and may in fact decrease depending on the CPU architecture.

So why are we here and how did we get here?

First, the why: we are at this point because single-core CPUs could not be exploited any further back in the P4 days. AMD and Intel both sat for a bit and thought about what they could do. Deeper pipelining in the core, increased speeds, and multi-core were all on the table. In the end AMD won with a reduced pipeline (thanks to 64-bit) and multi-core on chip. Intel followed up by dominating that market until (arguably) the present day. At the time hardware transitioned to multi-core, software was being bottlenecked by the hardware and needed room to grow. These days it's the other way around: software bottlenecks the hardware, because we never really took the time to define how software was going to work with multi-threading. We do well with it, but there still isn't a clean, native way to multi-thread. You have to call the cores/threads and do process management and the like. Windows 10 and newer kernels on Unix/Linux help handle the process scheduling so that programs don't all end up calling Core 0 all the time, but again there is no clean way of going about it.

And really, do we need it? At this point we have to look at the system architecture and decide if this is the best we can do. Is the x86 architecture that amazing? Can we really exploit that system architecture any further? What about RISC-V and PowerPC-type architectures? What about removing HDDs and memory altogether and going with a uniform memory that can even be abstracted as dynamically configurable L3 cache for the CPU? Should the CPU handle all the tasks, or should we have special processors (beyond GPUs) to handle other tasks, with their own software agents, so that it's basically two systems in one?
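To illustrate the "you have to call the cores/threads and do process management yourself" point, here is a minimal Python sketch of manual thread management with optional CPU pinning. The pinning call (`os.sched_setaffinity`) is Linux-only, which itself illustrates that there is no portable, "clean" API; the worker task and thread count are purely illustrative:

```python
import os
import threading

results = []
lock = threading.Lock()

def worker(n):
    # CPU-bound toy task: sum of squares up to n.
    total = sum(i * i for i in range(n))
    with lock:
        results.append(total)

# The OS scheduler normally decides core placement; pinning is opt-in
# and manual. os.sched_setaffinity exists only on Linux, so guard it.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, os.sched_getaffinity(0))  # keep current core set

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # one result per thread
```

Note that even this small example needs explicit thread creation, locking, and joining; the language gives you primitives, not a native multi-threaded programming model.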

There are a lot of questions that need to be researched and answered. However, the computer industry is running full steam ahead, and companies like Intel, AMD, Dell, HP, SuperMicro, Asus, Gigabyte, etc. are all running full steam as well. No one wants to talk about the end of x86 because that's nearly a trillion-dollar industry that can't be interrupted by our sensitive economic system. There are a lot of moving pieces here that will eventually need a nuke thrown in to break away from tradition.

Now what about speed? I talked a bit about architecture and cores, but why are we not seeing speeds increase? This is actually due to physical issues, as we have now gone fully quantum in our CPUs. I don't mean we have quantum CPUs; I mean that we are working within quantum states. You see, it's not the fact that FETs are getting so small that current leaks through the materials, causing stabilization issues. It's not the fact that we have to use super-powerful lasers called EUV to pattern our transistors. It all comes down to power. We can easily push more power into a CPU, but we cannot mitigate the heat or carry the massive current spikes required by large switching events.

As we have continued to decrease FET size, we have also been shrinking the copper wires that bring voltage and current to the CPU. CPUs are designed in layers, with the transistors always on the bottom and the wires above them. Wires connect up the transistors to create the logic, and they also deliver the voltage to the transistors. These wires have gotten so thin that their resistance increases rapidly as heat increases. Increased resistance means Vdroop, and Vdroop leads to transistors not switching all the way from on to off, or vice versa. If they get stuck in between, the state becomes nondeterministic, and that typically gets you a stall or a system reboot.

Well, why not bundle the wires, then? That causes heat-trapping issues. If we add more wires, the thickness increases, and then we still run into the question of how we get the heat out. This is the biggest puzzle in our silicon development at the moment. Everything else is secondary, in my opinion, before anything else can happen, even deciding if x86 is the right architecture. You can go out and read several papers on transistors being able to switch at 5-100 GHz or even greater at room temperature. However, those are built in a lab with very few layers, under controlled switch-loading states. CPUs, and most other microchips, run in chaotic environments, meaning it's nondeterministic what kind of load will be generated at any given time within the system.
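The resistance-to-Vdroop feedback described above can be put in rough numbers. A minimal sketch using the standard linear temperature model for copper resistance (temperature coefficient α ≈ 0.0039/°C); the wire resistance and current values here are illustrative placeholders, not real CPU interconnect figures:

```python
# Copper resistance rises roughly linearly with temperature:
#   R(T) = R0 * (1 + alpha * (T - T0))
# and the IR drop along a supply wire (Vdroop) rises with it.
ALPHA_CU = 0.0039  # per deg C, approximate temperature coefficient for copper

def resistance(r0_ohm, t_celsius, t0_celsius=20.0):
    """Wire resistance at t_celsius, given R0 measured at t0_celsius."""
    return r0_ohm * (1.0 + ALPHA_CU * (t_celsius - t0_celsius))

def vdroop(r0_ohm, current_a, t_celsius):
    """IR drop across the wire at the given temperature."""
    return current_a * resistance(r0_ohm, t_celsius)

# Illustrative numbers: 1 ohm of supply-path resistance, 0.1 A of current.
for t in (20, 60, 100):
    print(t, round(vdroop(1.0, 0.1, t), 4))
```

Running the loop shows the drop growing with temperature, which is the vicious circle the post describes: heat raises resistance, which raises the droop, which pushes transistors toward that stuck in-between state.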

So in the end, we wait. We wait for material scientists to figure out a way to bring carbon/graphene nanotubes to silicon devices, we wait for EUV to become sustainable so that we can get to sub-5 nm development, and we wait for better internal system agents (that lovely little Linux box that is in every Intel and AMD chip) to be able to handle power and thermals. We wait and see.
 
Yes I think we all know this. AMD is stuck at 4 GHz and Intel is stuck at 5 GHz and the only way to sell something new is to add cores and threads. Do we need them ...................... no. Do we want them ................... sure.

So I built a bunch of Ryzens, OC'd several to 4 GHz + or -, got bored and sold them all. Now I've built a bunch of Coffee Lakes, even delidded one, and got a couple to 5 GHz + or -, am now bored and might sell them off. So what could I build next? Gotta be an AMD Threadripper with way more cores than I could ever need. Already built some Intel dual E5-2670 workstations with 16c/32t just to play which I've sold off so why not.

Some guys can just build a system, run it for years, and that's that. Can't do it, not gonna do it, must keep building, selling building, etc. Wash, rinse, repeat on and on and on ............ :screwy:
 
@DaveB

Look into RISC-V boards that will be coming out. Want to advance? Overclock those guys :)
 
If the current speed wall lasts long enough, we may start getting more software actually coded for more cores. Poor AMD frequently seems just a little too far ahead of its time. 64-bit? Nice! Especially with quad core! By the time software was catching up, Intel was on a roll, and then Bullsnoozer. Now AMD has some breathing room and may well bring speeds up before the Next Big Thing. Could be a mini Golden Age for us, with competitive chips from both houses and better software to take advantage of them. Either hardware or software always seems to be playing catch-up. Maybe the Next Big Thing will be software truly optimized for the available hardware.

Or AMD's advantageous SMT utilization will get good code from most sources a week before Intel announces a 40 GHz single core that screws up everything. LOL

Dolk, what do you think about multiple low-power, multi-core SoCs on motherboards and everything moving to the "cloud"?
 
Part of the problem, as I see it, is that for the masses we're already well past "good enough". Who is driving the need for more performance? The approach still seems to be adding instruction groups for new features that can be sped up.

For my personal interests, I'm watching carefully what Intel does with AVX-512. See if rumours come true that it will be rolled out to mainstream processors, and in what form (one unit or two?). My worst case fear is they'll limit it to HEDT/Xeon lines, which will get rather spendy.

To me it feels like AMD have pretty much given up on FP64 performance across both their CPUs and GPUs. Disappointing if understandable, in that the masses don't have much need for it.
 
@Alaric, you missed my point. Software has 0 control over the total frequency output of the CPU. It has to do with physics at this point. The Architecture inside is not going to change. AMD failed on trying to go physical core only, and Intel won on going Thread+Core. This part is due to software utilization, but it has nothing to do with how fast a core can run.

Multi-core SoCs are a trick in cost reduction. AMD only designs their CPU cores now; everything else is done by Chinese or other companies. In fact, their current memory unit is an IP block that they can't even look into. It's like drop-in ARM, but worse. And those tricks still have the same issues that Intel faces. The physical aspects of silicon and copper within our CPUs have reached a new limit in the manufacturing and processing world. We NEED new materials, such as graphene nanotubes, to be compatible with current silicon development processes before we can advance.

All we are going to see from now on is tricks. Tricks to enable software to do more. Somehow that has led us down a very stupid road, which is the cloud. Put everything into the cloud so that we don't have to waste power elsewhere.
 
I have a feeling it's getting cloudy. Fun overclocking days will soon be over. Doom. All you get is a video signal, and you send commands for whatever you need to get done?
 
Sadly that is where the industry as a whole is heading. There is a lot of talk of PCs going cloud and all you get is a terminal. There is still a lot of money in the PC business though, so it won't go away entirely. But that doesn't mean it won't be cheap in the future either.
 
There might be a silver lining to cloud... for cloud to really take off, we need great quality, always available network connections. I think there is a lot of improvement needed for that to happen.
 
@DaveB

Look into RISC-V boards that will be coming out. Want to advance? Overclock those guys :)

RISC is a blast from the past for me. Where I was employed prior to my retirement, we designed, built, and fielded a system for the IRS at 5 sites back in 1993-1995 and ran and updated it up until 2015. The second server evolution was HP PA-RISC servers, probably back in 1999 or so. We hosted the back-end processing on it running under HP-UX. The front-end applications for the IRS employees ran under Windows 3.1, then NT, XP and finally Windows 7. In subsequent refreshes, the PA-RISC servers were headed for EOL and were replaced with Itanium servers so the customer could keep everything on HP-UX. The last iteration moved to Itanium blade servers.

Looks like another cluster-F headed nowhere on the face of it. A lot of independent groups and none of them have passed the in-development RISC-V compliance suite. I've dealt with a few compliance efforts with multiple vendors and they never work out. Everyone thinks they're smarter than everyone else and ends up going their own way. Kind of like the UNIX/LINUX fiasco of the past 40 years. I guess we could always hope for the best.
 
@Alaric, you missed my point.

I think it was mutual. LOL
If the current speed wall lasts long enough we may start getting more software actually coded for more cores.

I just meant real performance increases (if any) may have to come from software for a while, as the consensus here seems to be hardware advancements are grinding to a halt.

Dolk, what do you think about multiple low-power, multi-core SoCs on motherboards and everything moving to the "cloud"?
More of the same. Just speculating on how the industry may try to keep bumping perceived performance for the end user with the current hardware wall. If all we end up with are terminals OCF will have a forum for overclocking displays and knocking down response time, I'm sure. "I only paid $8400 for a 3 ms monitor, then got it down to 2.1 ms with a chiller!"

Or AMD's advantageous SMT utilization will get good code from most sources a week before Intel announces a 40 GHz single core that screws up everything.
This was just a reference to Intel's uncanny ability to watch AMD predict the future, then step in with a dominant product when said future arrives. It looks like AMD laid the groundwork a couple times (quad core, 64 bit) while Intel quietly geared up The Machine until AMD's ideas were about to go mainstream. Then Team Blue's superior capital allowed them to hit the ground running while AMD wasn't able to finally capitalize on their own breakthroughs.

One post went a few directions, so I get that it wasn't as cogent as intended. I'm in the deep end of the intellectual pool here, without my floatie. Don't mean to splash anyone with dumb. :D
 
Sadly that is where the industry as a whole is heading. There is a lot of talk of PCs going cloud and all you get is a terminal. There is still a lot of money in the PC business though, so it won't go away entirely. But that doesn't mean it won't be cheap in the future either.

How many countries have such bandwidth available? Don't want cloud taking over honestly, i want my storage and computer physically next to me not in some data center @ Facebook hq.
 
So for the cloud thing to happen, you kind of just need to be able to sell it now. It's happening a lot more than people are led to believe within their games these days. DICE uses it a lot with their games; ever wonder why a GTX 980 can run BF1/Battlefront II at pristine graphics? It's because they do some of the processing on their servers first, before it is sent to you. They even talk about how that was a big deal in BF1.

We definitely have the technology and the power to do full cloud-based services for gaming and the like. Really, you just need the market to get there. I don't know why it hasn't been brought to market yet; maybe a lot of companies are waiting for 5G cellular to take over. (Tangent topic: your home internet cable/DSL/etc. will most likely be replaced with a 5G hotspot/router/something, as it has the bandwidth, but not yet the latency, for a mass market of computer users.) With that you'd get your W10 everywhere, your Google Now all the time, and your connected Apple on every device, even your terminal laptop running a lightweight OS. All very possible.

Please remember that I speak speculatively; I follow technology trends and am part of the industry, but I don't know everything :)
 
With multiple cores we are limited by Amdahl's law.
Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours using a single processor core, and a particular part of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20 times (1/(1 − p) = 20). For this reason, parallel computing with many processors is useful only for highly parallelizable programs. https://en.wikipedia.org/wiki/Amdahl's_law
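The quoted example can be checked directly. A minimal sketch of the standard Amdahl formula, S(n) = 1 / ((1 − p) + p/n), using the same p = 0.95 from the 20-hour example above:

```python
def amdahl_speedup(p, n):
    """Theoretical speedup with n processors when fraction p of the
    work is parallelizable (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# The 20-hour example: p = 0.95 (19 of the 20 hours parallelize).
print(amdahl_speedup(0.95, 8))        # modest gain with 8 cores
print(amdahl_speedup(0.95, 1_000_000))  # approaches, never reaches, the cap
print(1.0 / (1.0 - 0.95))             # the cap itself: 1/(1 - p)
```

Even with a million processors, the speedup stays just under 20x, while going from 8 to 16 cores on such a workload buys far less than doubling. This is exactly why core counts only help highly parallelizable programs.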

High processor speed will lose to multiple cores if the work is highly parallelized, and high processor speed will win against multiple cores if it is not.

The real problem is that memory speed progression has been very slow, and that is why there is so much use of branch prediction and prefetch in x86. If we had much faster memory, the processor would calculate numbers faster than Prime95 small FFT. Processor speeds are great now, and the real obstacle to increasing them is heat with the materials we have now; carbon nanotubes are stable up to 750 °C. Software compiling for Intel and AMD is done well now and is not a problem; it is the lack of hardware material progress.
 
With multiple cores we are limited by Amdahl's law.

High processor speed will lose to multiple cores if the work is highly parallelized, and high processor speed will win against multiple cores if it is not.

The real problem is that memory speed progression has been very slow, and that is why there is so much use of branch prediction and prefetch in x86. If we had much faster memory, the processor would calculate numbers faster than Prime95 small FFT. Processor speeds are great now, and the real obstacle to increasing them is heat with the materials we have now; carbon nanotubes are stable up to 750 °C. Software compiling for Intel and AMD is done well now and is not a problem; it is the lack of hardware material progress.

Dolk's Law states that for every law created in computers, its logical sense can be applied in all other fields. --Dolk 9/99/2099

I feel like this was just a randomly strung-together paragraph that was supposed to be a bigger post. Amdahl's law can have its logic applied to other sectors of computing. If you take the same logic and, say, increase the front end or the memory, the same result applies: we require the rest of the system to catch up before the sector we are looking at can be sped up further. Memory has scaled well, but the cost of implementation has not really changed. It still takes a lot of area to implement, and it suffers the same issues as the rest of the CPU: temperature, and working within a pseudo-quantum environment. You are right to say that carbon nanotubes can be stable at extreme temperatures, but so can silicon; you can use silicon in a lot of different ways if you adjust parts of its development process. Integrating carbon/graphene-based nanotubes has been a huge problem: their destabilization temperature is roughly the same as the temperature required to pattern silicon. Basically, as you are laying down your silicon layers, your nanotubes have become a mucky mess. This is a big sector being worked on, but I'm not aware of any landslide findings that will allow for production.
 