
Where are CPUs going in the future?


Sliver

Member
Joined
Jul 1, 2004
With Intel scrapping Presler and moving over to Conroe, it looks like it isn't MHz that's the priority anymore, it's IPC. I want some people's input on whether they think this is good or not. While I'm sure everyone's knee-jerk reaction is going to be "Yes, of course it's good!" I'm going to play devil's advocate and (even though I DO think moving over to IPC is a good thing) point out what's BAD about making IPC the priority.

Firstly, with IPC being the priority, the chips won't need to run at such high clock speeds, so they'll need less power. The Pentium Ms run near room temperature. Cool chips = Good thing.

But there are some bad things. To improve the IPC they're going to need to add features/functions to the chip, and that's going to make the chips bigger. (This is where I start flying by the seat of my pants, because I'm still hazy on the details of exactly what goes into making CPUs, so if someone could fill in the blanks for me I'd appreciate it.) Bigger chips mean fewer chips per wafer, so yields will matter more. This is the reason why a 0.8 micron Pentium and a 0.18 micron P4 are the EXACT SAME SIZE. Don't believe me? Go take a look at this. It's an interview with Bob Colwell, who was Intel's IA-32 Chief Architect from 1992-2000. (In fact, just watch it, it's a good vid.) At almost the three-minute mark in his PowerPoint presentation he shows a slide of all the CPUs that were made while he was working at Intel and how big each of them was. Despite the shrinks in the silicon process, they kept adding things to the chips (like larger caches), so the new chips ended up the same size as the old ones they were replacing.

My point? The chips are going to keep getting bigger, and making them is going to get more expensive. So the consumer is going to end up paying more for them.
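The bigger-die-equals-pricier-chip argument can be put in rough numbers. Here's a sketch in Python using the standard textbook dies-per-wafer estimate and a simple Poisson yield model; the wafer cost, defect density, and die areas below are made-up illustrative values, not real Intel or AMD figures.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough dies-per-wafer estimate: wafer area over die area,
    minus a correction for partial dies lost at the wafer edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2,
                      defect_density_per_cm2):
    """Bigger dies hurt twice: fewer dies per wafer AND lower yield,
    since a random defect is more likely to land inside a larger die."""
    n = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    die_area_cm2 = die_area_mm2 / 100.0
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_cm2)  # Poisson yield
    return wafer_cost / (n * yield_frac)

# Doubling the die area (say, by adding a second core) more than
# doubles the cost per good die, because yield drops too:
small = cost_per_good_die(5000, 300, 100, 0.5)
big = cost_per_good_die(5000, 300, 200, 0.5)
```

With these example numbers the doubled die costs over three times as much per good chip, which is the nonlinear part of "bigger chips cost more."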

Another problem: both companies' hands are kind of tied when it comes to improving the IPC of a chip. There are probably a great number of things that you could take off other parts of the computer and cram into the CPU directly to speed things up. But the problem is, if Intel or AMD does that, it puts the people who are currently making that part out of a job. It also makes upgrades harder in the long run when an improvement can (or worse, needs to) be made.

But these are the only downsides I can come up with, and I still consider them good trades in exchange for better performance, cooler chips, and lower power consumption.

What do you guys think?
 
Sliver said:
With Intel scrapping Presler and moving over to Conroe, it looks like it isn't MHz that's the priority anymore, it's IPC. I want some people's input on whether they think this is good or not. While I'm sure everyone's knee-jerk reaction is going to be "Yes, of course it's good!" I'm going to play devil's advocate and (even though I DO think moving over to IPC is a good thing) point out what's BAD about making IPC the priority.

Firstly, with IPC being the priority, the chips won't need to run at such high clock speeds, so they'll need less power. The Pentium Ms run near room temperature. Cool chips = Good thing.

But there are some bad things. To improve the IPC they're going to need to add features/functions to the chip, and that's going to make the chips bigger. (This is where I start flying by the seat of my pants, because I'm still hazy on the details of exactly what goes into making CPUs, so if someone could fill in the blanks for me I'd appreciate it.) Bigger chips mean fewer chips per wafer, so yields will matter more. This is the reason why a 0.8 micron Pentium and a 0.18 micron P4 are the EXACT SAME SIZE. Don't believe me? Go take a look at this. It's an interview with Bob Colwell, who was Intel's IA-32 Chief Architect from 1992-2000. (In fact, just watch it, it's a good vid.) At almost the three-minute mark in his PowerPoint presentation he shows a slide of all the CPUs that were made while he was working at Intel and how big each of them was. Despite the shrinks in the silicon process, they kept adding things to the chips (like larger caches), so the new chips ended up the same size as the old ones they were replacing.

My point? The chips are going to keep getting bigger, and making them is going to get more expensive. So the consumer is going to end up paying more for them.
The Athlon XP tended to have a "higher IPC" (I hate saying that, because IPC is really, really variable) than the P4s, and also a much smaller die size than P4s built on similar process technology. I believe the same is true of the new K8 cores versus the new P4s.

Another problem. Their hands are both kind of tied when it comes to improving the IPC of a chip. There are probably a great number of things that you can take off other parts of the computer and cram them into the CPU directly, and speed things up. But the problem is, if Intel or AMD does that, that puts the people who are currently making that part are out of a job. And that also makes upgrades harder in the long run when an improvement can (or worse, needs to.) be made.
I agree with you here.
 
Actually, that is not what the CPU makers are going for. The reason Intel went for a more P-M-like CPU, IMO, is because it runs cool, and running cooler makes it easier to put multiple cores on a package. The future is in increasing the number of threads a CPU can run, so we are talking about multiple cores, which in the beginning, as you can see with the X2 and Pentium D, are more like two cores glued to each other. But in the future Intel plans to include one or two full cores like we have today plus eight or more simplified cores, much like the Cell design the PlayStation 3 will use.

I don't think the problem is the CPU cores getting bigger and more expensive; it's getting less performance for the money you pay. Dual core and multithreading have been around for a while now, and little software has been developed to take advantage of them.

Intel's plan for future CPUs was laid out last year at its spring developer forum, and this year's forum is coming up, so we'll see if they have changed their plans.
 
I think it's interesting that the "MHz myth" is on a direct course to the exact opposite of its original meaning. The buzz now is that MHz doesn't matter anymore. But of course it matters. As long as chips have clocks, MHz is the easiest way to scale performance.

Chipmakers can make designs with fewer pipeline stages and more execution units, which (on paper) increases IPC but usually forces a lower clock speed. But once that design is done, it is not easy to change. They can make small tweaks here and there to add a couple hundred MHz between steppings easily enough, but adding new units or pipe stages is a major undertaking. So MHz is still absolutely critical to performance; it just won't be the all-important number used to sell chips.

Beyond that and in addition to multicore, expect to see more of what Intel calls the Ts (or what I call the tease). These are features they add to the chips to bring a very specific new functionality to the table. HT, EM64T, VT, AMT, etc. They of course are not useful to everyone, but the combination of the technologies makes chips more appealing. Do people who buy cars _need_ sunroofs, leather, heated seats, and shopping bag tie-downs? Of course not, but they either make people feel better about what they're buying or add a tiny amount of incremental, but unique, functionality. This is certainly not a new concept for chips, but it will be more prevalent in the next 10-15 years.

With that in mind, the question mark is not really over the chips. Intel and AMD will continue to make more powerful chips--through new designs, faster speeds, and multiple cores--and they won't jack up prices on the mainstream chips (they would love to, but they can't afford to; classic supply and demand comes into play with $500 chips). The real gray area now is software. Will software developers retool their operations to deal with multiple cores and new chip functionality? Can they increase quality and expand features without sacrificing performance? I don't think anyone has an answer yet. There are a lot of really good new programming languages out there that make testing and multithreading much easier, but their adoption by game, driver, and A/V processing developers is slow.
 
I think that financially it would be cheaper for them to keep doing what they were doing with ramping up clock speeds, or what I was hoping was happening with IPC, than just cramming more cores into the CPU. What the hell is the point of shrinking the process if you're just going to cram more cores into the processor? And even if yields remain consistent with each shrink (which they won't), that'll STILL make the processor more expensive to make in the end. I mean, it's grade school math: two cores per chip means twice the silicon per chip. Exactly how much smaller do the cores need to get before 8 cores per chip becomes feasible?

And then there's programming so each core gets used which is another can of worms entirely.

I was hoping that the dual core thing was just something they did because they were backed against a wall. But if IPC doesn't become the priority, if cramming more cores on is the new school of thought, then everyone's screwed.
 
We are just now starting to see the adoption of multi-core desktop processors, and it appears that is the route we are heading down. As systems advance, I believe we will start to see more and more chips like the Cell processor, in fact I believe both Intel and AMD are putting research into similar designs slated for 2009-2012 releases. After that however, if we still stay on silicon, things are going to get wonky.
My best guess would be that after we get multi-core chips pretty much standardized, and before we move onto something beyond silicon, we are going to see some pretty wild asymmetrical multicore chips. I would imagine the best designs (limited by my understanding of microchip design and the current development of the field, both of which will probably be on a completely different tangent by the time we reach such a point) would feature a primary core or cores that would receive incoming signals, delegate them out to individual specialized cores based on the type of operation, and bridge the cores to each other with an ultrafast FPGA that could reprogram itself with preset or even adaptive settings depending on the nature of the task. Ultimately, I suspect that the large static caches will be replaced with FPGAs as the technology advances. However, this is probably at least another 5-10 years off, and the industry runs on inertia, so it will be a while before we see some major differences in the nature of microchips.
 
AMD's and Intel's focus has always been to create chips that do work faster and cheaper, and that is what they will focus on till they turn blue in the face. They go for whatever is easier to make their chips faster. That is all they strive for: faster processors. Hz doesn't have much to do with speed for anything; the earth rotates at about 1.16E-5 Hz, and my monitor refreshes at 85 Hz. For processors it isn't about some "IPC" or "MHz," it is about the amount of work done over a given period of time.

One thing that gets on my nerves is how Intel and AMD run around saying how great dual cores are and how they are the future, as if that is what they want. They don't want to go down that path, but they have no choice. Multi-core is not the ideal solution; that would be to just make single cores faster (and by faster I don't mean MHz!!!).
 
The memory controller gave a massive boost. If the chipset people would cooperate with AMD (which they won't), they could cram the entire chipset into the CPU. Imagine how much faster it would be then. And as for chips becoming more like the Cell: imagine how expensive those chips will be when they get up to 8 cores per chip. Now imagine how much heat they will produce and how much power they will consume. They're looking at the EXACT SAME KIND OF PROBLEMS THEY WERE WHEN THEY TRIED TO RAMP UP CLOCK SPEEDS.

I think that the BEST thing they can do, which will be painful at first but best for EVERYONE in the long run, is to make IPC the priority and phase out x86 entirely. It's a long way away, but what I was able to glean from Colwell's lecture is that x86 is too broken to fix, and it's going to have to happen sooner or later.
 
Sliver said:
If the chipset people would cooperate with AMD (which they won't), they could cram the entire chipset into the CPU. Imagine how much faster it would be then.

It's possible, but the chips would have ~1500 pins and be considerably more difficult (read "expensive") to test. I doubt it would bring a huge performance gain. The latency of the southbridge is pretty insignificant compared to HDD or network latencies and access times, and it's not particularly important for stuff like audio, USB, FireWire, etc.


and phase out x86 entirely. It's a long way away, but what I was able to glean from Colwell's lecture is that x86 is too broken to fix, and it's going to have to happen sooner or later.

Well, no modern chip implements x86 directly in hardware. They're all RISC-like inside and break down x86 macro-ops into internal micro-ops. Sure, it would be faster if you could write code in micro-ops, but those are very specific to each chip design and change somewhat often. x86 is really more of a software limitation than a hardware one.
 
Sliver said:
There are probably a great number of things that you could take off other parts of the computer and cram into the CPU directly to speed things up. But the problem is, if Intel or AMD does that, it puts the people who are currently making that part out of a job. It also makes upgrades harder in the long run when an improvement can (or worse, needs to) be made.

I don't see upgrades (in the end-user sense) as much of a problem. People have complained about the Athlon 64's lack of support for DDR2, but it hasn't really been a problem. The next component mooted to be moved on-die by AMD is the PCI-E controller, and that doesn't appear to be much of a problem either. Considering that expansion bus standards tend to endure for a long time, and provided that it allows enough lanes, the processors sporting a PCI-E controller will probably be obsolete before PCI-E is. As far as design faults necessitating improvements, I agree with you, but this is equally problematic for any more complicated processor design, not just those integrating motherboard chipset components.

Sliver said:
The memory controller gave a massive boost. If the chipset people would cooperate with AMD (which they won't), they could cram the entire chipset into the CPU. Imagine how much faster it would be then. And as for chips becoming more like the Cell: imagine how expensive those chips will be when they get up to 8 cores per chip. Now imagine how much heat they will produce and how much power they will consume. They're looking at the EXACT SAME KIND OF PROBLEMS THEY WERE WHEN THEY TRIED TO RAMP UP CLOCK SPEEDS.

I think that the CPU manufacturers are desperate to produce ever more powerful chips (even if there isn't software that takes advantage of this), and right now it seems that Intel has hit the speed ceiling with NetBurst, and AMD nearly has with Hammer. Extra cores are a way of making chips more powerful, and it seems that right now it is pretty much the only way. As far as single-die designs go, there will come a point where either (at any given process size) it simply won't be economical to make a die with more than a certain number of cores, or (as you said) it becomes too difficult to cool. However, they haven't reached that point yet, and no doubt in the future process improvements will lead to higher-clocked designs, and maybe the marketing emphasis will switch back to pushing clock speed as important.

Sliver said:
I think that the BEST thing they can do, which will be painful at first but best for EVERYONE in the long run, is to make IPC the priority and phase out x86 entirely. It's a long way away, but what I was able to glean from Colwell's lecture is that x86 is too broken to fix, and it's going to have to happen sooner or later.

This was what Ed was saying about 9 months ago, and he was citing Montecito Itaniums as a possible and sensible basis for future desktop processors. But then Intel dropped hardware x86 support from Montecito in favor of a not-very-good pure software solution. The ability to run x86 apps at decent speed would be important, and there would be a transition period of years if a new architecture were brought into desktop machines. Then Intel announced more details of its next-generation microarchitecture, and Conroe looks like a very promising chip. Itanium is also notoriously difficult to program for (not that I have any personal experience). x86 may be broken, but IMHO two things stand in the way of its replacement: the lack of a viable alternative architecture, and the fact that difficulties in programming and lack of experience could reduce software performance on a new architecture so significantly that it might negate any gains in raw performance.
 
NookieN said:
no modern chip implements x86 directly in hardware. They're all RISC-like inside and break down x86 macro-ops into internal micro-ops. Sure, it would be faster if you could write code in micro-ops, but those are very specific to each chip design and change somewhat often. x86 is really more of a software limitation than a hardware one.
Elaborate.
 
Wow most of this talk is over my head and I usually consider myself a pretty intense hardware geek. Someone find me a book about microprocessor architecture!!!

My $.02 - What I can tell is what has already been stated: we're hitting a MHz ceiling and the CPU makers are unable to ramp up their speeds a whole lot more. The fastest AMDs have stuck around 2.6 GHz and the fastest Intels have stuck just under 4 GHz for quite a while now. They've really got 2 options:
  • Improve processor architecture with shrinks and new features
  • Stuff more cores onto chips

The 2nd option is what everyone's going for right now, even though we KNOW it's not the be-all-end-all solution for CPUs. We're implementing hardware changes without letting the software programmers write software to take advantage of them. Look at the Athlon 64 - yeah, it's 64-bit, but it's not twice as fast as a 32-bit processor. Just like how a dual-core CPU isn't twice as fast as a respective single-core.
 
Sliver said:
Elaborate.
This is actually done not just by modern x86 processors (Pentium 3/4, Athlon, etc.) but also by the PPC 970 (aka G5) with the PPC instruction set.

Modern x86 processors have a sort of front end and back end. The front end receives x86 machine code. Since x86 is a CISC architecture, some of the instructions are fairly complex. CISC went out of fashion a while ago, and for good reason: if you put every single complicated thing in hardware, you end up with a big, complicated chip. Large die size makes it expensive, and complexity makes it hard to squeeze clock speed out of it. So modern x86 chips don't have an execution unit for each type of instruction. They usually have only 3 types of pipelines: ALUs (for integer operations), AGUs (for memory-related operations) and FPUs (for floating point math). The front end has an instruction decoder that takes the x86 instructions and breaks each one up into "micro-ops." These micro-ops are then sent to the execution units in the back end. For a discussion of how this works in the PPC 970 (and a bit of how it works in the P4), check out this article:
http://arstechnica.com/articles/paedia/cpu/ppc970.ars/4
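The front-end/back-end split described above can be modeled as a toy in a few lines of Python. Everything here is invented for illustration -- the instruction format, the micro-op tuples, and the pipeline names are stand-ins; real decoders and micro-op encodings are chip-specific and far more complex.

```python
# Toy model of the decode step: one CISC-style x86 instruction with a memory
# operand gets cracked into RISC-like micro-ops, each targeting one pipeline type.

def decode(instruction):
    op, dst, src = instruction
    micro_ops = []
    if src.startswith("["):  # memory source operand -> needs an address calculation
        micro_ops.append(("AGU", "load", "tmp0", src))  # load into a temp register
        src = "tmp0"
    micro_ops.append(("ALU", op, dst, src))             # the actual integer op
    return micro_ops

# "add eax, [ebx]" becomes a load micro-op for the AGU, then an add for an ALU:
uops = decode(("add", "eax", "[ebx]"))
```

A register-to-register instruction like `add eax, ecx` would decode to a single ALU micro-op, which is why simple instructions run fast on these designs.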
 
kiyoshilionz said:
We're implementing hardware changes without letting the software programmers write software to take advantage of them. Look at the Athlon 64 - yeah, it's 64-bit, but it's not twice as fast as a 32-bit processor. Just like how a dual-core CPU isn't twice as fast as a respective single-core.

Dual-core at least lets you speed up some tasks by 50-75%. Some algorithms of course cannot be parallelized, but given time software will become increasingly threaded. 64-bit really brings zero performance gain to the vast majority of users. The main advantage, of course, is that a 64-bit chip can directly address more than 4GB of memory. The x86-64 instruction set adds more general-purpose registers, and that's what helps most of the 64-bit apps. But 64-bit never really had anything to do with speed or power; 32-bit processors have been able to work with 64- (or 80-, or 128-) bit data for over a decade.
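That 50-75% range falls straight out of Amdahl's law, which caps total speedup by the fraction of the task that stays serial. A quick sketch in Python (the 80% parallel fraction is just an assumed example value):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the serial fraction limits total speedup
    no matter how many cores you throw at the problem."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If 80% of a task can be threaded, a second core gives about a 67% speedup,
# squarely in the 50-75% range. The ceiling is 1/0.2 = 5x, no matter the core count.
two_cores = amdahl_speedup(0.8, 2)
```

Even an absurd number of cores can never beat that 5x ceiling while 20% of the work stays serial.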
 
I'd say Sun is leading the way to where processors will shortly be with the T1. Of course, by the time Intel or AMD get that far the architecture will differ greatly, but the point will still be running massive numbers of threads on one processor. Programming will also shift in that direction, and I would imagine that soon a majority of code will be threaded (even in places where the gains are fairly small).
 
NookieN said:
Dual-core at least lets you speed up some tasks by 50-75%. Some algorithms of course cannot be parallelized, but given time software will become increasingly threaded. 64-bit really brings zero performance gain to the vast majority of users. The main advantage, of course, is that a 64-bit chip can directly address more than 4GB of memory. The x86-64 instruction set adds more general-purpose registers, and that's what helps most of the 64-bit apps. But 64-bit never really had anything to do with speed or power; 32-bit processors have been able to work with 64- (or 80-, or 128-) bit data for over a decade.
Yes, and multithreading is going to, like everything else, give diminishing returns. Let's give a generous estimate of a 50% improvement in games from a second core. A third core is only going to add 25%, a fourth core only another 12.5%, and so on and so forth. And the more threads there are, the more performance you lose as the CPUs have to talk to each other to get the job done. Now, if a game programmer could write their games to detect the CPU count and scale accordingly (be it one core in a desktop or 32 in a massive server), that would be impressive. But considering that the software people are only thinking 6 months to a year ahead, I doubt that's going to happen.
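For what it's worth, the "detect the CPU count and scale accordingly" structure is easy to sketch with nothing but Python's standard library. The function names and the chunked per-frame workload are invented for illustration; note also that in CPython a real CPU-bound gain would need processes rather than threads because of the interpreter lock, so this only shows the shape of the idea.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def render_chunk(chunk):
    # stand-in for per-frame work that can run independently on each chunk
    return sum(x * x for x in chunk)

def process_frame(data):
    # scale the worker count to whatever machine we land on:
    # one core on a cheap desktop, 32 on a massive server
    workers = os.cpu_count() or 1
    chunk = max(1, len(data) // workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(render_chunk, chunks))

total = process_frame(list(range(1000)))
```

The result is the same regardless of how many workers the machine reports, which is the point: the code adapts to the core count instead of hard-coding it.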
 
What you are stating is basically an interpretation of Amdahl's Law: every algorithm has limits to how much it can be parallelized.

I don't think the implications of that are too bad for PCs. It's worse news for Sony, since they're pinning their hopes on multicores in the PS3. PC CPU speeds do continue to increase (slowly), but with a console you're pretty much stuck until the next generation.

Today most games are practically limited by video, and heavy parallelization there is already common. For the parts that are CPU-limited, game developers will probably add increased parallelism when the time comes. I think a growing portion of the gaming community would argue, though, that the industry has to tackle some of its creativity problems before the threading issue.
 
I think the PS3 is going to show the world exactly how well parallelism can work for video games.

And another problem, as has been said time and time again, is that multithreading is not easy. Making IPC the priority probably won't give as big a boost as another core, but you can be certain the programming community is going to thank you for it.
 