
Processor, memory may marry in future computers


DavidJa

Member
Joined
Feb 11, 2006
Location
Folding&home
On Tuesday, Intel showed off an 80-core processor at its developer forum taking place in San Francisco this week, and one of the prominent features of the chip is that each core is connected directly to a 256KB memory chip through a technology called Through Silicon Vias, or TSV.

The memory wedded to the processor cores could constitute the entire memory needed for a computer, Intel CTO Justin Rattner told News.com in an interview during the Intel Developer Forum. TSV could be used in a variety of chips, not just the 80-core monster. As a result, computer makers, when building a system, would get their memory when they bought their processors from Intel. They would not have to obtain memory chips separately from other companies like they do now.

"You could buy it as a block," he said.
http://news.com.com/Processor,+memo...s/2100-1006-6120547.html?part=dht&tag=nl.e703
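
(For reference, the 20MB figure people question later in the thread is just the article's per-core memory multiplied out; a quick back-of-the-envelope check in Python, assuming 1MB = 1024KB:)

```python
# Back-of-the-envelope check of the article's numbers (assuming 1 MB = 1024 KB).
cores = 80
memory_per_core_kb = 256          # one 256KB memory chip per core via TSV
total_kb = cores * memory_per_core_kb
print(f"{total_kb} KB = {total_kb / 1024:.0f} MB")   # 20480 KB = 20 MB
```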



and more on the TSV stuff

http://www.trusi.com/throughsiliconvias.html
http://news.com.com/Intel+to+unveil+new+way+to+connect+chips/2100-1006_3-5596264.html?tag=nl
 
20MB of RAM is enough for an 80-core PC???

Wow. I guess the hardware trend is going to go into reverse.
 
Neur0mancer said:
20MB of RAM is enough for an 80-core PC???

Wow. I guess the hardware trend is going to go into reverse.

LOL, I was just about to post the same, but I thought, "No, my calculations must be way off; there is no way they could be implying that 20MB is enough."
 
I'd like to see just how big a die with 80 cores AND RAM would be. I mean, 2 cores with 4MB of cache is like, what, 1 sq. inch or so? But I guess this IS early, early prototype stuff... so I'd imagine that, by the time it actually ships, that 20MB would grow to probably 2-4GB (somehow).

EDIT: OK... I've read through those other links that DavidJA gave... I see the size of the core here.


This really seems kind of exciting. To think of the change that will come to personal computing within only ~5 years: 80+ cores... integrated RAM for IMMENSE performance improvements (I would think, anyway... I guess there could be downsides there?)... the replacement of copper conductors in motherboard components with laser/optical communications, which, again, would be a vast improvement in a few ways. Exciting stuff! :)

I wish I had a bar of plutonium so I could fire up my time machine and see these things now! :)
 
But what will this mean for the companies that currently make the memory for our systems (Corsair, GeIL, Crucial)? Not that I'm not wayyy excited about this, but this seems M$-like... Just has me wondering if Intel is pushing a bit much. But that can't be a bad thing?

This also makes me think of the AMD and ATI thing where the CPU and GPU are one core...
Damn... :eek: There's going to be a lot of stuff going on in the next 5 years... ^^^ Well said, jivetrky
 
Is it possible that, at the speeds associated with this kind of memory, it could work with less than we're used to now?

It sounds almost like a glorified cache for the CPU. If it could work at those speeds, maybe we wouldn't need so much physical memory (don't get me wrong, I'd still think we'd need more than 20MB; just curious though).
 
I think it is an interesting idea and it will be fun to see how it plays out. I am not sure if this is the way to go, but hey, time will tell.

~jtjuska
 
Maybe they are counting on storage speed catching up to current RAM speed and partitioning off some space as RAM? Then OSes and applications eventually wouldn't load much of anything into actual system RAM.

Has anyone thought about how you're going to be able to overclock the RAM and the CPU at the same time?

Sounds like R&D for the sake of R&D to me. Will it go somewhere? I say roll some dice.

BTW... what software is going to be able to take advantage of this many cores, and when?
 
Senater_Cache said:
20MB because they are mini cores doing simple operations.
The idea here is super-multithreading / simultaneous calculation followed by merging... think RAID.

That's very possible.

But if that's the case, Intel's CTO was full of crap; or only telling half the story, which is much like being full of crap. Or, by chance, he doesn't understand what's going on.
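
(For anyone wondering what the "simultaneous calculation followed by merging" idea might look like in code, here's a minimal sketch using Python's multiprocessing module; the chunked sum is just a stand-in workload, not anything Intel has described:)

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker plays the role of one "mini core" handling a slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]       # split the work 8 ways
    with Pool(processes=8) as pool:
        partials = pool.map(partial_sum, chunks)  # simultaneous calculation
    print(sum(partials))                          # merge the partial results
```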
 
greenmaji said:
BTW... what software is going to be able to take advantage of this many cores, and when?


I think this may be part of the story also: hardware manufacturers are telling the software people that this is the direction to go in, so hopefully future software is developed with multicore in mind. I think it's really stupid that software written in the last year or so hasn't been written for multicore CPUs... I mean, it is obvious to me that this is the direction we are going.
 
jivetrky said:
I think this may be part of the story also: hardware manufacturers are telling the software people that this is the direction to go in, so hopefully future software is developed with multicore in mind. I think it's really stupid that software written in the last year or so hasn't been written for multicore CPUs... I mean, it is obvious to me that this is the direction we are going.

The only problem with this is that some applications do not lend themselves well to multi-threading at all.
If you search this board, it seems every thread on this subject pulls up a software developer or a programmer who has to lay out the details of why, and it gets glossed over like they never said anything :rolleyes:
Maybe some quotes would do... :bang head
 
greenmaji said:
The only problem with this is that some applications do not lend themselves well to multi-threading at all.
If you search this board, it seems every thread on this subject pulls up a software developer or a programmer who has to lay out the details of why, and it gets glossed over like they never said anything :rolleyes:
Maybe some quotes would do... :bang head


I have 0% knowledge of programming, but I don't understand, logically, how multiple cores can be anything but good, used properly of course. I understand that it would be necessary to keep certain threads together or whatever... but if you have 2 separate processes to do at once, why wouldn't 2 processors be the best way to do it? I mean, an application that is totally linear (I have no examples to give) wouldn't have multiple threads because it has absolutely no need for them. Well, OK, perhaps MP3 players, for instance (just because one is playing right now, so it came to mind). All they do is decode the file; there's no need for multiple threads. But then, perhaps, the visualizations or other "extras" could be done on a separate thread? Now I know, of course, that MP3 decoding doesn't take but 1% of most processors made in the last 5 years, but it's just a quick example.

Seems to me like maybe it is just that the programmers need to learn how to better use multiple cores.

If you (or anyone) could find any quotes or other info on that, I'm quite interested!


EDIT: And also, if certain applications don't work well with multithreading... then they can go on just using one of those 80 cores, right? If there's no need for it, then don't... but in many/most cases, multicore can only increase performance (when properly utilized, I guess).
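
(A rough sketch of the MP3-player split described above, with decoding and the "extras" on separate threads; the two functions are placeholders, not a real decoder or visualizer:)

```python
import threading
import time

def decode_audio():
    # Placeholder for the decode loop.
    for _ in range(3):
        time.sleep(0.1)   # pretend to decode a frame
    print("decode done")

def draw_visualization():
    # Placeholder for the visualizer / "extras" running independently.
    for _ in range(3):
        time.sleep(0.1)   # pretend to render a frame
    print("visualization done")

t1 = threading.Thread(target=decode_audio)
t2 = threading.Thread(target=draw_visualization)
t1.start(); t2.start()
t1.join(); t2.join()
```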
 
Programs, multi-threaded or not, will be run by both cores regardless of what Task Manager tells you.

The program is executed by the OS, and the OS definitely uses both CPUs to do its work.

Multi-threaded programs are, however, better optimized for it.
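
(One way to see that the OS is free to schedule a process on any core: a minimal Python sketch; note the affinity call is Linux-only:)

```python
import os

# Which logical CPUs is this process allowed to run on?
# os.sched_getaffinity() is Linux-only; os.cpu_count() works everywhere.
allowed = os.sched_getaffinity(0)
print(f"This process may run on {len(allowed)} of {os.cpu_count()} logical CPUs: {sorted(allowed)}")
```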
 
jivetrky said:
I have 0% knowledge of programming, but I don't understand, logically, how multiple cores can be anything but good, used properly of course. I understand that it would be necessary to keep certain threads together or whatever... but if you have 2 separate processes to do at once, why wouldn't 2 processors be the best way to do it? I mean, an application that is totally linear (I have no examples to give) wouldn't have multiple threads because it has absolutely no need for them. Well, OK, perhaps MP3 players, for instance (just because one is playing right now, so it came to mind). All they do is decode the file; there's no need for multiple threads. But then, perhaps, the visualizations or other "extras" could be done on a separate thread? Now I know, of course, that MP3 decoding doesn't take but 1% of most processors made in the last 5 years, but it's just a quick example.

Seems to me like maybe it is just that the programmers need to learn how to better use multiple cores.

If you (or anyone) could find any quotes or other info on that, I'm quite interested!


EDIT: And also, if certain applications don't work well with multithreading... then they can go on just using one of those 80 cores, right? If there's no need for it, then don't... but in many/most cases, multicore can only increase performance (when properly utilized, I guess).

http://www.ocforums.com/showthread.php?p=4224936

http://www.google.com/search?num=30...rallel+algorithms+Serial+Fraction&btnG=Search

http://www.cs.cf.ac.uk/Parallel/Year2/section7.html

Efficiency (how effectively the available processors or cores are used) decreases with the lack of parallelism in the program being executed.
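
(Those links basically describe Amdahl's law: if a fraction s of a program has to run serially, speedup on N cores is capped at 1 / (s + (1 - s) / N), so efficiency falls off as cores are added. A minimal sketch; the serial fractions below are made-up illustration values:)

```python
def amdahl_speedup(serial_fraction, cores):
    # Speedup = 1 / (s + (1 - s) / N) for serial fraction s on N cores.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.05, 0.25, 0.50):           # hypothetical serial fractions
    for n in (2, 8, 80):
        speedup = amdahl_speedup(s, n)
        efficiency = speedup / n       # useful work per core
        print(f"serial={s:.0%}  cores={n:>2}  speedup={speedup:5.1f}x  efficiency={efficiency:.0%}")
```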
 
greenmaji said:
http://www.ocforums.com/showthread.php?p=4224936

http://www.google.com/search?num=30...rallel+algorithms+Serial+Fraction&btnG=Search

http://www.cs.cf.ac.uk/Parallel/Year2/section7.html

Efficiency (how effectively the available processors or cores are used) decreases with the lack of parallelism in the program being executed.


A lot of that stuff was right over my head... but it seems to me that what it's talking about is running a single task (algorithm) on multiple cores. I can see how that would be limiting. But I'm thinking more along the lines of separate calculations going to each core, not breaking up a single calculation over multiple cores.

I still think that the problem is just change. Programmers just have to "learn" to properly use the multiple cores to the best of the cores' ability.
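
(What the "separate calculations going to each core" picture might look like, sketched with Python's multiprocessing; the three jobs are arbitrary stand-ins for unrelated tasks:)

```python
from multiprocessing import Pool
import math

# Three unrelated jobs -- independent work that can spread across cores
# without parallelizing any single algorithm.
def count_primes(limit):
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(math.isqrt(n)) + 1)))

def digits_of_factorial(n):
    return len(str(math.factorial(n)))

def sum_of_squares(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:
        jobs = [
            pool.apply_async(count_primes, (50_000,)),
            pool.apply_async(digits_of_factorial, (20_000,)),
            pool.apply_async(sum_of_squares, (1_000_000,)),
        ]
        for job in jobs:
            print(job.get())
```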
 
20MB seems sufficient to me in that kind of setup.

Think about what your RAM does now: it loads stuff for the CPU to access quickly, and they have to talk across a narrow bridge. If your CPU is fast enough to do everything you ask it to do when you ask it to do it, you remove the need to have RAM and all those instruction sets to keep a program running. (Random Access Memory, for those that have forgotten what RAM does.) And RAM -> CPU is SLOW, hence why you need lots of it.


With 20MB of RAM on a CPU with 80 cores, with virtually no lag between RAM and the cores, they are trying to eliminate the bottleneck, and the RAM becomes almost useless because the CPU just does everything it's told when it is told to do it. The 20MB isn't really even RAM; it's more like an instant buffer, like your HDDs have, because it may be possible to overload the 80 cores with instructions... maybe...

Also, if you start reading about parallel CPUs, you see some amazing numbers, like Windows XP Pro running at 1.5MHz with 512K of RAM. That's not a joke either. I forget how many cores they were using, but it wasn't 80; it was much lower.
:drool:

I'm not pretending to truly understand it, or even believe that it's possible and economically viable, but it does pose some very interesting things to think about.
 
jivetrky said:
A lot of that stuff was right over my head... but it seems to me that what it's talking about is running a single task (algorithm) on multiple cores. I can see how that would be limiting. But I'm thinking more along the lines of separate calculations going to each core, not breaking up a single calculation over multiple cores.

I still think that the problem is just change. Programmers just have to "learn" to properly use the multiple cores to the best of the cores' ability.

They are taught how to properly utilize multiple CPUs; SMP is far from new. The thing is, to be truly multi-threaded, the major components of the application need to be broken down and equally distributable across all cores, and the links above describe why there is a point of diminishing returns in doing so.
 