
CPUs eliminating discrete graphics cards?

I just like the fact that Intel is scaring Nvidia a bit. As soon as Intel started talking up the possibilities of all-CPU graphics, Nvidia came out and announced their next-gen mobile CPU/GPU chip to fight with Larrabee and Atom.

Nvidia talking to VIA about their new Isaiah processor (similar to Atom) and integrating with them also has some merit. I wouldn't be surprised if Nvidia buys that company in the future if Intel keeps up the pressure in this area.

Well, to be honest, I doubt Nvidia is scared, but it will definitely keep them on their toes and possibly drive them to make better products faster.

Don't count IBM out of this either; they have released some amazing CPUs in the past, with Cell BE being one step ahead of Larrabee/Atom and Nvidia's new mobile chip.

Plus, like I said before, this could be huge for console gaming, not just PCs. If PCs don't start getting better titles that are only available for the PC (there are still a ton of great titles, but you know what I mean), I may stick to just consoles for gaming in the future and buy a sub-notebook like the Asus Eee PC.

Nvidia and Intel teaming up could probably make a CPU/GPU combo that could ray trace an entire game in a short amount of time, but I doubt that will ever happen :)
 
Don't count IBM out of this either; they have released some amazing CPUs in the past, with Cell BE being one step ahead of Larrabee/Atom and Nvidia's new mobile chip.

I have found myself wondering why IBM doesn't enter this race with Cell/BE; none of the other companies have a chip that can touch it. From what I have read, and going purely on a "speed" perspective, Larrabee will only be doing 2.5GHz at 45nm, and I think that was supposed to be at 150W. The Cell does 6GHz at 65nm on just over 1.3V (and I think 4GHz at around 0.8V), and they just brought it down to 45nm in February (no idea on the new performance figures).

Is IBM just not able to get the licensing for the instruction sets to do it, or is there some other reason behind it?

I just find it odd that IBM has a chip that does this stuff already and, by the numbers, does it better than what the other three companies are coming out with, yet somehow doesn't want to go head to head with them. I am sure there is something that I am missing.
 
I remember reading in a magazine a year or so back about the idea of placing the GPU as a chip itself onto the motherboard, to help solve heating issues.

But this topic looks somewhat promising. It is just more of Intel's way of trying to show off its cores' abilities.

^^ Perhaps this is part of why AMD bought ATI... they know the trend going forward, so they too are already combining integrated graphics to boost lower-end cards.

Now, NVIDIA doesn't have the CPU backing, so I could see that being true, or more likely, Intel buying out NVIDIA if it came to that.

I agree.
 
Clock speed in itself does not mean much, and we do not know enough about Larrabee to compare the two architectures.

You are correct; we know what the Cell can do (well, up to 65nm at least; I don't know if much information has been released yet on the newer 45nm parts), but we are still mostly in the dark as to Larrabee's abilities.
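Just to put the "clock speed alone doesn't tell you much" point in concrete terms, here is a toy peak-throughput calculation. The core counts, SIMD widths, and clocks below are made-up placeholders, not actual Cell or Larrabee specs:

```python
# Rough illustration of why clock speed alone says little: peak throughput
# also depends on core count and vector (SIMD) width per core.
# All figures below are hypothetical placeholders, not real chip specs.

def peak_gflops(cores, simd_lanes, ops_per_lane_per_cycle, clock_ghz):
    """Theoretical single-precision peak in GFLOPS."""
    return cores * simd_lanes * ops_per_lane_per_cycle * clock_ghz

# A narrow, high-clock design vs. a wide, lower-clock design (made-up numbers):
narrow = peak_gflops(cores=8,  simd_lanes=4,  ops_per_lane_per_cycle=2, clock_ghz=4.0)
wide   = peak_gflops(cores=24, simd_lanes=16, ops_per_lane_per_cycle=2, clock_ghz=2.0)

print(f"narrow/high-clock design: {narrow:.0f} GFLOPS peak")
print(f"wide/low-clock design:    {wide:.0f} GFLOPS peak")
```

In that made-up example the wide design wins on peak throughput despite running at half the clock, which is why the GHz figures by themselves don't settle the comparison.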
 
Yeah, but what I mean is a removable chip that you could also upgrade. Basically a dual-processor board, only the GPU is the other processor, thereby eliminating the video card itself.

It could work, but then do they make upgradeable memory for it too, or do you just buy a board with however much you want of whatever type (do you want 512MB or 1GB, GDDR3, GDDR4, or someday GDDR5, and so on)?

Really, by the time you go through all that, it would probably just be easier to buy an all-in-one card like we have now.

However, to put a point on the good side for it, it may make some upgrades easier. If, for example, you wanted to move from a G80-based system to a G92, you could just swap GPUs. If you already have your 512MB of your favorite GDDR3 installed on the board, then you are ready to go. This could make simple upgrades like that cheaper and more cost effective for the companies to produce, since they would no longer need to put all of the supporting parts on every card they make.
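Just to make that drop-in-swap idea concrete, here is a toy sketch of the compatibility check. Every part name and attribute is hypothetical; it only models "same socket + same memory type = upgrade works":

```python
# A toy model of the socketed-GPU idea discussed above: the board carries its
# own graphics memory, so an upgrade only has to match socket and memory type.
# All names and attributes here are hypothetical.

from dataclasses import dataclass

@dataclass
class GpuModule:
    name: str
    socket: str
    memory_type: str        # memory type the chip's controller supports

@dataclass
class GraphicsBoard:
    socket: str
    installed_memory_type: str
    installed_memory_mb: int

def upgrade_ok(board: GraphicsBoard, new_gpu: GpuModule) -> bool:
    """A drop-in upgrade works if socket and already-installed memory both match."""
    return (board.socket == new_gpu.socket
            and board.installed_memory_type == new_gpu.memory_type)

board = GraphicsBoard(socket="G-Socket-1", installed_memory_type="GDDR3",
                      installed_memory_mb=512)
new_chip = GpuModule("G92-class", socket="G-Socket-1", memory_type="GDDR3")

print(upgrade_ok(board, new_chip))   # True: swap the chip, keep the memory
```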

I suppose that such a system could have its good points and bad points.

Heh, next thing you know there will be talk of a universal architecture for boards that lets you stick whatever CPU, NB/SB, GPU, and memory (both system and graphics) you want on them. You just get the chipset that supports what you want, to make your board that type, and go from there. That would be odd but interesting to think about. Imagine what that would do for someone listing their hardware.
 
A separate socket for the GPU, and its own GDDR RAM slots... sounds nice. It would put a lot in the hands of the motherboard manufacturers, though.

Also, would this constrain the manufacturers? I don't know how often they would change "sockets", seeing as they don't have anything like that right now. How long would a socket for a GPU last?
 
Well, if Nvidia can give us a video card that will bottleneck a new 8-threaded Nehalem architecture or the AMD equivalent, then I have no problem with keeping discrete GPUs. Right now it takes a quad CrossFire setup or a quad SLI setup to truly bottleneck the best Intel has to offer.

All this talk is coming around because Intel believes CPU power will start outscaling GPUs so fast, through increased cores/threads, that it will just be a waste to have discrete GPUs.

While I believe it could happen, we'll have to see how fast Nvidia keeps improving. If they can make a GPU that will bottleneck a 64-core CPU four years from now, I have no problem keeping them around.
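To put rough numbers on the bottleneck argument, here is a toy model where CPU and GPU work overlap each frame and the slower side limits the frame rate. All timings and scaling factors are made up for illustration:

```python
# Back-of-the-envelope bottleneck model: if CPU and GPU work overlap per frame,
# the slower of the two sets the frame rate. All numbers are made up.

def frame_rate(cpu_ms_per_frame, gpu_ms_per_frame):
    """Frames per second when the slower side is the limit."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Hypothetical GPU-bound game today:
print(frame_rate(cpu_ms_per_frame=8.0, gpu_ms_per_frame=16.0))          # 62.5 fps, GPU-limited

# If CPU throughput scales 8x with more cores but the GPU only doubles:
print(frame_rate(cpu_ms_per_frame=8.0 / 8, gpu_ms_per_frame=16.0 / 2))  # 125 fps, still GPU-limited
```

In this made-up case the GPU stays the limiting side even after the CPU picks up an 8x core advantage, which is roughly the situation Nvidia would need to maintain for discrete cards to stay relevant.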

@ motherboard configs: I doubt they will want to change the standard style of mobo design, so that will be another hurdle for all-CPU graphics. But hey, Intel is a huge producer of mobos, so it could happen. I would just hate to see Nvidia and Intel making two separate types of motherboard configurations, and ATI/AMD possibly making a third. That is something I don't want to see. (Nvidia blocking SLI is bad enough.)

@ IBM and Cell/BE: their market share just isn't big enough to try and compete against Intel and AMD/ATI on a large scale in the consumer market, and I really doubt they would want to. But if Sony says to IBM, "hey, we want another Cell/BE-type chip," they will be dying to make it. I really hadn't considered why they weren't more into this, but it makes sense. If I were IBM, no way would I want to compete with Intel.

The more cores we get, the more relevant and viable integrated graphics will become, and that market is massive in and of itself.

It should also be noted that Intel has pumped up their infrastructure by a significant amount recently. They just retooled an old fab for a new process and are currently building two giant new fabs. They are in gear to pull this off at very realistic prices.
 
Larrabee vs. Fusion! I think AMD could bring 256 stream processors onto the Shanghai die, give us drivers, and we'll get 'er done!

Being able to share the objects and world geometry with the stream processors through cache would be a killer system. The only thing going to the GPU would be a stream of pixels.
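For anyone wondering what "a stream of pixels" from software ray tracing actually means, here is a bare-bones sketch: one hard-coded sphere, one ray cast per pixel, and the finished pixels streamed out one at a time. It is purely illustrative, nothing like a real engine:

```python
# Minimal flavor of CPU-side ray tracing: for every pixel, cast a ray into the
# scene and decide what it hits; only the finished pixels leave the CPU.
# Single hard-coded sphere, no shading, ASCII output -- purely illustrative.

WIDTH, HEIGHT = 64, 48
CENTER = (0.0, 0.0, -3.0)   # sphere center in front of the camera
RADIUS = 1.0

def hit_sphere(direction):
    """Ray from the origin along `direction`: solve |o + t*d - c|^2 = r^2."""
    oc = tuple(0.0 - ci for ci in CENTER)                # origin minus sphere center
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - RADIUS * RADIUS
    return b * b - 4.0 * a * c >= 0.0                    # real roots mean the ray hits

def render():
    """Yield one character per pixel -- the 'stream of pixels' the display gets."""
    aspect = WIDTH / HEIGHT
    for y in range(HEIGHT):
        for x in range(WIDTH):
            u = (2.0 * (x + 0.5) / WIDTH - 1.0) * aspect   # simple pinhole camera
            v = 1.0 - 2.0 * (y + 0.5) / HEIGHT
            yield '#' if hit_sphere((u, v, -1.0)) else '.'

stream = render()
for _ in range(HEIGHT):
    print(''.join(next(stream) for _ in range(WIDTH)))
```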
 
At 32nm, 256 stream processors doesn't sound far-fetched, and since IBM is testing 32nm now, it sounds doable by the time Fusion and Bulldozer come out, though more likely in 2010, unfortunately. Their driver team seems to have done a good job recently, so I am sure they could handle that as well.

I am more concerned about Intel's Larrabee being finished in time with drivers all in place.
That should happen before the next-gen consoles are out; otherwise it is going to be hard to push devs to develop ray-tracing games for the top end of a declining PC platform.
Without ray-tracing engines/games it will be just like another Nvidia/ATI card.

No doubt this is the future; the question is when. If Intel sets its mind on something, it will get done. Just like when MS set their eyes on the console market, the question wasn't whether they would succeed, but how many generations it would take them.
Despite Intel's vast resources, they can't buy time, and if they come late it will take another five years.
 
A separate socket for the GPU, and its own GDDR RAM slots... sounds nice. It would put a lot in the hands of the motherboard manufacturers, though.

Also, would this constrain the manufacturers? I don't know how often they would change "sockets", seeing as they don't have anything like that right now. How long would a socket for a GPU last?

That, and cooling would become a more significant issue.
 
A GPU socket is something I can't see happening, not that I wouldn't love to throw a second Ninja into my gaming case for the GPU and OC the hell out of it fanless.

It would require ATI and NV to sit down and sort out the details, or even Intel if it happens a year from now. Judging by what I have seen from replaceable notebook graphics so far, the chances are slim, and the odds of them coming to an agreement are even lower.
If it came down to it, which would you choose: a slot where you can use any VGA card, or locking yourself to one vendor with the socket?
This might break up the market and make things more complicated for everyone, from engineers through channels to customers.
I would like it, and would even give up all my slots for it and pull a network cable to the bedroom for the gaming rig, but I can't see it happening. :-/
 
I'm very interested to see how the heat generation and overclocking are going to turn out on the new Nehalem and Shanghai architectures.

If CPUs start catching up to graphics power and the heat generation is decent without overclocking, laptops will definitely catch up to desktops very quickly in the gaming department. Laptop sales are already way ahead of desktop sales, and that trend will definitely explode if integrated graphics start scaling much faster.

Pretty soon I could see desktops just going obsolete unless you're overclocking/gaming, which is pretty much where we're at now. It seems like with every lower-nm process we go through, the stock temperatures get better and better, with lower and lower VID and base voltages.
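That lower-voltage point is basically the standard dynamic power relation, P ≈ C × V² × f. Here is a rough sketch with made-up capacitance and clock numbers, holding everything except the core voltage constant:

```python
# Why a lower VID at a new process node matters so much for heat: dynamic
# switching power scales roughly with capacitance * voltage^2 * frequency.
# The capacitance and clock values below are made-up placeholders, and
# leakage power is ignored entirely.

def dynamic_power_watts(switched_cap_nf, voltage, clock_ghz):
    """P ~ C * V^2 * f, dynamic switching power only."""
    return (switched_cap_nf * 1e-9) * voltage ** 2 * (clock_ghz * 1e9)

old_node = dynamic_power_watts(switched_cap_nf=20.0, voltage=1.35, clock_ghz=3.0)
new_node = dynamic_power_watts(switched_cap_nf=20.0, voltage=1.10, clock_ghz=3.0)

print(f"old node: {old_node:.0f} W, new node: {new_node:.0f} W "
      f"(~{100.0 * (1.0 - new_node / old_node):.0f}% less from the voltage drop alone)")
```

Since voltage enters squared, even a modest VID drop cuts switching power by roughly a third in this example; real parts also change capacitance and add leakage, so treat it as a ballpark only.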

I'm very interested to see what the thermals are going to be like on Larrabee.

Laptop overclocking might even be viable in the future if we make a breakthrough somewhere with heatsinks. Plus, most of the heat in gaming laptops comes from high-RPM HDDs (which are being replaced by generally cooler SSDs) and GPUs. Get rid of both of those and you have already reduced the thermals by a large amount. I mean, if you mainly just had to worry about cooling the CPU, they would definitely be able to design more efficient exhaust mechanisms.

Anandtech reviewed a newer laptop recently (I can't remember which), but it had very easily accessible CPU/HDD/GPU slots from the back (bottom). That is an awesome step forward for the enthusiast and for easily interchangeable laptop parts.
 