
When will GPUs surpass CPUs?


Fanman

Registered
Joined
Aug 29, 2001
Location
Mayne Island B.C. CANADA!!
All this recent NV30 hype has got me thinking. Since CPUs double in speed every 12-18 months, and video cards double every 6-9, how long do you think it will be before GPUs are outperforming CPUs? Or do you think they'll hit a plateau?
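A rough sketch of that doubling-rate arithmetic in Python (the doubling periods are assumed midpoints of the ranges above, and the starting ratio is arbitrary):

CPU_DOUBLING_MONTHS = 15.0  # assumed midpoint of "12-18 months"
GPU_DOUBLING_MONTHS = 7.5   # assumed midpoint of "6-9 months"

for months in range(0, 61, 12):
    cpu_gain = 2 ** (months / CPU_DOUBLING_MONTHS)
    gpu_gain = 2 ** (months / GPU_DOUBLING_MONTHS)
    # how far the GPU pulls ahead, relative to where both started
    print(f"after {months:2d} months the GPU has gained {gpu_gain / cpu_gain:.1f}x on the CPU")

At those rates the gap itself doubles roughly every 15 months, so whether GPUs "surpass" CPUs is mostly a question of where you put the starting line.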
 

DeadPool

Disabled
Joined
Jun 28, 2002
Location
Salina, Kansas
Well, speaking from what knowledge I have (which isn't much), I'd say game manufacturers would have to build their game engines more around the graphics card instead of having the CPU doing the calculations. So in my opinion, at this time, they'd only go as fast as the current processor.
 

OC-Master

Member
Joined
Jul 14, 2001
Location
Edmonton, Alberta
Fanman said:
All this recent NV30 hype has got me thinking. Since CPUs double in speed every 12-18 months, and video cards double every 6-9, how long do you think it will be before GPUs are outperforming CPUs? Or do you think they'll hit a plateau?

Technically, GPUs have already surpassed CPUs.

The NV30 (GeForce FX) is so complex that its transistor count is equivalent to more than two Pentium 4 Northwoods. Combine that with the fact that this GPU has 8 (!) pipelines, which means that, technically, it's like 500MHz x 8 = 4000MHz.

Basically, a Celeron at 4GHz is about as fast as the NV30. I'm talking raw performance here, if such a comparison is even possible.
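The arithmetic behind that comparison, as a quick Python sketch (clock and pipeline count taken from the post above; one pixel per pipeline per clock is an idealized assumption):

clock_mhz = 500   # NV30 core clock as claimed above
pipelines = 8     # pixel pipelines as claimed above

# best case: every pipeline retires one pixel per clock
fill_rate = clock_mhz * pipelines
print(f"idealized fill rate: {fill_rate} Mpixels/s")  # -> 4000

Note that the 4000 figure is millions of pixels per second of fixed-function work, not a 4000MHz general-purpose clock, so the Celeron analogy only holds for embarrassingly parallel pixel work.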


OC-Master
 

imgod2u

Member
Joined
Jun 8, 2002
Location
Isla Vista, CA
It would depend. You can't exactly compare GPUs directly to modern CPUs, because one is a dedicated rendering machine with limited functionality (i.e. it can only do certain things) while the other is an all-around logic processor that can do just about anything. Why aren't AI calculations done on the GPU? Because they can't be; GPUs only work by taking in triangle data from the CPU, then shading and rendering it. They can't do all the stuff that comes before that. So there's really no way to compare the two, as they do different things.
As far as rendering goes, if you want to compare raw FP numbers, then a modern GPU is far superior to any CPU out there.
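For a sense of scale, a peak-FLOPS sketch with made-up but era-plausible figures (every number here is an assumption for illustration, not a measurement):

# hypothetical CPU: 2.8GHz, SIMD unit doing 4 single-precision FLOPs per cycle
cpu_peak = 2.8e9 * 4

# hypothetical GPU: 500MHz, 8 pipelines x 4 vector components, multiply-add = 2 FLOPs
gpu_peak = 500e6 * 8 * 4 * 2

print(f"CPU peak: {cpu_peak / 1e9:.1f} GFLOPS")  # 11.2
print(f"GPU peak: {gpu_peak / 1e9:.1f} GFLOPS")  # 32.0

The GPU wins on raw throughput, but only for the narrow class of work its pipelines are wired for.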
 

ninthebin

Member
Joined
Mar 24, 2002
Location
Liverpool, UK
As imgod2u said, a GPU is basically an extension of the CPU that handles graphical computations...

Dunno how accurate I am on this, but is a VPU a true GPU that's independent of the CPU? Or is it something else?
 

imgod2u

Member
Joined
Jun 8, 2002
Location
Isla Vista, CA
VPU stands for "vector processing unit". It's the name of the unit that does the calculations for vectors in the GPU. Most vectors contain (I think) 4 components: x, y and z coordinates plus a direction. These 4 components are then operated on by a single instruction. In essence, it's a part of the GPU (at least, of some GPUs). The current GF4 uses dual vertex shaders which contain some type of vector processing inside them, but I don't think they're separate units. The 3Dlabs P10 uses separate VPUs, as do the PS2 and the upcoming NV30.
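A tiny sketch of that 4-wide idea in Python, with one operation (standing in for one vector instruction) touching all four components at once:

def vec4_add(a, b):
    # a and b are (x, y, z, w) tuples; a single call covers all 4 components
    return tuple(x + y for x, y in zip(a, b))

position = (1.0, 2.0, 3.0, 1.0)  # x, y, z plus a 4th component
offset = (0.5, 0.0, -1.0, 0.0)
print(vec4_add(position, offset))  # (1.5, 2.0, 2.0, 1.0)

A hardware VPU does the same thing in one clock across dedicated lanes instead of a Python loop.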
 

Jeff Bolton

Member
Joined
Sep 15, 2001
Location
Middle Peninsula Virginia
Do you guys think it'll ever get to the point where, instead of a prefabricated video card, we'll be able to buy a bare AGP card where we can install our own GPU and memory, much like a mobo, except it still fits into the AGP slot like the cards of today? That'd be a huge jump toward the total customization of computer components.

jeff
 

h20link

Member
Joined
Jan 12, 2002
Location
down at fraggle rock
That would be cool.

I think video card makers would also have to look at how much game developers can actually pack into a game to demand that kind of processing power. I think GPU speed is rapidly outpacing the graphics that most game studios can develop...
 

imgod2u

Member
Joined
Jun 8, 2002
Location
Isla Vista, CA
The problem is, in order to ensure that these massive GPUs and memory chips work at their rated speeds, manufacturers have to sort of "weld" (solder) them onto the card itself. Having a slot would significantly reduce the achievable signal propagation speed between the GPU and the rest of the card.
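Some rough numbers behind that point (assuming a signal travels at about half the speed of light in a PCB trace):

TRACE_SPEED = 1.5e8  # m/s, assumed ~0.5c in FR4

for clock_mhz in (300, 500, 1000):
    period_ns = 1000.0 / clock_mhz
    reach_cm = TRACE_SPEED * period_ns * 1e-9 * 100
    print(f"{clock_mhz} MHz: one clock is {period_ns:.2f} ns, signal covers ~{reach_cm:.0f} cm")

At 500MHz a signal only covers about 30cm per clock, and a connector adds path length and impedance discontinuities on top of that, which is why the memory gets soldered right next to the GPU.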