
Realtime GPU speeds??


stang8118

Member
Joined
May 28, 2004
I was just wondering what the 6800 series GPU speed would be compared to an Intel CPU. Like about how many GHz does it equal? Anyone know? It's not that important to me, but I was just wondering...
 
That would be like me asking what's the diff between the motor in my car and the motor powering my fridge.

2 completely diff things; I don't think they are even remotely comparable.
 
hUMANbEATbOX said:
That would be like me asking what's the diff between the motor in my car and the motor powering my fridge.

2 completely diff things; I don't think they are even remotely comparable.

First of all, there is no motor in your car, it is an engine. Second, a motor doesn't 'power' your fridge, electricity does.

I am sure they are comparable somehow, I mean they both process data. They just do it so differently that I don't believe it is a straight comparison.

It would be more like an HDD and an FDD: they both store data, they just do it differently.
 
Seabee said:
First of all, there is no motor in your car, it is an engine. Second, a motor doesn't 'power' your fridge, electricity does.

First of all, yes, motors can power cars. Engine/motor, same diff.

go to this link:
and check out what they list as the "engine"
Engine: 1.3 Liter Renesis 13B Twin Rotor Rotary Motor

thanks.

Secondly, a fridge does have a motor in it. Have you ever heard of an ELECTRIC MOTOR!!! BUHHHHHH..... that was my original point, TWO COMPLETELY DIFF THINGS. Do you think that a fridge is powered by electricity alone? Like if I run a current through a metal box, VOILA, it's a fridge?? Thanks for coming out tho.. :rolleyes:

**edit** Here are some basics on how an electric motor works: http://electronics.howstuffworks.com/motor.htm read up, mmk?
 
Putting aside the semantic blathering going on in this thread...
I really don't think the question has much meaning, since they are designed for such different things. I mean... do you want to know how fast a GPU can compress a zip file or...? =)
Given that they're built on comparable process technologies (0.13 µm, 0.11 µm, 0.09 µm, etc.), I would guess you could say they're about equally advanced, technologically. I'm sure ATi/nVidia and AMD/Intel are both doing as much as they can to get more processing power out of a given 0.13 µm piece of silicon, etc. I don't think either industry is particularly far behind the other.
 
There was just an article released about a company that has software that uses 6800s to do audio processing by converting the audio data into video data and using the 6800's massive computing power to process it somehow. I'll look for it. It actually made a comparison to a P4. If you do a search it may be on this forum that I saw it.

This thread:
http://www.ocforums.com/showthread.php?t=326760

May not be what you are looking for, but the reference to a GPU doing CPU work is there.

"BionicFX has announced Audio Video EXchange (AVEX), a technology that transforms real-time audio into video and performs audio effect processing on the GPU of your NVIDIA 3D video card, the latest of which are apparently capable of more than 40 gigaflops of processing power compared to less than 6 gigaflops on Intel and AMD CPUs."
 
In matrix calculations, GPUs can be more than ten times faster than the fastest Pentium 4s.
 
CamH said:
In matrix calculations, GPUs can be more than ten times faster than the fastest Pentium 4s.

haha, in all that mess somebody knew what they were talking about! :thup:

With some special software that is "currently being developed" (honestly, it shouldn't take that long since all they have to do is use an existing graphics standard), the GPU and video RAM can be used to take over work that would normally run on the CPU. According to the company developing the software, a Radeon 9800 XT had the same processing power as over 10 Pentium 4s. Whether this is true in practice remains to be seen, but in an effort to answer your question as accurately as possible:
CamH said:
In matrix calculations, GPUs can be more than ten times faster than the fastest Pentium 4s.
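
For anyone wondering why matrix math in particular is the GPU's strong suit, here's a tiny C sketch (CPU-only, just to show the shape of the problem): every output element is computed independently of every other one, which is exactly what lets a card with 16 pixel pipelines crunch many of them at once.

```c
/* Why matrix math suits a GPU: each output element C[i][j] depends only
 * on its own row of A and column of B, independent of every other output.
 * A GPU can hand each element to a separate pipeline; a CPU has to walk
 * through them one after another. */
#include <stdio.h>

#define N 4

static void matmul(const float A[N][N], const float B[N][N], float C[N][N])
{
    for (int i = 0; i < N; i++)          /* on a GPU, each (i, j) pair */
        for (int j = 0; j < N; j++) {    /* would be one "pixel"       */
            float sum = 0.0f;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

int main(void)
{
    float A[N][N], B[N][N], C[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = (float)(i + j);
            B[i][j] = (i == j) ? 1.0f : 0.0f;   /* identity matrix */
        }

    matmul(A, B, C);
    printf("C[2][3] = %f (should equal A[2][3] = %f)\n", C[2][3], A[2][3]);
    return 0;
}
```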
 
LOL, ok so now when PCI Express becomes widespread, people are just going to buy 3 or 4 high-end video cards, run this so-called software, and have a computer that blows away the fastest supercomputer currently on the planet. Things that make you go Hmmmmm. (Arsenio Hall)
 
Capt Fiero said:
LOL, ok so now when PCI Express becomes widespread, people are just going to buy 3 or 4 high-end video cards, run this so-called software, and have a computer that blows away the fastest supercomputer currently on the planet. Things that make you go Hmmmmm. (Arsenio Hall)

Well in theory it sounds like a great idea, but when you consider the fact that high-end video cards cost over $500, a motherboard to support 4 x16 (or more) PCI-E slots will probably cost about $200, and it will probably need at least two CPUs and RAM for each (let's say $500 per CPU and about $500 for RAM), that's a pretty frickin' expensive computer. I mean, $4000 for just a basic computer, not including drives or anything, is kinda ridiculous. Although I'm sure somebody would pay it so....
 
ok, you can all quote me on this...

I will personally (I may have to ask for help with some stuff because I have zero graphics programming experience outside of BASIC) create a program to run nearly entirely (it IS impossible to have a program that does not at least initialize using the CPU) off of my own Radeon 9800 Pro-->XT 128MB card. If I successfully do this by the end of October then you all owe me a pat on the back...

I'm not saying that any of you are saying it isn't possible, but I know exactly how to do it and so I'm gonna prove it...
 
And a pat on the back I will give you indeed! :) Programming with the GPU is a subject I've been interested in ever since I found GPGPU, but nothing really has appeared yet (other than a benchmark and raytracer I found). Good luck with your project!!

JigPu
 
Wow, this is some great info. I didn't know GPUs were that far ahead of CPUs. I would assume their architecture is a lot different, since Intel/AMD haven't come out with anything close to what a GPU can put out.
 
This could lead to something big.... Just imagine (I know this is far-fetched) ATi and nVidia dominating AMD and Intel. That is, IF you can somehow convert the GPU power into CPU power (calculating polygons into equations and all that hullabaloo). This seems pretty interesting.
 
The problem is that GPUs are not very good at general computational tasks. As far as I know, they are only insanely fast at matrix calculations. The average CPU will beat them in nearly anything else.
 
Precisely. GPUs are definitely designed with parallel/matrix operations in mind and CPUs with serial ones in mind (though MMX/SSE/3DNow! brings matrix math to the CPU). Just look at them... If you take the X800, it's got 16 pipes (capable of doing work in parallel), each with 2 72-bit vector, 2 24-bit scalar, and 1 96-bit texture unit to work with. That's a LOT of stuff you can do at once! The problem is that it's fairly slow (500ish MHz), so it will get completely beaten if you can't keep the pipelines busy with independent parallel instructions. If you need something done serially, you're really going to be hurting on a GPU.
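
Here's the same point in plain C, just as an illustration (nothing GPU-specific in the code): the first loop is a dependency chain where each iteration needs the previous result, so extra pipelines can't help; the second loop's iterations are all independent, which is the shape of work those 16 pipes are built for.

```c
/* Serial vs. parallel-friendly work.  The first loop is a dependency
 * chain (each step needs the previous result), so more pipelines don't
 * help.  The second loop's iterations are completely independent, which
 * is exactly what a GPU (or SSE/3DNow!) is built to chew through. */
#include <stdio.h>

#define N 1000000

static float a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    /* Serial: iteration i depends on iteration i-1.  CPU territory. */
    float running = 0.0f;
    for (int i = 0; i < N; i++)
        running = running * 0.5f + a[i];

    /* Parallel-friendly: every c[i] could be computed at the same time. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] * 2.0f + b[i];

    printf("running = %f, c[0] = %f\n", running, c[0]);
    return 0;
}
```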

JigPu
 