Nvidia Fermi and REAL TIME ray tracing!!!

I love that hair demo, can't wait for that level of model development to be properly implemented into games :eek:
 
Well, even CPUs can do RT in real time... I don't get the big deal. I didn't hear what resolution they were using, and that will affect FPS. When I looked into RT, it wasn't made clear whether caching helped performance. What is clear is that the number of cores drives RT frame rates: according to an Intel demo, when you double the cores at the same res, the FPS doubles as well.
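A minimal sketch of why that scaling holds (assuming nothing about Intel's actual demo): each pixel's ray is independent of every other pixel's, so a frame divides cleanly across however many cores you have. The one-sphere scene and the trace_pixel() helper here are invented purely for illustration.

```cpp
#include <cstdio>
#include <cmath>
#include <chrono>
#include <thread>
#include <vector>

const int W = 640, H = 480;
std::vector<float> image(W * H);

// Toy per-pixel trace: one ray through the pixel, one unit sphere at (0,0,-3).
float trace_pixel(int x, int y) {
    float u = 2.0f * x / W - 1.0f;
    float v = 2.0f * y / H - 1.0f;
    float dx = u, dy = v, dz = -1.0f;                   // ray direction
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    dz /= len;                                          // only dz matters below
    float dc = -3.0f * dz;                              // dot(dir, sphere centre)
    float disc = dc * dc - 8.0f;                        // |centre|^2 - r^2 = 8
    if (disc < 0.0f) return 0.0f;                       // miss: background
    return 1.0f / (dc - std::sqrt(disc));               // hit: shade by 1/distance
}

// Each thread renders its own band of rows; no locking is needed because
// no two threads ever touch the same pixel.
void render_rows(int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < W; ++x)
            image[y * W + x] = trace_pixel(x, y);
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                                  // fallback if unknown

    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    int band = H / (int)n;
    for (unsigned i = 0; i < n; ++i) {
        int y0 = (int)i * band;
        int y1 = (i + 1 == n) ? H : y0 + band;          // last band takes the rest
        pool.emplace_back(render_rows, y0, y1);
    }
    for (auto& t : pool) t.join();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    std::printf("%u threads rendered %dx%d in %lld ms\n", n, W, H, (long long)ms);
    return 0;
}
```

Doubling the thread count roughly halves the wall time here for the same reason it doubled FPS in the demo: there is no shared state to fight over.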
 
Didn't Nvidia just say not to expect real-time ray tracing?


Heh, 0.6 fps and 0.2 fps, I suppose you could call that real time, sort of. They aren't even known to be running on Fermi cards. It could just as easily be someone wanting to put out some good GF100 publicity. I find this even less believable than Charlie.

Plus, a YouTube video of another video site? Why?
 
I thought they did, but I can't find any links.

Is a frame every 1.6 seconds real time?
 
lol, I don't think so, Bob... even though Intel used four boxes with 4 cores per box for their real-time RT of Q:ET. Considering how much more powerful a GPU is compared to a CPU, there still seem to be some things a GPU just can't do well enough, no matter how hard they try.

I can only imagine Intel now with four boxes each using hex-core CPUs; that would be cool.
 
I thought they did, but I can't find any links.

Is a frame every 1.6 seconds real time?

A frame every 1.6 seconds is fast enough to call the program interactive. Compare it to a CPU render that might take several minutes (or hours) every time you move the camera angle or change a light source or whatever, and it's a very real advantage.
 
Oh yes, huge advantage, slightly interactive, but is it "real time"? That was the question.
 
I'd like a definition. Ideally from the people that are saying "real time ray tracing".
 
A frame every 1.6 seconds is fast enough to call the program interactive. Compare it to a CPU render that might take several minutes (or hours) every time you move the camera angle or change a light source or whatever, and it's a very real advantage.

Well, comparing videos then, since that is all we have: Intel is ahead of NV in interactive RT.

If having Quake 3 Arena converted to RT and being able to play the game like you would any other is not real time enough, then I don't know what is. While this next link covers Quake Wars, it also has links to videos/pics of Quake 3 Arena ray traced.
http://software.intel.com/en-us/articles/quake-wars-gets-ray-traced/

Add to that, Intel has also demoed RT on Larrabee at a past IDF.
http://www.techpowerup.com/68545/Quake_4_run_Ray-tracing_Enabled_on_Intel_Larrabee.html

So NV's RT is interactive right now, but so far it looks to be behind Intel's.

**
Real-time RT for me means at least 24 FPS at a decent res (i.e. 1280x1024).
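For concreteness, a quick back-of-the-envelope using the 24 FPS bar above and the roughly 1.6-second frame time quoted earlier in the thread:

```cpp
#include <cstdio>

int main() {
    const double target_fps  = 24.0;   // the "real time" bar stated above
    const double demo_frame_s = 1.6;   // seconds per frame quoted in this thread

    double budget_ms = 1000.0 / target_fps;    // ~41.7 ms allowed per frame
    double demo_fps  = 1.0 / demo_frame_s;     // ~0.625 fps actually delivered
    double shortfall = target_fps / demo_fps;  // ~38x too slow for that bar

    std::printf("budget at 24 fps: %.1f ms/frame\n", budget_ms);
    std::printf("demo rate: %.3f fps (%.0fx short of real time)\n",
                demo_fps, shortfall);
    return 0;
}
```

By that bar, the demo is interactive but roughly 40x short of real time.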
 
I've been really impressed with Nvidia's software progress in GPGPU. ATI is slacking a bit there.

That's because ATI's focus is on games, not GPGPU functionality. Ideally, the perfect company would sit between ATI and Nvidia on most of their business practices.
 
That's because ATI's focus is on games, not GPGPU functionality. Ideally, the perfect company would sit between ATI and Nvidia on most of their business practices.

I just think that's a really big mistake. As AMD and ATI build hybrid CPU/GPUs, they'll need that GPGPU expertise. There's no way their next-gen GPU can simply ignore it the way this generation did. There's just so much software that can benefit.
 
I'd say it's less that they're ignoring it and more that they're waiting for OpenCL to gain a larger foothold. That may well be a mistake, as OpenCL isn't likely to gain that foothold without someone actively pushing it hard, the way Nvidia has with CUDA.
 