
Conspiracy theory: Could Turing be a die shrink re-badge of Pascal?


Vishera (Member, joined Jul 7, 2013)
I've seen this tossed out by a few people, only one or two of whom could be considered journalists. The idea that Nvidia has been suspiciously tight-lipped about the actual gaming performance of the 2000 series has raised some eyebrows on forums I frequent, and with two YouTubers I watch, one far more reputable than the other. I have a video from the smaller guy right here, and it's what has me scratching my head the most:


As you can see in the video, at 4K high settings the 2080 is only neck and neck with the 1080 Ti, not double the performance as Nvidia has claimed. So what are they saying? That offloading the AA to the Tensor Cores allows you to have twice the frame rate with AA on? Of course it does; the actual GPU isn't doing anything related to AA anymore, so it's no longer affected by it.
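Just to put rough numbers on that reasoning, here's a back-of-the-envelope sketch. The frame times are made up purely to illustrate the point, not measured from anything:

def fps(frame_ms):
    # frames per second from a per-frame cost in milliseconds
    return 1000.0 / frame_ms

# Hypothetical 4K numbers, not measurements:
frame_ms_with_aa = 25.0   # whole frame with AA done on the shader cores
aa_cost_ms = 5.0          # assumed share of that frame spent on AA

print(fps(frame_ms_with_aa))                # ~40 fps, AA on the shaders
print(fps(frame_ms_with_aa - aa_cost_ms))   # ~50 fps if AA moves off to separate hardware

Whether that kind of offload gets you anywhere near "double" depends entirely on how big a slice of the frame you assume AA was eating in the first place.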

The whole thing just seems really sketchy to me, and it wouldn't be the first time we've gotten a rebadge from either side. When Jensen says Turing is a 10-year project coming to fruition, I think he means the Tensor Core and Ray Tracing Core aspect of it. And it's not like this would harm them at all; they might catch some **** for it, but AMD doesn't have anything coming out until next year at the earliest. Shrinking Pascal and just slapping the Tensor Cores and Ray Tracing Cores onto it would cost the least while allowing them to charge the most. If this isn't a brand-new architecture, we'll likely see the real one next year.

What do you guys think?
 
The number of transistors and a plethora of other factors point to absolutely not. The remnants of the previous gen are included on a smaller die, likely hitting 1080-and-up performance, with the added features, functionality, and performance when they're actually used. Even without the added features you're looking at a performance increase, but it still remains to be seen once folks get their hands on production cards. Do I see those features becoming another PhysX, or usable computing power on existing code? Ehhh. The smaller fab process and higher clocks will still net performance in most scenarios. The extra transistors may prevent max clocks due to the thermal profile, but again, that remains to be seen. I'm not jumping just yet.
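For what it's worth, the public die figures back this up. These are rough numbers as I remember them from the spec sheets, so treat them as approximate; the ratios are the point:

# Approximate figures from memory, not official quotes:
pascal_gp104 = {"transistors_billion": 7.2,  "die_mm2": 314}   # GTX 1080
turing_tu104 = {"transistors_billion": 13.6, "die_mm2": 545}   # RTX 2080

print(turing_tu104["transistors_billion"] / pascal_gp104["transistors_billion"])  # roughly 1.9x the transistors
print(turing_tu104["die_mm2"] / pascal_gp104["die_mm2"])                          # roughly 1.7x the die area

# A straight shrink-and-rebadge would keep the transistor count about the same
# and make the die smaller; here both go up by a lot.

That doesn't tell you how much of the new silicon helps in today's games, but it's hard to call it the same chip.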
 
Thank you for the short education on who to avoid on YouTube for misinformation and a lack of credible facts.
 
I didn't hear the guy say it was a Pascal refresh, though. All I got from the video was to take Nvidia's performance claims with a grain of salt, based on the lack of info regarding benchmark settings. I didn't hear one unsubstantiated claim, just conjecture clearly labeled as such and the reasoning behind it. Did we watch the same video???
 
Then it's the OP's fault for posting the vid and then typing an unrelated title/post? Maybe that was HIS theory and he used that video to support it (though it doesn't - correlation/causation and all). :p

Either way, it's baloney. :)
 
Yeah, the video seemed unrelated to the post content. The kid in the video (when did I start referring to people as kids??? Jeebus, I'm old.) seemed to have his act together. I wouldn't make him my first pick for tech videos, but that's just personal preference on my part.
 
Absent better info, I'm assuming the traditional parts of the core, the ones implementing pre-Turing features, are substantially the same as before. So if you're only looking at those, it may appear to be effectively a refresh. Of course, the extra new sauce on top of that is what makes the difference, and we can't simply consider it a shrink and rename.

In a parallel with the CPU world, the biggest gains generally come not from optimising the existing features but from adding new ones.
 
Yeah, the video seemed unrelated to the post content. The kid in the video (when did I start referring to people as kids??? Jeebus, I'm old.) seemed to have his act together. I wouldn't make him my first pick for tech videos, but that's just personal preference on my part.

Especially since he's 30. :rofl:

I mostly picked the video because it illustrated the lack of a performance increase, or at least that it's a small one.
 
On high-end cards NV would never do this; on the low-end ones, yes. Look at the specs of the GT 6x0 series: there's one version using a Kepler core versus Fermi. Then compare specs to the GT 7x0 series (mostly the 730s and lower), which uses the same two cores, still Fermi and Kepler. The odd-ball is the GT 740; I didn't dig much, but based on the memory width and core count it's not a Kepler core. One thing I notice: there seems to be no talk anymore about video decoding engines from NV or ATI. I got the GT 630 Kepler for the fact that it had the newer VP engine back then, versus needing to buy a $100+ video card. Have we gotten to the point where the video decoding engine doesn't matter, I mean NV vs AMD vs Intel?
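To illustrate the kind of check I mean, here's a quick sketch. The spec numbers are rough, from memory, so double-check them; the method is just comparing core count and memory bus width across badges:

# Rough specs from memory for cards sold under nearly the same names (approximate):
gt630_fermi  = {"core": "GF108 (Fermi)",  "cuda_cores": 96,  "mem_bus_bits": 128}
gt630_kepler = {"core": "GK208 (Kepler)", "cuda_cores": 384, "mem_bus_bits": 64}
gt730_kepler = {"core": "GK208 (Kepler)", "cuda_cores": 384, "mem_bus_bits": 64}

def looks_like_same_silicon(a, b):
    # crude test: identical core count and bus width suggests a rebadge
    return a["cuda_cores"] == b["cuda_cores"] and a["mem_bus_bits"] == b["mem_bus_bits"]

print(looks_like_same_silicon(gt630_kepler, gt730_kepler))  # True: same chip, new badge
print(looks_like_same_silicon(gt630_fermi,  gt630_kepler))  # False: different silicon, same name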
 
Then it's the OP's fault for posting the vid and then typing an unrelated title/post? Maybe that was HIS theory and he used that video to support it (though it doesn't - correlation/causation and all). :p

Either way, it's baloney. :)

It was actually Luke Lafreniere (probably spelled that wrong) from LTT that suggested it.
 
I notice there seems to be no talk anymore about video decoding engines from NV or ATI. I got the GT 630 Kepler for the fact that it had the newer VP engine back then, versus needing to buy a $100+ video card. Have we gotten to the point where the video decoding engine doesn't matter, I mean NV vs AMD vs Intel?

It's not a feature I've cared about myself, so I don't know who has what support in that area. I'm wondering if they all have an adequate level of support for current popular codecs (or the parts thereof that matter), such that there just isn't anything left to add, for now?
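If anyone does care to check, one quick way on a Linux box is to dump the decoder profiles the driver exposes. A minimal sketch, assuming the stock vdpauinfo tool is installed and that its output lists the profile names this greps for:

import subprocess

# vdpauinfo prints the VDPAU decoder profiles the NVIDIA driver exposes
out = subprocess.run(["vdpauinfo"], capture_output=True, text=True).stdout

for codec in ("MPEG2", "H264", "HEVC", "VP9"):
    print(codec, "listed" if codec in out else "not listed")

Intel and AMD expose the equivalent through VA-API, so if they all cover the popular codecs these days, that would explain why nobody talks about it much anymore.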

It was actually Luke Lafreniere (probably spelled that wrong) from LTT that suggested it.

He's not working directly on tech content any more, but I consider him generally knowledgeable. I'd be interested in seeing the context in which that happened. Like, was it a "what if..." question?
 