
GTX10xx, Polaris and Vega discussion.

That isn't the part that people would be questioning, M@rk... ;)

I mean, they OC'd their card to 2.1GHz stable, but they turned Vsync on. A water-cooled 1080 will probably cool very well and may reach 2.5GHz stable, but that will probably be the limit. I don't think it's worth getting the card if you plan on overclocking for, let's say, contests and stuff.
 
Who did? Are you posting in the right thread? I don't see vsync mentioned anywhere. In the 3Dmark article nobody enabled vsync. I'm struggling again to understand what you are trying to say, sorry.
 
If 2100MHz is "that easy", it makes people wonder why they would bother gimping it from the factory so much.
 
Who did? Are you posting in the right thread? I don't see vsync mentioned anywhere. In the 3Dmark article nobody enabled vsync. I'm struggling again to understand what you are trying to say, sorry.

I'm talking about what Nvidia showed at the 1080 announcement. They enabled Vsync, so it wasn't at 100% load.
Sorry if you don't understand me; my native language is Russian and I speak Hebrew 90% of the time, so I don't get to practice English much.
 
By your logic here you can say the same about all Maxwell cards...
With reference being a 1000MHz base clock on the 980 Ti, and with overclocking + boost averaging (bell curve) 1475-1525MHz, they are in the ballpark.

The 1080 has a 1600MHz base clock and we are seeing clocks over 2100MHz... but the 2.5GHz number would blow that ~33% out of the water.

I'm talking about what Nvidia showed at the 1080 announcement. They enabled Vsync, so it wasn't at 100% load.
Sorry if you don't understand me; my native language is Russian and I speak Hebrew 90% of the time, so I don't get to practice English much.
your English is better than some native speakers I see post here. :)

I just had no idea what you were referring to in the vsync thing. :)
 
EarthDog said:
your English is better than some native speakers I see post here. :)

I just had no idea what you were referring to in the vsync thing. :)

Thanks! :)
Yeah, the problem with my English is that I don't know a lot of words and can't express myself properly. I also have a problem expressing myself normally (no matter what language), so I can't explain things well :-/.

Btw, I learned my English myself by chatting with people, watching videos, and listening to songs, so I don't really know the things they teach at school like the third verb form and tenses and all that stuff. But do I really need those IRL? I mean, I need a big vocabulary and good grammar, but other than that, do I really need anything else? (Not talking about jobs like writing and all that.)
 
Way off topic... but I'd learn the verb tenses, yes. You are doing well for that kind of informal learning! :)
 
Not even remotely close... a 1000MHz overclock on Maxwell? Which one? Maybe on LN2...

It's about percentages, not the exact frequency gain.
The point is there are huge percentage gains on Maxwell when overclocking (relative to reference speed), and there are (supposedly) similar gains again on Polaris.
 
I did that math already. :)
With reference being a 1000MHz base clock on the 980 Ti, and with overclocking + boost averaging (bell curve) 1475-1525MHz, they are in the ballpark.

The 1080 has a 1600MHz base clock and we are seeing clocks over 2100MHz... but the 2.5GHz number would blow that ~33% out of the water.

If we are seeing the bell curve for Pascal around 2100MHz, then it's the same at ~33%. If we are seeing the 2.5GHz that is rumored, then it smokes it. So... it depends.
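For anyone who wants to check the percentages being thrown around, here's a quick sketch in Python using the clock values quoted in this thread (the 2.5GHz figure is still just a rumor):

```python
# Percentage overclock gain over a reference clock.
# Clock values are the ones quoted in the posts above; 2.5GHz is rumored.
def oc_gain_percent(base_mhz: float, oc_mhz: float) -> float:
    return (oc_mhz - base_mhz) / base_mhz * 100

print(f"980 Ti:   1000 -> 1500 MHz = +{oc_gain_percent(1000, 1500):.0f}%")
print(f"GTX 1080: 1600 -> 2100 MHz = +{oc_gain_percent(1600, 2100):.0f}%")
print(f"GTX 1080: 1600 -> 2500 MHz = +{oc_gain_percent(1600, 2500):.0f}% (rumored)")
```

That prints roughly +50%, +31%, and +56%; the exact percentages depend on which stock clock (base or typical boost) you compare against.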
 
I did that math already. :)

If we are seeing the bell curve for Pascal around 2100MHz, then it's the same at ~33%. If we are seeing the 2.5GHz that is rumored, then it smokes it. So... it depends.

Even so, we have never (to my knowledge) seen GPUs overclock the way Maxwell did.
It's not crazy to think Polaris will overclock the same as, or better than, Maxwell.
And also, to further answer bob's question, it's possible they're keeping speeds down to keep TDP down.
 
Hey guys, I saw a couple of people talking about 14nm vs 16nm for these new cards. I want to clear up some MAJOR misunderstandings about silicon transistor feature size.

First off, NEITHER of these GPUs will feature TRUE 14nm or 16nm transistors. Intel is THE ONLY company that has true 14nm feature size FETs (source_1, source_2). These FETs are actually 20nm at feature size, but are marketed under the 14nm/16nm description. GlobalFoundries (fabbing the 14nm AMD Polaris) and TSMC (fabbing the 16nm Nvidia Pascal) can use these terms because their FETs perform roughly at 14nm quality but have a slightly larger footprint.

What does this mean for users? It's the marketing game all over again. All these GPUs should really be considered 20nm FinFET. Still, since they are FinFETs rather than planar FETs, there are major benefits. The first noticeable benefit is power: the size reduction plus the FinFET structure lowers the threshold voltage and decreases current leakage, which means major power savings. The size reduction also fits more transistors on the die, and it's easier to place transistors where you need them. Since power is reduced, clock speed can be increased (as seen with the OCed 1080).

All in all, this silicon shrink for GPUs has been needed for a long time. The benefits will be there. However, don't get confused by the marketing! It's not true 14nm/16nm; it's closer to 20nm :)
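To put a rough number on the power point above: dynamic switching power scales roughly with C·V²·f, so a lower threshold voltage (and therefore a lower operating voltage) buys real clock headroom. A minimal sketch in Python, with purely illustrative numbers rather than real GPU specs:

```python
# Dynamic switching power scales roughly as C * V^2 * f.
# All numbers below are illustrative assumptions, not real GPU values.
def dynamic_power(c_eff: float, voltage: float, freq_ghz: float) -> float:
    return c_eff * voltage**2 * freq_ghz

planar = dynamic_power(1.0, 1.2, 1.1)   # hypothetical 28nm planar operating point
finfet = dynamic_power(1.0, 1.0, 1.6)   # hypothetical FinFET: lower V, higher clock

print(f"Relative power at +45% clock: {finfet / planar:.2f}x")
```

In this toy example, dropping the voltage from 1.2V to 1.0V lets the clock climb from 1.1GHz to 1.6GHz at roughly the same dynamic power (~1.0x), and the reduced leakage comes on top of that.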
 
Hey guys, I saw a couple of people talking about 14nm vs 16nm for these new cards. I want to clear up some MAJOR misunderstandings about silicon transistor feature size. [...]

Aw shucks, missed it by a nanometer!
 
Hey guys, I saw a couple of people talking about 14nm vs 16nm for these new cards. I want to clear up some MAJOR misunderstandings about silicon transistor feature size. [...]

8nm is still a good improvement :D I heard the silicon days are coming to an end. Interesting... what will they use for future CPUs (when the silicon die cannot get any smaller)? I know silicon will still be around for at least 5 years (probably a lot more).
 
That is a different post altogether. If a lot of people are interested I can summarize the future of silicon and where technology is going in the next ~5-10 years.

It's not exactly an 8nm difference, more like ~2-4nm, since most companies are coming from 22nm planar FETs.
 
That is a different post altogether. If a lot of people are interested I can summarize the future of silicon and where technology is going in the next ~5-10 years.

It's not exactly an 8nm difference, more like ~2-4nm, since most companies are coming from 22nm planar FETs.

Yes, please... link the thread here when you do.
 
That is a different post altogether. If a lot of people are interested I can summarize the future of silicon and where technology is going in the next ~5-10 years.

It's not exactly an 8nm difference, more like ~2-4nm, since most companies are coming from 22nm planar FETs.

Maxwell was 28nm, and AMD GCN GPUs are also 28nm. At least that's what Wikipedia says (referring to Nvidia; I couldn't find any info on the Nvidia website :sly:).

And I would like to learn about the future of silicon and where technology is going in the next ~5-10 years :D
 