
nVidia Kepler GTX700 (600?) series info here ->

Exactly lol, but who knows? Nvidia could be skipping the 600 series and jumping to the 700 just because the 600 would be a small bunny hop behind or ahead and wouldn't last long.

In which case, wouldn't that make it Maxwell instead of Kepler? If that is the case, then I can imagine a large jump like that, because Fermi is pretty burned out. But if Nvidia released a card with performance like the above, we're in for a GPU civil war; it would go against all the marketing techniques that have been holding us back since the 8800GT. Imagine a world where a video card isn't outdated for at least 2 1/2 years... it sounds like Armageddon to me. (That's a little pun, because it's rumored the generation code name after Maxwell is "Armageddon")

:bs:

Just because they may be skipping the 6xx series of naming does not mean that they are skipping the architecture that would have gone with it.

This is going to be Kepler, not Maxwell.
 

This is true; I will still stand by my feeling that the chart is as fake as the one I edited in Paint. Whether it's a different architecture or a 980/990, they would shoot everything marketing is about in the face by releasing something with that kind of improvement in scale.

Not saying it can't or won't happen, just not very likely in my opinion.
 
I agree that it is unlikely to be as good as the graph shows. But it would be sweet.

For AMD's sake, I hope it is not true. They need to be able to compete in at least one market.
 
I would expect the latest and greatest from nVidia to beat the 7970. For one thing, it's being released well after the 7970. Being nVidia, it will cost more also! The cycle repeats..
 
26 Jan 12 // Latest info from a group of websites including Xbit, Fudzilla, etc.
Kepler GK104 looks like it will have a 256-bit bus, 2GB of RAM, a 225W TDP, and an expected $299 price tag. It looks like it is going to be the GTX560/570 successor.

Other sites estimate that there will be 768 shaders to achieve the expected 2 teraflops, but it is unclear to me whether that speculation refers to the GK104 (midrange) or to the full thermal envelope of the final top Kepler chip.
**Some estimates are projecting as many as 768 CUDA cores, or "well above 2 teraflops" of raw performance.

Touted technology advances include 64-bit floating-point operations at 4 times Fermi's capacity per watt. Another would be a virtual memory space shared between the CPU and GPU.

Additionally, I noticed this Wikipedia entry; not sure what to think of it, or where that info came from other than the supplied Tom's Hardware link: http://en.wikipedia.org/wiki/GeForce_600_Series

http://news.softpedia.com/news/Nvidia-GK104-Kepler-GPU-May-Be-Priced-at-299-230-248594.shtml

http://semiaccurate.com/2012/01/23/exclusive-and-the-nvidia-keplergk104-price-is/

** http://www.fudzilla.com/home/item/25636-nvidia-28nm-gk104-gpu-specs-revealed

Additionally, here is some decent math based on TDP and die-shrink area, used to figure out a possible ratio of clock speed vs. number of shaders:

Look at the reference models of the GTX570 and the 448-core GTX560 Ti: all frequencies and VRAM sizes match, except one has 32 fewer CUDA cores, which accounts for it consuming 9W less power at typical load. Doing the math reveals that the die of the 448-core GF110 consumes 126W of power while the 480-core GF110 consumes 135W, and the 384-core GF114 ends up down at 115W after taking its higher frequency into account. A shrink to 28nm takes 51% of the area out of a 40nm die, which is approximately proportional to the wattage load, so a 28nm 384-core die would end up at a 62W draw. This would make a 28nm version of the GTX560 Ti a 110W card, the same as the GTX550 Ti; maybe it could get called a "GTX650 Ti".

But if we double the cores to 768, then we're right back up to a 170W card, so what accounts for 225W? The GPU would have to have either 1024 cores at 1770MHz or some other combination of higher frequencies and fewer cores. Or the VRAM frequency could increase, but the VRAM isn't the majority of the remaining circuitry, so it may not account for that much.
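Translating the quoted math into a quick sketch (every figure below is the post's estimate, not a measurement, and "die power scales with area" is the post's assumption):

```python
# Rough sketch of the quoted TDP math; all inputs are the post's estimates.

# GTX570 vs. 448-core GTX560 Ti: same clocks and VRAM, 32 fewer cores,
# ~9W less typical-load draw.
watts_per_core = 9 / (480 - 448)          # ~0.281 W per CUDA core

die_480 = 480 * watts_per_core            # ~135 W (full GF110)
die_448 = 448 * watts_per_core            # ~126 W (448-core GF110)
die_384 = 384 * watts_per_core            # ~108 W raw; the post adjusts
                                          # up to ~115 W for GF114's clocks

# 40nm -> 28nm: area scales by (28/40)^2, and the post treats die power
# as scaling roughly in proportion to area.
area_scale = (28 / 40) ** 2               # = 0.49, i.e. a 51% reduction

die_384_28nm = die_384 * area_scale       # ~53 W here; the post's own
                                          # figure lands around 62 W
die_768_28nm = 2 * die_384_28nm           # doubling cores roughly doubles
                                          # die draw; board/VRAM power on
                                          # top is how the post gets back
                                          # near a 170 W card

print(f"{watts_per_core:.3f} W/core")
print(f"die draws: {die_480:.0f} / {die_448:.0f} / {die_384:.0f} W")
print(f"28nm: 384-core ~{die_384_28nm:.0f} W, 768-core ~{die_768_28nm:.0f} W")
```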

My own discussion about this:

Of course this is rough math, but some things make sense. The change from 40nm to 28nm does translate to a 51% decrease in area, calculated as (28 × 28) / (40 × 40) = 0.49.

What the person in the quote above didn't account for is the die size itself. If the new chip kept the exact same die size as Fermi, then yes, roughly twice the total shaders would fit (1 / 0.49 ≈ 2). Of course, this is a new chip, so the size might change. So this math is not irrelevant, but too many factors are left out (frequency, number of shaders, and die size) to reach a final conclusion. If any of those values leak out, then we could indeed play around with some more concrete figures.

Another possibility: if the final chip has 1024 shaders, the 660 has 768 shaders, and the chip size is very similar to Fermi's, then you could conclude that the 170W simply means 25% of the chip isn't used, i.e. it's the same chip as the GK110 but with a quarter disabled. ~170 × 1.25 = ~212.5W, which brings it close to 225. You can also assume that the GK104 could be clocked slightly higher to compensate for the lower shader count (a common move in the midrange sector), so 225W would make sense at this point.
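A minimal sketch of that salvage-die arithmetic, using the hypothetical 1024/768-core split from the post (note that restoring the disabled quarter is really a factor of 1024/768 = 4/3, which lands even closer to 225W than ×1.25):

```python
# Hypothetical numbers from the post: a full 1024-core GK110-class die,
# and a 768-core GK104/GTX660 cut estimated at ~170 W.
full_cores = 1024
cut_cores = 768
card_at_768 = 170                              # W, estimated above

# The post scales by 1.25 ("add back the disabled quarter"):
print(card_at_768 * 1.25)                      # 212.5 W

# Scaling by the actual core ratio, 1024/768 = 4/3:
print(card_at_768 * full_cores / cut_cores)    # ~226.7 W, near the 225 W TDP
```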
 
Some of the info leaked so far was just proven to be direct nVidia marketing, posted by people not registered as nVidia employees.

As usual, most of the leaks stem from Chiphell, and the site's moderators received proof that the source of the leaks was nVidia marketing shills, so they were removed.

http://www.chiphell.com/thread-349828-1-1.html

The leaks may be correct, but the info leaked so far should be taken with a huge grain of salt, considering the extreme shadiness of what was just revealed.

nVidia has been caught doing this before:

http://consumerist.com/2006/02/nvidia-focus-group-member-details-hidden-program.html
http://www.boingboing.net/2006/02/06/did-nvidia-hire-an-a.html
 
Right, but die shrinks rarely perform exactly according to the math. Even if the die size at 28nm is theoretically 49% of what it would be at 40nm, that sort of scaling is hardly ever achieved in actuality. All you can really do is define the upper limit for what might be possible if they stick to the same overall die size and core design.

Theorycrafting is fun, but I can't wait until it's over. :cool:
 
Kepler currently shipping to notebook manufacturers

http://www.fudzilla.com/home/item/25747-nvidia-shipping-kepler-to-notebook-manufacturers
We have confirmed that Nvidia has started full production of Kepler notebook parts, something that can represent the mainstream part of the market and below. Manufacturers are getting the chips as we speak.

The first massive batch is expected shortly after Chinese New Year, so let's say mid-February, and after that most Nvidia Kepler early adopters and Ivy Bridge machine supporters will start making their products based on the new 22nm CPUs and 28nm Nvidia graphics. Of course, Optimus is the key feature that got Nvidia so much traction. It costs almost no battery life, unless you play a game or need the GPU to take a heavy load and give you better frame rates.

The launch date for both Ivy Bridge mobile CPUs as well as Kepler mobile parts is the first or second week of April, with April 8th being the date we’ve heard.

Execs from Nvidia's notebook division have already told us that they expect more design wins with Ivy Bridge than with the Sandy Bridge generation, and we have even heard that more than 300 designs might be the magic number.

Nvidia wants to focus on the mobile market first, as Ivy Bridge marks a big milestone in the notebook industry, and the fact that Nvidia can get Kepler even into ultrabooks speaks for itself.

More to read: Kepler in ultrabooks
 
Frankly, even for speculation, this website sucks. Stay away from it. Why? ...I know them.

It's not from them; they just reposted it from another site... and yes, I am aware how fake things can be right now, but it's just for discussion, fun, and entertainment until we get concrete numbers.
 
I'm so ready to buy a new GPU for my new build, but I'm getting so tired of waiting on Kepler. Nvidia is losing a lot of steam to AMD every day that goes by without a release. People want the new stuff, and if AMD has what's new then guess what, they're gonna get the $$$. I'm still holding off, but right now my choices are between a 580 and the 7970. Guess which one would get the nod...
 
If the specs for the GK-104 (GTX660) are true then I'm probably taking that option.

Yeah... well, based on history (the 8800GT to the GTX260 to the GTX460 to the GTX560 Ti), there is always about a 30% hop in performance. So if the GTX660 ends up 30% faster than the GTX560 Ti, then I will SLI a pair of 660s. Time to look for a 750-850W PSU :D
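For fun, compounding that rule of thumb (the ~30% per generation is the ballpark from the post above, not a benchmark):

```python
# Compounding the poster's ~30% per-generation estimate across
# 8800GT -> GTX260 -> GTX460 -> GTX560 Ti -> hypothetical GTX660.
gain_per_gen = 1.30
generations = 4
print(f"{gain_per_gen ** generations:.2f}x over the 8800GT")  # ~2.86x
```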
 

Nothing's keeping you from getting the current best... But of course, you are so close to the release date that the 7970 is probably going to be cheaper when the Kepler line comes out anyway.

If nVidia wants to crush AMD, they would have to release two cards at launch: top end and top midrange. The top midrange would have to compete with the 7970, the same as when the 8800GTS and 8800GTX were released. They made ATi need a diaper change. :D
 