45 to 32nm = die shrink. The term "die shrink" is more closely associated with "process shrink," so yes, it's a bit of a misnomer, but whether or not the die size actually shrinks, decreases in process lithography geometry are called die shrinks.
In this sense we're talking about die size reduction which would equate to more dies per wafer. It was claimed that going from 32nm to 22nm doubled the dies per wafer.
Ahh, didn't know the conversation had gone that direction. However, the fact remains that 32nm to 20nm is still considered a "die shrink" regardless of what the final die area is after the final architecture is determined. Technically wrong, but colloquially correct. Just how it is. If all die shrinks were truly "die area shrinks," then CPUs would be the size of a pinhead by now.
Even if 32 to 22 did double the dies per wafer, it wouldn't double the sellable dies per wafer. At 22nm the fraction of failed (unsellable) chips is much higher than at 32nm; i.e., the yield is lower at 22nm than at 32nm.
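A hedged sketch of that yield point, using the simple Poisson yield model Y = exp(-D*A). The defect densities and die counts below are invented for illustration, not real fab numbers:

```python
import math

# Poisson yield model: Y = exp(-D * A), where D is defect density
# (defects/cm^2) and A is die area (cm^2). All numbers below are
# made-up illustrative values, not real Intel/TSMC figures.

def good_dies(dies_per_wafer, defect_density, die_area_cm2):
    """Sellable dies per wafer under the Poisson yield model."""
    yield_fraction = math.exp(-defect_density * die_area_cm2)
    return dies_per_wafer * yield_fraction

# Same wafer: the shrink doubles candidate dies, but the newer process
# starts out with a higher defect density.
old_node = good_dies(dies_per_wafer=200, defect_density=0.2, die_area_cm2=2.0)
new_node = good_dies(dies_per_wafer=400, defect_density=0.8, die_area_cm2=1.0)
print(old_node, new_node)  # doubling dies/wafer does not double sellable chips
```

With these placeholder numbers the new node yields more sellable chips, but well short of double, which is the point being made above.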
That said, using these small-process CPU architectures in mobile devices at significantly reduced voltage reduces the heat load per mm^2 that needs to be removed from these vanishingly small traces.
Intel showed off the first 32nm SRAM test chips at the IDF in 2007, and this year they showed us their first 22nm SRAM test chips. With the 22nm process, Intel will be able to produce four times the number of chips per wafer of the 45nm process, thus making CPUs even cheaper once the process reaches mature yields.
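As a back-of-envelope check on those per-wafer claims, here is a minimal sketch assuming die area scales ideally with the square of the feature size (real designs only approximate this, so treat the numbers as rough):

```python
# Ideal dies-per-wafer scaling from a "perfect" process shrink, where
# die area shrinks with the square of the feature size. Illustrative
# arithmetic only, not Intel's actual figures.

def area_scaling(old_nm, new_nm):
    """Ideal increase in dies per wafer when shrinking old_nm -> new_nm."""
    return (old_nm / new_nm) ** 2

print(area_scaling(45, 32))  # ~1.98x: roughly double
print(area_scaling(32, 22))  # ~2.12x: roughly double again
print(area_scaling(45, 22))  # ~4.18x: the "four times vs 45nm" figure
```

The ideal math lines up with both claims in the thread: each full-node shrink roughly doubles dies per wafer, and two shrinks (45nm to 22nm) give roughly four times as many.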
Yields are very good and defect density is steadily decreasing.
Source 2009 article!!! Not new information!!
4 times as many chips per wafer as 45nm (referring to the SRAM test chips, I think? And cache takes up more die area than the cores these days).
As for comparing the Q6600 to Sandy Bridge or Ivy Bridge, that's not accurate since no one makes the Q6600 anymore, and it has neither the cache, the IGP, nor the IMC that later chips do. I thought the Lynnfield die size was a bit more accurate a comparison: it had an IMC and an IGP and more cache than the Q6600.
Anyway, it was not MY claim that the number of chips doubled; news sources claimed it. I just repeated it.
Also this part
They do not show how it is decreasing versus the defect numbers of previous process sizes, but they do say it is decreasing. What that means, I do not know.
Nvidia discussing die shrink
Now nVidia seems to be saying what wingman is. They are complaining (despite what the slide shows) that die shrinks take longer to hit an equitable price/transistor ratio. Of course, their die shrink from 90nm to 55nm also saw the transistor count move from 681M to 1.42B!! In one generation.
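To sanity-check that 90nm-to-55nm example with back-of-envelope arithmetic (assuming ideal density scaling with the square of the feature size, which real processes only approximate):

```python
# Back-of-envelope check of the 90nm -> 55nm example above. Ideal
# transistor density scales as (90/55)^2; comparing that against the
# actual transistor growth (681M -> 1.42B) shows whether the die area
# grew or shrank. Illustrative arithmetic, not measured die sizes.

density_gain = (90 / 55) ** 2        # ~2.68x more transistors per mm^2
count_growth = 1.42e9 / 681e6        # ~2.09x more transistors
relative_die_area = count_growth / density_gain

print(density_gain, count_growth, relative_die_area)
# relative_die_area < 1: under ideal scaling the die still got somewhat
# smaller despite more than doubling the transistor count
```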
From your link Quote:The move will most likely start during the second half of next year and it will probably be quite expensive, as it is estimated that the costs of the shrink will only be amortized after building and selling at least 100 million 20 nm chips.
Do you realize the street value of this mountain? It is pure snow. I loved that movie Better Off Dead.
Ahh, see, what I saw on the chart was that the older "more cost effective" transistors took longer to drop in price than the newer ones; I didn't see the difference in the bottom line. I thought, given the short lifespan of each arch, a quick drop in price was more beneficial than a steady decline into obsolescence.
Except the 28nm and further shrinks not only drop faster in price but go lower. So I still do not get it.
For others that did not click the link...
It took three quarters for 40nm to reach a lower price per transistor than 80nm or 55nm (which spanned years), and the same three quarters for 28nm to meet 40nm in price per transistor (with the same three quarters projected for the next couple of die shrinks). The cost drops 80% from the start to less than a month in, which is projected vs. actual, and it continues to decline as inventory expands and supply crushes the prices down.
As an aside, Nikon boasted 200 wafers an hour at 20nm using their equipment. Surely more per hour means lower cost, especially when talking about supply.
Now I do understand that this argument is saying that, at the same cost per transistor, a 22nm design with more transistors costs more than a 32nm chip with less than half the transistors.
And there is more to it than that. As mentioned earlier, the 65nm Q6600 only had 582 million transistors, while a Sandy Bridge with 995 million adds an IGP, an IMC, and more cache memory.
So nVidia's charts show that, despite the reduction in cost per transistor, the increase in transistor count leads to a higher total price.
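The arithmetic behind that point can be sketched as follows. The per-transistor costs here are invented placeholders purely to show how a lower unit cost can still produce a pricier chip:

```python
# Total die cost ~ transistor count x cost per transistor. The
# normalized per-transistor costs below are hypothetical, chosen only
# to illustrate the argument; the transistor counts are from the thread.

q6600_transistors = 582e6   # Q6600
sandy_transistors = 995e6   # Sandy Bridge quad with IGP/IMC/more cache

cost_per_transistor_old = 1.0   # normalized, hypothetical
cost_per_transistor_new = 0.7   # cheaper per transistor, hypothetical

old_cost = q6600_transistors * cost_per_transistor_old
new_cost = sandy_transistors * cost_per_transistor_new
print(new_cost > old_cost)  # more transistors outweigh the cheaper unit cost
```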
See I am capable of seeing both sides.
The only issue is, nVidia is the only one claiming that. Everyone else claims a reduction in cost. Now, Intel's relatively recent move to an integrated GPU and IMC, plus the increased cache, is offset by not building that into their chipsets. No advanced IOH, extra power, etc.
Now all you need is a digital-to-analog converter for the FDI and you're off and running. ASUS has made money on their separate chips that allow remote control of everything, despite that being built into Intel chips on workstation chipsets. A value-add Intel did not need to include, but it's why ASUS software needs Intel MEI to work.
Anyway, back to the nVidia point. nVidia is complaining they can no longer justify $500 flagship costs at sub-28nm. Basically, that is what they are saying. Which means software developers are no longer going to get their stipends to design TWIMTBP games, which is good news for AMD!
IF CPU designers were saying the same thing, it would be different. Did AMD say 20nm is going to drive up costs? Did Intel say that? Did ARM say that? No.
Apparently only nVidia says it. Which might mean something, but not to me, or most people.
Back to doubling cores per wafer with each die shrink: I didn't say it, I repeated it; Intel said it. Would Big Blue lie? Despite the supposed doubling or whatever of transistor counts, die shrinks still happen. L# cache is the size culprit; it's why SB-E chips are so big. Remember when 16 MB of RAM used to be a 3.5" PCB you added to a mobo? Ever think it could run at 6000 MHz?
Oh here is another good pic of SB-E
Cost per TRIGATE transistor might go up, but tri-gate might be 5% of every CPU... every DIE is an 8-core...
The move will most likely start during the second half of next year and it will probably be quite expensive, as it is estimated that the costs of the shrink will only be amortized after building and selling at least 100 million 20 nm chips.
What Nvidia is saying is this. Look at 50nm. It starts at 0.8 normalized cost per transistor in Q2 2007 and reaches 0.3 at Q4 2010.
Now, two quarters prior to that, they started 40nm, which shot the price per transistor up to 1.0 (or above), but its advantages were: A) higher transistor density and therefore a more powerful GPU; B) its normalized cost dropped pretty quickly; and most importantly C) 40nm didn't require massive and extremely expensive changes in lithography equipment and production-line retooling, so the additional cost to produce (the higher normalized cost per transistor, which you've mistakenly confused with consumer price, which is affected by inventories, etc.) is fairly quickly recouped.
Now look at 28nm to 20nm. 28nm starts at Q3 2011, and we see it go to Q3 2013, where they show the projected 20nm process start. So at Q3 2013, with 28nm, they project a production cost of just north of 0.2 per transistor. The best-case scenario for 20nm is for its production cost to match that same 28nm level. So what incentive do they have to switch to the next node size of 20nm when it's going to jump up significantly at first compared to where they currently are, has no savings potential in the future, and is starting to take longer to normalize in cost as they go smaller, instead of taking less time as they were seeing previously?
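The crossover argument can be illustrated with a toy model. The starting cost, floor, and decay rate below are made up for illustration and are not taken from nVidia's chart:

```python
# Toy model of the crossover argument above. A mature node sits near its
# cost floor while a new node starts high and decays each quarter; the
# question is how many quarters until the new node's normalized cost per
# transistor beats the old one. All parameters are invented.

def new_node_cost(q, start=1.0, floor=0.15, decay=0.7):
    """Normalized cost per transistor, q quarters after ramp start."""
    return floor + (start - floor) * decay ** q

mature_old_cost = 0.22  # old node, already sitting near its floor

# First quarter where the new node actually undercuts the mature old node:
crossover = next(q for q in range(40) if new_node_cost(q) < mature_old_cost)
print(crossover)  # quarters before the shrink pays off per transistor
```

The longer that crossover takes (and the higher the up-front cost spike), the weaker the incentive to move to the next node, which is the complaint being described.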
What you're failing to see is that, even with Intel, as the node size progresses to 20nm and smaller, each fabrication company or division of a company is having to invest large amounts of money in new and much more expensive lithography processes that were previously unneeded. This is because the node size has become so small that the processes they've used in the past don't have a resolution high enough for wafer yields to remain at the same level they were. Which is why you see on the next slide that wafer costs are rising significantly and exponentially. In other words, they've reached a tipping point in the production of ICs, whether CPUs or GPUs, that's getting more and more expensive and requires longer and longer to recoup the costs.
http://news.softpedia.com/news/20-n...Year-for-AMD-Qualcomm-and-Nvidia-270723.shtml
That's precisely why they are looking to new technology to solve this problem, in areas such as a replacement for silicon.
Here's another interesting article to read.
http://www.electroiq.com/articles/s...ool-explores-tradeoffs-at-20nm-and-below.html
Source: Intel
"The move to 34nm will help lower prices of the SSDs up to 60 percent for PC and laptop makers and consumers who buy them due to the reduced die size and advanced engineering design."
Intel will need to reach a new manufacturing process every two years; this would imply going to 14 nm node as early as 2013. However, for Intel, the design rule at this node designation is actually about 30 nm.[4]
Source: http://www.eetimes.com/electronics-news/4374611/GloFo--TSMC-report-process-tech-progress
"Meanwhile at least three companies are about to participate in Globalfoundries' first multi-process wafer to qualify its 20 nm process, also in New York. The company expects to run several of the shuttles this year so it can start 20 nm production early in 2013 with tested third-party IP where needed."