I came across this fascinating article today.
Here are a couple of introductory paragraphs:
“. . . Intel appears to be further along toward production with a real 45nm process than anyone else. They also appear to be getting far better results, in terms of power-performance and yield, on 65nm than almost anyone else—certainly better than AMD, which rumor says is in serious trouble over 65nm yields and is not finding its way out.
“We also know, from company statements, that Intel is not using immersion lithography even in critical layers at 65nm, although they are apparently, according to at least one report, using it at 45nm. Assuming that Intel process developers are using the same physics as the rest of us, what the heck is going on?”
The article goes into a number of reasons, but emphasizes that a good part of why Intel is doing better is that they’ve adopted restrictive design rules (RDR) for designing CPU circuitry.
Why would you want to do that?
“If you use rules that restrict the number of patterns that can occur on a mask, you free lithography experts and process engineers from having to come up with a process that can do everything—they can concentrate on doing only the patterns that are allowed. Needless to say that is a huge reduction of the domain of the problem, and frees you from having to throw every technology you can get at the solution.”
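To make the idea concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the `Wire` type, the pitch and width values, the single-orientation rule); it isn’t Intel’s actual rule set. The point it shows is structural: once the allowed patterns are restricted, a design-rule check collapses into a few independent per-feature tests instead of a general geometry-analysis problem.

```python
# Hypothetical sketch of a restrictive-design-rule check.
# All names and values here are made up for illustration;
# this is not any foundry's real rule deck.
from dataclasses import dataclass

PITCH = 4       # assumed fixed routing pitch (arbitrary units)
MIN_WIDTH = 2   # assumed single allowed wire width

@dataclass
class Wire:
    x: int            # left edge position
    width: int
    horizontal: bool  # orientation of the wire

def check_restrictive(wires):
    """Under RDR, every wire must sit on the pitch grid, use the one
    allowed width, and run in a single orientation. Each wire can be
    checked independently -- no pairwise geometry analysis needed."""
    return all(
        w.x % PITCH == 0 and w.width == MIN_WIDTH and not w.horizontal
        for w in wires
    )

layout = [Wire(x=0, width=2, horizontal=False),
          Wire(x=4, width=2, horizontal=False)]
print(check_restrictive(layout))  # True: legal under the restricted rules
```

A general-purpose checker would instead have to reason about arbitrary widths, spacings, jogs, and orientations interacting with each other, which is the “process that can do everything” the quote describes.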
Yes, this is esoteric and definitely unsexy, but if it gets your product out the door a few months faster, that’s a pretty winning edge.