
90nm was pretty bad; what makes 65nm easier?


madcow235

Member
Joined
May 27, 2002
Location
Purdue University, IN
90nm has been a very slow, rough transition for Intel, AMD, and IBM alike. There has been a little talk of 65nm, especially when speaking of multi-core CPUs, and everyone seems to think 65nm will actually work. I'm thinking at 65nm all hell is pretty much going to break loose, since at 90nm the voltage requirements don't scale well at all compared to 130nm. I think we've basically hit a brick wall when it comes to getting smaller with the current technology. Are there going to be new ways of making circuits, or are we going to have to rely on a revolution like SOI or strained silicon?
 

Duesman

Registered
Joined
May 19, 2004
Location
Bellevue, Wa
Just wait and see. 90nm is only now being introduced to the rest of us, and knowing how companies try to force out new stuff, I would bet 65nm will come sometime in the future. We'll just have to see who gets creamed by that wall.
 

Alacritan

Member
Joined
Jul 29, 2002
Location
Kingston, NY
They'll use 65nm with multicores and run each core at a lower clock speed and voltage, so the leakage problem isn't an issue. You could easily have a 65nm dual core with each core running at 2.0GHz. AMD is nearing 2.8GHz at 90nm, and 2.6GHz at 90nm is no longer a problem at all. IBM is already starting production on 65nm processors; they just run at a lower clock speed. The combination of lower clock speed and lower voltage makes for a rather large reduction in power consumption and, consequently, heat production. Thus, it's better to have several cores running at a lower clock speed and voltage. The final outcome is better performance using less power. At least, that's what I've gathered. Single-core processors are done after 90nm at these speeds. AMD and Intel will continue to try to increase CPU speeds until they have dual/multi-core processors worked out and ready for production.
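The power claim above follows from the textbook dynamic-power relation, P ≈ C·V²·f. Here's a minimal back-of-envelope sketch; the voltage and frequency numbers are purely illustrative assumptions, not specs for any real core:

```python
# Back-of-envelope for the claim above, using the textbook dynamic-power
# scaling P ~ C * V^2 * f. The voltage/frequency figures below are
# illustrative assumptions, not measured values for any actual CPU.

def relative_power(v_ratio, f_ratio):
    """Power relative to a baseline when voltage and frequency both scale."""
    return v_ratio ** 2 * f_ratio

# Example: drop a core from 2.6 GHz @ 1.5 V to 2.0 GHz @ 1.2 V.
per_core = relative_power(1.2 / 1.5, 2.0 / 2.6)
print(round(per_core, 2))      # ~0.49x the baseline power per core
print(round(2 * per_core, 2))  # two such cores: ~0.98x baseline total
```

So two slowed-down, lower-voltage cores can land near the power budget of a single full-speed core, which is exactly the trade-off being described.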
 

Drec

Member
Joined
May 23, 2004
Nah, dual core is definitely the future. I doubt they'll be making 65nm anytime soon.
 

Quailane

Member
Joined
Oct 18, 2003
Alacritan said:
Thus, it's better to have several cores running at a lower clock speed and voltage.

I don't think so. For me there would be no incentive to get a dual core, because games aren't multi-threaded apps. Also, no matter what you do, games can't really take advantage of it very well, because most of what is processed depends on what was previously processed, or on what is in the cache.

Drec said:
Nah, dual core is definitely the future. I doubt they'll be making 65nm anytime soon.

I'd like to see the power consumption on a dual-anything at 90nm. All I can say is that CPUs are going to be a lot more expensive for not too much gain. You'll have to pay for those two-in-one CPUs, you know. If you don't want to, you can always pick up a single-core Celeron. But the fact is they'll have to go to 65nm to make it feasible. People aren't going to be buying these things for the desktop until 65nm comes around.
 

hUMANbEATbOX

Contributing Member
Joined
Nov 17, 2002
Yeah, really, what's the point of waiting FOREVER for a dual core chip at 2x2.0GHz??

Just build a duallie NOW, save yourself the hundreds of dollars and the wait time...

Two cores at 2GHz = slow for today's apps, games, and OSes. (Well, I suppose things like Photoshop would like it, but like I said... just get a duallie.)
 

Nightingale

Member
Joined
Dec 13, 2002
Location
Ohio
I am sure that there will be more support for SMP and dual-core processors very soon, given the dual-core processors coming out next year. I myself went with a duallie Opteron setup because of the possibility of buying these dual-core Opterons and popping two into my system for a quad (supposedly it should work that way, according to AMD). I think dual core is going to be a turning point for how software is written, and more multithreaded apps will follow. I was kind of surprised we haven't already seen more multithreaded apps with Intel's HT, which at least simulates dual processors.
 

Captain Newbie

Senior Django-loving Member
madcow235 said:
They lowered the voltage in Prescotts and they are still ovens. 65nm isn't going to be easier, it's just going to be a bigger oven

No no, a smaller oven with higher power per sq. mm :)

Games are heavily video-card intensive; while the processor is still important, whatever is sitting in your PCI-E or AGP 8x slot matters more. I think that multithreading is really going to catch on because of the new tack all the processor manufacturers seem to be taking.
 

@md0Cer

Senior Member
Joined
Nov 6, 2003
Location
Denver, CO
madcow235 said:
They lowered the voltage in Prescotts and they are still ovens. 65nm isn't going to be easier, it's just going to be a bigger oven

Not necessarily. Prescott, for some reason, had relatively large amounts of power leaking through the transistor gates, dissipated as heat. I believe the 90nm version of the Pentium M core does not have this inefficiency.

In general, the smaller the gate length of the transistors, the higher the clock speed achievable at a lower vcore, with less power consumption and thermal output. But there have to be certain technologies to keep power from escaping through the gates like in Prescott. Intel tried a process called 'strained silicon' in the Prescott; I am not sure how successful it was, but I hear it will work better at 65nm. Another example would be AMD's Silicon On Insulator. I heard a rumor a while ago that to get to 45nm, AMD might be developing with IBM and planning to use something called 'nickel silicide' in place of silicon; I am not at all sure about that one, though.

-0cer
 

NovaShine

Member
Joined
Nov 20, 2003
Location
Sydney Australia
@md0Cer said:
Not necessarily. Prescott, for some reason, had relatively large amounts of power leaking through the transistor gates, dissipated as heat. I believe the 90nm version of the Pentium M core does not have this inefficiency.

In general, the smaller the gate length of the transistors, the higher the clock speed achievable at a lower vcore, with less power consumption and thermal output. But there have to be certain technologies to keep power from escaping through the gates like in Prescott. Intel tried a process called 'strained silicon' in the Prescott; I am not sure how successful it was, but I hear it will work better at 65nm. Another example would be AMD's Silicon On Insulator. I heard a rumor a while ago that to get to 45nm, AMD might be developing with IBM and planning to use something called 'nickel silicide' in place of silicon; I am not at all sure about that one, though.

-0cer

The Prescott is the only so-called 'failure' at 90nm. The Pentium M's transition to 90nm (Dothan) seems to have been quite successful, and a good one at that. Lower heat, lower power requirements, higher clocks... looking good to me.
 

Enigma422

Member
Joined
Mar 18, 2002
Location
The Parabolic Quantum Well
Alacritan said:
IBM is already starting production on 65nm processors, they just run at a lower clock speed.

Currently there is no production 65nm process. Intel is the only company that has successfully implemented a 90nm process, or so they are presenting that appearance to the public. As for AMD and IBM, they are having problems implementing the 90nm process, and their yields are not high.

@md0Cer said:
But there have to be certain technologies to keep power from escaping through the gates like in Prescott. Intel tried a process called 'strained silicon' in the Prescott; I am not sure how successful it was, but I hear it will work better at 65nm. Another example would be AMD's Silicon On Insulator. I heard a rumor a while ago that to get to 45nm, AMD might be developing with IBM and planning to use something called 'nickel silicide' in place of silicon; I am not at all sure about that one, though.

-0cer

Actually, Intel has successfully integrated strained silicon into its 90nm process, and it is currently used in the Prescott core. Strained silicon increases the saturation velocity of carriers in the channel region, allowing for faster switching times vs. standard silicon transistors.
 

Mr. $T$

Member
Joined
Sep 15, 2001
I believe that strained silicon is also why the Prescott is so hot, because the power can leak more easily.
 

dippy_skoodlez

Member
Joined
Mar 19, 2003
Location
In front of my computer
Enigma422 said:
Currently there is no production 65nm process. Intel is the only company that has successfully implemented a 90nm process, or so they are presenting that appearance to the public. As for AMD and IBM, they are having problems implementing the 90nm process, and their yields are not high.

AMD has leaked a few 90nm mobile Athlon 64s. They exist, just not in large numbers.

[attached image: k8D0.jpg]
 

aNTiChRisT

Member
Joined
Feb 25, 2004
Location
England
It was my understanding that dual-core processors will be able to tackle multiple threads (open your Task Manager -- you have 20 or so threads running), and I have no doubt that operating systems will spread the load across the two cores. If they can make CPUs with dual cores, they can split threads for faster processing.

If, for example, you had a Dothan-based dual core at 1.6GHz (comparable to a 2.4GHz P4), with successful load-splitting support that (for argument's sake) is 100% efficient -- that's a whopping comparable 4.8GHz on a 2x1.6GHz CPU.

Yes, you can punch holes in that, but it's just an example. When Dothans hit the desktop they will probably be 1.8 or 2.0GHz, so it will be a step up from the 4GHz Prescotts.
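The back-of-envelope above can be sketched like this; the 1.5x per-clock factor (2.4/1.6) and the scaling efficiencies are for-argument's-sake assumptions, not benchmark results:

```python
# Sketch of the 'comparable GHz' estimate above. The per-clock factor and
# scaling efficiencies are illustrative assumptions, not measurements.

def comparable_ghz(core_ghz, per_clock_factor, cores, efficiency):
    """Rough P4-equivalent clock for a multi-core chip.

    efficiency = fraction of each extra core's work that actually
    contributes (1.0 = perfect load splitting).
    """
    return core_ghz * per_clock_factor * (1 + (cores - 1) * efficiency)

# 1.6 GHz Dothan ~ 2.4 GHz P4, so per-clock factor = 2.4 / 1.6 = 1.5
print(round(comparable_ghz(1.6, 1.5, cores=2, efficiency=1.0), 2))  # 4.8 (ideal)
print(round(comparable_ghz(1.6, 1.5, cores=2, efficiency=0.6), 2))  # 3.84 (less perfect splitting)
```

The efficiency knob is where the holes get punched: real workloads split far from perfectly, so the ideal 4.8 figure is an upper bound.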

~t0m
 

Stratcat

Member
Joined
Feb 8, 2004
Location
Chicago - USA
Enigma422 said:
Actually, Intel has successfully integrated strained silicon into its 90nm process, and it is currently used in the Prescott core.

Yes, Intel has managed to produce strained-silicon Prescotts. But in Prescott's case, "successfully", IMO, needs to be qualified. ATM, the C0s appear marginal, and the (barely available) D0s seem only slightly better.

Intel struggled long and hard with Prescott: many delays, revised TDPs, revamped VRM and FMB specs (after significant "Prescott Ready" mobo releases), a difficult and slow launch with limited product availability, immediately announced PCNs for stepping changes before even cursory availability of the release stepping was attained, and even the introduction of a new system form factor (BTX), all while they tried to get a handle on the thermal issues.

I DO admit the current D0 Prescotts appear to be reasonably ready (if not yet commonly available) for the general consumer market, so I guess they MAY be called a qualified success, in that they ARE entering the marketplace. But if this were a successful implementation, we wouldn't be waiting with bated breath for further D0 availability, to say nothing of the as-yet-unavailable E0s.

It's been one helluva long, tough haul. I agree they will eventually pull it off. Probably soon. Manufacturing process control is a science (and art) of continuing refinement.

Strat
 

NicePants42

Member
Joined
Dec 19, 2003
NovaShine said:
The prescott is the only so called 'failure' for 90nm. The Pentium M's transition to 90nm (Dothan) seemed to be quite successful and a good one at that. Lower heat, power requirements, higher clocks, lookin good to me.

Has anyone read the Dothan article over at Anandtech? According to that article, one of the main reasons that the Pentium M is so efficient is that each generation of the CPU is slowed down to as close to the target clock speed as possible, meaning that the ability to overclock the FSB is greatly reduced.

Not sure if all the efficiency will be a requirement for desktop dual cores, but making sure that chips can't be overclocked makes it easier to charge more money for faster CPUs, right? Something to think about.
 

Moto7451

Senior Something
Joined
Feb 24, 2004
Location
LA, CA
madcow235 said:
They lowered the voltage in Prescotts and they are still ovens. 65nm isn't going to be easier, it's just going to be a bigger oven

Or in some ways a smaller one, as the die sizes will shrink, making them harder to cool. 90W on a 40cm^2 die is easier to cool than 60W on a 20cm^2 die. It's all a matter of surface area and economics. The smaller your die, the cheaper a CPU is to make, because you can make more of them from the materials available. However, when your CPU puts out too much heat and has too small a surface area, you run into problems. When voltage starts leaking, it makes your heat problem worse, compounded by the smaller surface area.
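The cooling point above is just watts per unit area. A quick sketch using the post's hypothetical numbers (real desktop dies are closer to 1 cm^2, so treat the figures as purely illustrative):

```python
# Power density (W/cm^2) for the two hypothetical dies in the post above.
# The wattages and areas are the post's illustrative figures, not real specs.

def power_density(watts, area_cm2):
    return watts / area_cm2

big_hot = power_density(90, 40)     # 90 W spread over 40 cm^2
small_cool = power_density(60, 20)  # 60 W over only 20 cm^2

print(big_hot)     # 2.25 W/cm^2
print(small_cool)  # 3.0 W/cm^2 -- less total heat, but denser and harder to cool
```

The shrunken die wins on total watts yet loses on watts per square centimeter, which is what the heatsink interface actually has to deal with.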
 

NovaShine

Member
Joined
Nov 20, 2003
Location
Sydney Australia
NicePants42 said:
"The prescott is the only so called 'failure' for 90nm. The Pentium M's transition to 90nm (Dothan) seemed to be quite successful and a good one at that. Lower heat, power requirements, higher clocks, lookin good to me."

Has anyone read the Dothan article over at Anandtech? According to that article, one of the main reasons that the Pentium M is so efficient is that each generation of the CPU is slowed down to as close to the target clock speed as possible, meaning that the ability to overclock the FSB is greatly reduced.

Not sure if all the efficiency will be a requirement for desktop dual cores, but making sure that chips can't be overclocked makes it easier to charge more money for faster CPUs, right? Something to think about.

I actually read that article before writing that post. But can you please elaborate on this phrase:

one of the main reasons that the Pentium M is so efficient is that each generation of the CPU is slowed down to as close to the target clock speed as possible, meaning that the ability to overclock the FSB is greatly reduced.

What I got from the Dothan article was that the Dothan was built to a target clock speed, not built to reach the highest possible clock speed.