
K10 too complex for Intel to have fabbed at 65nm?

The PII/PIII design was actually the P6.

Intel has two design teams, one in the US and one in Israel. At the end of the P6 era the US team developed the Netburst architecture to replace the desktop chips, and the Israeli team developed a new mobile architecture (Core) that was more similar to the P6 than to Netburst.
 
I wonder how Intel will use hyper-threading this time. The P4 version seemed more like marketing.
 
According to this doc, the multi-threading returns, similar to the HT of the P4. It will keep more parts of the core active, with two threads sharing the core's resources. So Intel gets back to HT before AMD gets to "reverse HT". It will no doubt drive up power usage by doubling the workload in each core. We'll have to wait and see what real-world results look like before we can judge it.
 
I wonder how Intel will use hyper-threading this time. The P4 version seemed more like marketing.

I completely, 100% disagree with this statement. I had a 2.8GHz P4 as my work machine a couple of years ago and it really had trouble handling Eclipse, Firefox, Outlook, and other programs all running at the same time. Switching from one program to another took a noticeable amount of time, and the programs I was working on took longer to execute than they should have.

Then I found out that HT was disabled in the BIOS. I enabled it and it was like I had a new machine. Execution times dropped across the board and program switching was much faster and snappier. HT definitely was not marketing; it kept more parts of the chip active, like Cajun said. It's excellent technology for multitaskers.
 
I completely, 100% disagree with this statement. I had a 2.8GHz P4 as my work machine a couple of years ago and it really had trouble handling Eclipse, Firefox, Outlook, and other programs all running at the same time. Switching from one program to another took a noticeable amount of time, and the programs I was working on took longer to execute than they should have.

Then I found out that HT was disabled in the BIOS. I enabled it and it was like I had a new machine. Execution times dropped across the board and program switching was much faster and snappier. HT definitely was not marketing; it kept more parts of the chip active, like Cajun said. It's excellent technology for multitaskers.

The problem was that the P4 core was a poor design and had trouble keeping its full pipeline filled, so it never ran at full potential. HT was actually a good idea, but it was a way to regain speed lost to a bad design. So what you saw after turning it on was how the chip should have run in the first place. I'm really not sure how much Nehalem will benefit from this, as I don't see it having the same shortcomings as the P4.

If the chip can keep the pipeline full then there is no need for this hack. Just because it can run two threads at half the core's power means nothing to me. I'd rather have each thread use the full power of each core.
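The "regaining lost speed" argument can be made concrete with a toy model: if a single thread keeps the execution units busy only part of the time, a second hardware thread can soak up some of the idle slots. A rough Python sketch (the 60% busy figure is invented purely for illustration):

```python
# Toy SMT model: each cycle a thread either has work to issue or stalls.
# With two threads sharing the core, a stall cycle from one thread can
# often be filled by the other.
import random

random.seed(1)

def utilization(num_threads, busy_prob, cycles=100_000):
    """Fraction of cycles where at least one thread had work to issue."""
    busy_cycles = 0
    for _ in range(cycles):
        if any(random.random() < busy_prob for _ in range(num_threads)):
            busy_cycles += 1
    return busy_cycles / cycles

single = utilization(1, busy_prob=0.6)  # one thread, busy 60% of cycles
smt    = utilization(2, busy_prob=0.6)  # two threads sharing the core

print(f"1 thread:  ~{single:.0%} of cycles doing useful work")
print(f"2 threads: ~{smt:.0%} of cycles doing useful work")
```

With these made-up numbers the second thread lifts utilization from roughly 60% to roughly 84% (1 - 0.4 squared), which is the flavour of gain HT was after. Note it never gets you a genuine second core's worth of throughput.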
 
The problem was that the P4 core was a poor design and had trouble keeping its full pipeline filled, so it never ran at full potential. HT was actually a good idea, but it was a way to regain speed lost to a bad design. So what you saw after turning it on was how the chip should have run in the first place. I'm really not sure how much Nehalem will benefit from this, as I don't see it having the same shortcomings as the P4.

If the chip can keep the pipeline full then there is no need for this hack. Just because it can run two threads at half the core's power means nothing to me. I'd rather have each thread use the full power of each core.

It was a very inefficient design, one that relied on high clock speeds to make up for its long pipeline. For Intel to increase clocks they had to lengthen the pipeline, as they did with Netburst, if I recall right from reading the other day. Netburst started out as a 20-stage pipeline; by the end of its life, with the Prescott core, it had grown to 31 stages. HT was the only way left to increase the efficiency of a single-core Netburst CPU. If the design had been efficient in the first place, Intel would never have needed HT to start with.

Have you read about "Atom" yet? HT has benefits, it just needs to be used on the right architecture, unlike what happened with Netburst. Atom is different in that it has a short pipeline but does in-order instruction execution. For a CPU architecture like that, HT will help with the work done per clock: if one instruction can't be executed, another one gets worked on. Not all "Atom" CPUs will have HT, though. The ones that won't will be the dual-core parts, aka Diamondville; Silverthorne will be the single-core, HT-enabled "Atoms".

If "HT" is a gimmick then "Reverse HT" must be a bad joke.
 
Anyone who's seen and worked with the Chart (IBM) processes knows that by "too complex", Intel means "horribly written and asinine". And who exactly did they have a "conference call" with? I bet you it was some marketing puke or VP. I highly doubt they talked to any decent engineers, though considering they (Intel, IIRC) had an engineering manager in charge of mismatch who didn't even frigging know it would be a good idea to use dummies on differential-pair diffusion groups, it could be possible. The only other way I could see Intel being unable to use that process is that they're simply not set up for it.

I'm telling you guys right now, it isn't about how awesome certain digital processes are. They're all extremely close (at like sizes). Dimensions vary here and there of course, and they each have their own characteristics for yield and parasitics, but one doesn't destroy another intrinsically. It really comes down to the talent you have at the time. If your circuit designers/mask designers/CAD groups are dummies who would rather collect paychecks than use their brains, you'll end up with a crappier chip and poor yield, basically wasting that 3 or 4 million bucks on tape-out.
 
Nehalem is based on a revamped Core m-arch, so it's hardly logical to say it resembles K10 - unless all you understand about CPUs is the # of cores, cache levels and things like the existence of an IMC.

AMD does use a 45nm IMMERSION lithography process, which is fairly high-end.
Immersion litho is more complex and more defect-prone - thus way more expensive - than dry. Immersion is technically inferior - there are no qualities to gain from immersion, except that sub-45nm necessitates it because the dry method is not capable of the resolutions required.

Maybe there's hope for them yet come Bulldozer (if they stay in business that long).
FYI: AMD officially removed Bulldozer (page 16) from their roadmap back in 2007. It's not coming at least before 2010.

Whatever happened to AMD's reverse Hyper Threading tech they talked about a few years back?
"R-HT" was invented by none other than theINQUIRER. AMD has never hinted at such tech.

AMD owns 3 fabs, one in the States and 2 in Germany.
FAB30/38: offline - 65nm / 300mm transition
FAB36 - online - 65nm production, 45nm pilot
There is no 3rd AMD fab.
 
Have you read about "Atom" yet? HT has benefits, it just needs to be used on the right architecture, unlike what happened with Netburst. Atom is different in that it has a short pipeline but does in-order instruction execution. For a CPU architecture like that, HT will help with the work done per clock: if one instruction can't be executed, another one gets worked on. Not all "Atom" CPUs will have HT, though. The ones that won't will be the dual-core parts, aka Diamondville; Silverthorne will be the single-core, HT-enabled "Atoms".

If "HT" is a gimmick then "Reverse HT" must be a bad joke.

The way it sounds like Atom will work, the second thread does not get CPU time until something goes wrong and the cycle would otherwise be wasted. It sounds kind of like error correction, but if it does work that way then it should not be marketed as having a second thread per core, as that gives a false sense of CPU power. It does not run like it has 16 cores; it just makes the 8 a little more efficient. It also sounds like running a 16-thread app would bog down the CPU vs just running the same app with 8 threads.
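That "only hand the second thread a cycle that would otherwise be wasted" behaviour can be sketched as a toy model (the 25% stall rate is an invented number, and real cores are far messier than this):

```python
# Toy in-order core: the primary thread issues an instruction unless it
# stalls; when it stalls, the second thread (if enabled) is handed the
# cycle that would otherwise be wasted.

def core_ipc(stall_rate, second_thread):
    """Average instructions per cycle for one toy in-order core."""
    ipc = 1.0 - stall_rate      # primary thread's useful cycles
    if second_thread:
        ipc += stall_rate       # optimistically assume every stall is filled
    return ipc

# Invented stall rate of 25%:
print(core_ipc(0.25, second_thread=False))  # 0.75
print(core_ipc(0.25, second_thread=True))   # 1.0 -- better, not 2 cores' worth
```

Even in this best case the second thread only recovers the stalled fraction of cycles, which matches the point above: it makes each core a little more efficient rather than acting like a second core.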

AMD has said that Reverse HT was a rumour someone started and that they have no intention of doing any such thing. So yes, it is a bad joke.
 
It was a very inefficient design, one that relied on high clock speeds to make up for its long pipeline. For Intel to increase clocks they had to lengthen the pipeline, as they did with Netburst, if I recall right from reading the other day. Netburst started out as a 20-stage pipeline; by the end of its life, with the Prescott core, it had grown to 31 stages. HT was the only way left to increase the efficiency of a single-core Netburst CPU. If the design had been efficient in the first place, Intel would never have needed HT to start with.

The whole idea of Netburst was to eventually achieve a 10GHz CPU. That's what Intel's goal was when they started the project. Obviously they had no idea how power consumption would ramp up as clock rate did, but I digress.

In order to have a high clock rate, you MUST have a long pipeline. This necessarily makes the processor less efficient in IPC (instructions per clock), because the clock rate is so freaking high that relatively few instructions get completed every tick of the clock. But that shouldn't theoretically matter: with the clock ticking so fast, IPC should become irrelevant.
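To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch in Python (the IPC and clock figures are invented for illustration, not measured values):

```python
# Throughput in instructions per second is IPC * clock frequency,
# so a low-IPC design can still win if it clocks high enough.

def throughput(ipc, clock_hz):
    """Instructions retired per second."""
    return ipc * clock_hz

# Purely illustrative figures:
long_pipe  = throughput(ipc=0.8, clock_hz=3.8e9)  # Netburst-style: low IPC, high clock
short_pipe = throughput(ipc=1.6, clock_hz=2.4e9)  # Core-style: high IPC, lower clock

print(f"long pipeline:  {long_pipe / 1e9:.2f} G instructions/s")
print(f"short pipeline: {short_pipe / 1e9:.2f} G instructions/s")
```

With these invented numbers the shorter pipeline wins; push the long design's clock toward the 10GHz Intel was aiming for and it wins instead, which was the whole bet.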

However, Intel did find out that their architecture was pretty inefficient, mainly because of their apparent inability to keep the pipelines full. I don't believe this was because of a fundamental flaw, but due to the inherent inaccuracy of branch prediction technology. Even now, we don't have perfect branch predictors. Back when the P4 was Intel's main chip, if the predictor was wrong, they had to flush a 31-stage pipeline. That's a LOT of work that needs to get undone and redone, causing horrendous inefficiency whenever a branch is incorrectly predicted. Then for a long time, while the pipeline filled back up, large portions of the logic (the logic further down the pipe) would sit there idle, doing nothing. THIS is why the pipeline wasn't full all the time, not because Intel LOL COULDN'T FILL THEIR PIPEZ. I hear that argument all the time and have NEVER once heard an argument that takes a stance other than the one I just made.
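The cost of those flushes can be estimated with the standard effective-CPI formula; here's a hypothetical sketch (the workload mix and misprediction rate are made-up numbers, and the penalty is crudely approximated as the pipeline depth):

```python
# Standard effective-CPI estimate: every mispredicted branch costs
# roughly a pipeline's worth of wasted cycles while it refills.

def effective_cpi(base_cpi, branch_frac, mispredict_rate, flush_penalty):
    return base_cpi + branch_frac * mispredict_rate * flush_penalty

# Invented workload: 20% branches, 5% of them mispredicted.
deep    = effective_cpi(1.0, 0.20, 0.05, flush_penalty=31)  # 31-stage pipe
shallow = effective_cpi(1.0, 0.20, 0.05, flush_penalty=12)  # shorter pipe

print(f"31-stage pipeline: {deep:.2f} cycles/instruction")
print(f"12-stage pipeline: {shallow:.2f} cycles/instruction")
```

The same mispredict rate hurts the deep pipeline far more, which is exactly why branch predictor accuracy mattered so much to Netburst.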

I wouldn't necessarily call Netburst a bad design. Intel learned quite a bit about branch prediction technology by doing it, and learned firsthand the limitations of CPU power consumption. They knew going into the project that they were going to have a lower IPC than a shorter-pipelined machine, but they were hoping to make up for that in clock rate.
 
Nehalem is based on a revamped Core m-arch, so it's hardly logical to say it resembles K10 - unless all you understand about CPUs is the # of cores, cache levels and things like the existence of an IMC.
The similarities come from sharing X86 technologies; underneath, all that technology is totally different. I even tried to compare die pictures from earlier products, and they all just look different.

largon said:
Immersion litho is more complex and more defect-prone - thus way more expensive - than dry. Immersion is technically inferior - there are no qualities to gain from immersion, except that sub-45nm necessitates it because the dry method is not capable of the resolutions required.
I think you were trying to say inferior, and I don't think it is. It's new tech at this point, but a lot has been invested in making it work efficiently. All the FABs dealing with 65nm and lower are facing the issue of how to mask and infuse at these tiny dimensions. Wet immersion allows the light to stay tighter on the circuit. Who knows, maybe a compressed gas would work better.
largon said:
FYI: AMD officially removed Bulldozer (page 16) from their roadmap back in 2007. It's not coming at least before 2010.
This one did disappear from the marketing pitch at least. Getting Barcelona and Shanghai out became more important.

largon said:
"R-HT" was invented by none other than theINQUIRER. AMD has never hinted at such tech.

FAB30/38: offline - 65nm / 300mm transition
FAB36 - online - 65nm production, 45nm pilot
There is no 3rd AMD fab.

We still use the term loosely, but can it be done?
The closest thing to it running on AMD, and where the INQ may have picked up the idea, is core optimization, where processes move from core to core on a rotational basis. It's a very small speed boost, allowing more pipelining, but mainly it keeps the processor cooler by moving the load around. Heat dissipation has a time factor: given more time, a core dissipates more heat.
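That cooling argument can be illustrated with a toy thermal model. The heating and cooling rates below are invented constants, so treat this as a sketch of the idea, not real silicon behaviour:

```python
# Toy thermal model: a core heats up while it holds the load and cools
# proportionally toward ambient otherwise. Compare pinning the load to
# one core vs. rotating it across four cores.

HEAT, COOL = 2.0, 0.1  # invented per-tick heating / cooling rates

def peak_temp(rotate, cores=4, ticks=1000):
    temps = [0.0] * cores
    peak = 0.0
    for t in range(ticks):
        loaded = (t // 10) % cores if rotate else 0  # rotate every 10 ticks
        for c in range(cores):
            if c == loaded:
                temps[c] += HEAT
            temps[c] -= COOL * temps[c]  # cooling proportional to temperature
            peak = max(peak, temps[c])
    return peak

print(f"pinned:   peak temperature {peak_temp(rotate=False):.1f}")
print(f"rotating: peak temperature {peak_temp(rotate=True):.1f}")
```

With these made-up constants the rotating case peaks noticeably lower, matching the intuition that each core gets idle time to shed heat; the absolute numbers mean nothing.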

I think AMD still has a small test FAB in the California office.
 
Watch out, if you start correcting other people's spelling they might do the same to you... ;)

All in all, Netburst wasn't that bad (IMHO); the later 65nm steppings at least made up for the long pipeline by running at impressively high clock speeds.

The only third Fab I know about is the forest in NY - maybe they could start selling carbon credits from it.
 
Watch out, if you start correcting other people's spelling they might do the same to you... ;)

All in all, Netburst wasn't that bad (IMHO); the later 65nm steppings at least made up for the long pipeline by running at impressively high clock speeds.

The only third Fab I know about is the forest in NY - maybe they could start selling carbon credits from it.

Not so much an attempt to correct, but to be sure that was the word he intended. Yes, I finger up a few myself. If you look at fab sites you'll find a lot of odd names for the companies that make the tools and equipment.

"Let He/She who is without spelling errors cast the first dictionary" :D

AMD/IBM have plans through 22nm http://www.fabtech.org/content/view/6166/69/
 
AlabamaCajun said:
I think you were trying to say inferior, and I don't think it is. It's new tech at this point, but a lot has been invested in making it work efficiently. All the FABs dealing with 65nm and lower are facing the issue of how to mask and infuse at these tiny dimensions. Wet immersion allows the light to stay tighter on the circuit.
As I said, immersion litho is the "necessary devil". There's nothing to gain apart from the obvious: it makes sub-45nm just easier. Intel does 45nm with double patterning (without immersion); AMD-IBM 45nm is immersion...

I think AMD sill has a small test FAB in the California office.
There's no R&D activity in Cali:
Sunnyvale, California - Business Operations
Santa Clara, California - Business Operations
San Diego, California - Sales Office

http://www.amd.com/us-en/Corporate/AboutAMD/0,,51_52_502,00.html
 
That should have been Austin, but Fab 25 shut down a few years back. R&D is in Colorado.

I doubt Intel will be using its vision of double patterning much longer; yields are too low at 45nm. They too are going to be looking at a process that scales down to 22nm. Fabs are sitting in wait for newer lithography machines, which aren't being developed fast enough. Have we seen the clocks stop long enough to learn the new tech?
 