
Moore's Law Dead??!!!


syberspy9

Member
Joined
Sep 4, 2003
Location
Salmon Arm BC CANADA
Semiconductors: When it comes to designing chips, making them go faster is no longer the most significant challenge. Is Moore's Law dead?

IT SEEMS almost obligatory to begin with Moore's Law. The observation made in 1965 by Gordon Moore-that the number of transistors on a chip would grow exponentially-has proven remarkably resilient. (Dr Moore went on to co-found Intel, which is now the world's largest chipmaker.) Each time it seems that the end is in sight, a new technique is devised to make transistors still smaller and cram more of them on to a chip.
The first transistor, built in 1947, was a few inches square and half an inch high. In 1959, two groups separately realised that instead of painstakingly making transistors one at a time, many of them could be created at once by etching conducting pathways on a wafer of silicon, which had the necessary electrical properties.

This paved the way for the modern computer. Once the concept was proven, it was just a question of progressive miniaturisation-the pace of which Moore's Law so optimistically, yet correctly, predicted. Today, transistors are etched into an ultra-thin layer on a microchip, a square inch of which can contain many tens of millions of them.

The first computer chips were designed to execute a series of straightforward operations, one after another, such as adding together or comparing two quantities. The chip could also store results and retrieve them again from a separate memory, which also held the program it was running. Then, as now, the movement of data to and fro, both inside the chip and between the chip and the memory, was co-ordinated using an oscillating crystal, which sends out a periodic signal like a ticking clock.
Over the years, the frequency of this signal, known as the clock speed, became the key measure of processor performance. In parallel with Moore's Law, which predicts that the number of transistors on a chip increases exponentially, the clock speed has done the same, doubling roughly every 18 months from thousands of ticks per second in 1971, to millions in the 1980s, to billions today. But while optimists believe that this process will continue, chip developers across the industry now agree that clock speed will no longer be the key metric of processor performance, for several reasons.
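
(A quick back-of-the-envelope check of those numbers, not from the article itself: assuming an 18-month doubling period, here is roughly how far things get between 1971 and 2004, sketched in C.)

/* Rough sanity check (not from the article): growth under an 18-month
 * doubling period, 1971 to 2004. Compile: cc growth.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double years = 2004 - 1971;           /* 33 years */
    double period = 1.5;                  /* 18 months, in years */
    double doublings = years / period;    /* 22 doublings */
    double factor = pow(2.0, doublings);  /* roughly 4.2 million */

    printf("%.0f doublings -> growth factor of about %.1f million\n",
           doublings, factor / 1e6);
    /* A few-million-fold increase: thousands of ticks per second in 1971
     * scale up to billions today, as the article says. */
    return 0;
}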

Parallel lives
The first is the growth of parallelism-the practice of getting a chip to execute many different operations simultaneously. In the past, this was confined to the realm of high-end supercomputers, as a way of improving their performance. But it is now becoming common in personal computers, and is bound to become more so. As a result, the amount of processing a chip can perform with each tick of the clock will be just as important as the frequency at which the clock is ticking.
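
(A hedged illustration of that point, with made-up numbers: overall throughput is roughly work-per-tick times ticks-per-second, so a chip that does more per tick can beat one with a faster clock.)

/* Illustration with invented figures: throughput depends on both the work
 * done per clock tick and the clock rate. */
#include <stdio.h>

int main(void) {
    double clock_a_ghz = 3.4, ops_per_tick_a = 1.0;  /* fast clock, less work per tick */
    double clock_b_ghz = 2.0, ops_per_tick_b = 2.0;  /* slower clock, more work per tick */

    printf("Chip A: %.1f billion ops/s\n", clock_a_ghz * ops_per_tick_a);
    printf("Chip B: %.1f billion ops/s\n", clock_b_ghz * ops_per_tick_b);
    /* Chip B wins (4.0 vs 3.4 billion ops/s) despite the slower clock. */
    return 0;
}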

A driving factor behind this parallelism is the fact that, while processor speed has increased with such remarkable rapidity, the speed of memories has lagged. Marc Tremblay of Sun Microsystems, a computer-maker based in Santa Clara, California, says the gap between processor speed and memory speed is likely to grow. Parallelism within a single chip allows several different processing units to share the same memory, so the memory's slowness is not such a problem.

This is because the limiting factor is not so much the throughput of memory chips (the rate at which data can be moved in and out of them) but the administrative overhead associated with moving information in and out of the processor. Because of this, chip designers can gain by putting several distinct processors on the same chip, and have them share a fast, local memory inside the chip itself. This approach is known as multiple cores, or multi-core for short. A related approach is known as simultaneous multithreading. It involves modifying a single processor to enable it to switch quickly between several distinct tasks. While one task is waiting for data to arrive from the main memory, another can continue to execute-so a single processor can, in effect, do the work of many.
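
(A minimal sketch of the multithreading idea, using C with POSIX threads; the functions and timings here are invented for illustration, not anything from the article. While one task is stalled waiting on memory, another keeps the processor busy.)

/* Minimal POSIX-threads sketch (invented example): while one task waits on
 * slow memory, another does useful work. Compile: cc threads.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *slow_fetch(void *arg) {
    (void)arg;
    sleep(1);                              /* stand-in for a long memory wait */
    printf("fetch: data finally arrived\n");
    return NULL;
}

static void *keep_computing(void *arg) {
    (void)arg;
    long sum = 0;
    for (long i = 0; i < 100000000L; i++)  /* useful work done in the meantime */
        sum += i;
    printf("compute: done, sum = %ld\n", sum);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, slow_fetch, NULL);
    pthread_create(&t2, NULL, keep_computing, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}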

A second reason why clock speed will no longer be an accurate measure of performance is that distributing the clock's signal to all the different parts of a chip is more difficult than it sounds. Jo Ebergen, an engineer at Sun, says that reducing the "skew" on a chip-the amount by which clock signals might be out of synch-takes a very skilful chip designer. It is, he says, as much an art as a science. And it is becoming more difficult as chips get larger and more complex.

Tick, tock
That is why Sun is aggressively exploring "asynchronous" technology, which involves getting rid of the clock entirely. This approach has costs as well as benefits, since miniature circuits known as "rendezvous circuits" must be placed at circuit junctions to co-ordinate the flow of data. It is rather like replacing a city-wide network of traffic lights with policemen at every corner. But, says Dr Ebergen, in one recent experiment with a test chip that could run in both synchronous and asynchronous modes, the asynchronous mode won out. That is because in a synchronous design, every operation must wait for the slowest one to complete, while in an asynchronous one, a laggard only delays the local part of a calculation.
Clockless chips, says Dr Ebergen, also have the added benefit of emitting far less radio interference. So asynchronous circuits could be particularly useful in devices such as mobile phones, where radio interference is a substantial concern. Wilf Pinfold, the director of microprocessor research at Intel, points out that opinions over the value of asynchronous design are quite divided. But it seems clear that, at least in a portion of the market, it will become more and more important.
Finally, getting chips to run at higher clock speeds is diminishing in importance because another problem is becoming more pressing: getting them to consume less power. Fred Weber, the chief technical officer of AMD, a chipmaker based in Sunnyvale, California, says power consumption is now the biggest problem in chip design, for several reasons. The first is the growing prevalence of mobile devices, such as laptop and handheld computers. Increasing the battery life of such devices is in some cases more important than squeezing an extra bit of speed out of the system.
A related problem is heat. Today's fastest PC microprocessors consume about 100 watts of electrical power, the same as a bright light bulb. But light bulbs get hot-too hot to have inside a desktop computer, and far too hot for the inside of a laptop. That is why desktop computers have noisy fans, and laptop computers are never as fast. Ghavam Shahidi, who works at IBM Research in East Fishkill, New York, predicts that high-end PCs may well come to rely upon novel techniques, such as water-cooling systems. But for PCs aimed at the mass market, fancy cooling systems will make desktop machines too expensive and laptops too bulky.
So designers are now striving to minimise the power consumption of their chips, with speed as an ancillary consideration. Dr Pinfold says one solution his team is exploring is to use multiple cores, switching from one to another not to increase speed, but rather to minimise the total heat generated. When one core gets too hot, it is switched off for a while to cool down, while another core takes over.
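
(A hypothetical sketch of that core-switching idea; the temperatures, thresholds and cooling rates below are invented, and a real chip would do this in hardware or firmware rather than in code like this, but it shows the control loop in miniature.)

/* Hypothetical sketch of hopping between two cores to spread heat; all the
 * numbers are invented for illustration. */
#include <stdio.h>

#define TOO_HOT 85.0   /* arbitrary threshold, degrees C */

int main(void) {
    double temp[2] = {60.0, 60.0};
    int active = 0;

    for (int step = 0; step < 12; step++) {
        temp[active] += 5.0;                   /* active core heats up */
        temp[1 - active] -= 3.0;               /* idle core cools down */
        if (temp[1 - active] < 40.0)
            temp[1 - active] = 40.0;

        if (temp[active] > TOO_HOT) {          /* hand work to the cooler core */
            printf("step %2d: core %d at %.0fC, switching to core %d\n",
                   step, active, temp[active], 1 - active);
            active = 1 - active;
        }
    }
    return 0;
}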

Into the third dimension
All of these ideas are already, to a greater or lesser degree, finding their way into existing microchips. For example, Philips, a Dutch electronics manufacturer, has already built a pager that uses asynchronous technology. Intel already sells chips capable of simultaneous multi-threading, and Sun's UltraSparc IV chip, launched in February, incorporates both multiple cores and multi-threading.

What next? One problem, says Dr Ebergen, is that as the performance of individual chips in a computer improves, the limiting factor on the system's overall performance becomes the "interconnect" between the chips. This currently consists of pins-little wires, essentially-protruding from the edge of the chip. Although the number of these pins has increased, it has not increased nearly as fast as the number of transistors. The result is that a chip with billions of transistors must make do with just a few hundred pins to communicate with its neighbours.
One solution that Sun is investigating is called "proximity communication". Rather than using wires to connect two chips, the chips would instead be placed very close together. A build-up of electric charge on one chip then induces a corresponding build-up on the other, by a process known as capacitive coupling. If Sun can work out all the kinks, this approach would increase the density of the interconnect, improving the overall system performance. Dr Tremblay says the technology may be available as soon as 2007, at least at the high end of the market.
Dr Weber, at AMD, sees another way forward. In the past, he says, the limiting factor on chip performance was the transistors themselves-getting them to be small and fast enough. But transistor design has progressed so much that, although further progress is certainly possible, the thing holding chips back today is not the transistors but the wires-the metal paths etched on to the chip-that electrons must travel along to get from one part of a chip to another.

One way of speeding things up would be to make chips in three dimensions, rather than on a flat plane as they are today. Dean McCarron, an industry analyst at Mercury Research, points out that this is already happening to some extent, since wires can cross over one another on a chip, though no actual transistors are stacked on top of each other. Even so, a true three-dimensional chip is still some way off, as the necessary etching technology is still in its infancy. Dr Weber acknowledges that there will also be design challenges to overcome. For example, heat dissipation from the interior of such a chip will be even more difficult to manage than with existing designs.

So does all this spell the end of Moore's Law? Its demise has, after all, been predicted many times in the past. There is no simple answer. On the one hand, it seems the law, at least as it relates to increases in transistor density, will continue to hold for some time. On the other hand, the law's significance is likely to diminish, as computer-buyers demand more than just speed from their machines-and chip designers tailor their wares accordingly.

And now, time for an Advil.
 
nice copy-paste....

It does seem that the industry is trying to move away from Moore's Law, given the trouble Intel is starting to see with constantly ramping clock speed to keep up with it. Plus, at the moment there is no need, no killer app, that really requires Moore's Law to continue.

Could you at least cite the source of the article, though?
 
I got it from a magazine at the doctor's office and I had to type it out!!! I went to their site to copy it, but I would have had to pay, so I typed it out. It was printed March 13, 2004, which is why I'm only posting it now!!

It's from the magazine The Economist.
Sorry about that, I just thought people would want to read this without paying for it, and to see what people would have to say about it. It made me think. In fact I'm still thinking about it, it's hard on the head.
 
Good read.

I really don't know what is going to happen to CPUs.

And I don't care, just as long as they can do things faster. :D
 
Well, maybe if I could try a 2.8C OC'd to 3.4, then I might not need any more for a while.

But I only have a 2.4, not even a C!!!!!
I've got a 2.8C on the way.

I think hard drives need to be running faster; CPUs are not the bottleneck here, IMO!
 
Mmm, they are getting a bit fast, but they also have a heat problem on the horizon... Good read, but I think the title is wrong; I would have called it "Tick Tock, No Faster Goes the Clock", as it is mainly talking about how clock speed is no longer going to be a big factor and companies are finding other ways of getting more speed out of their chips. But that does not mean the transistor count will stop going up. Dual-core CPUs will have a lot more transistors. It used to be every 18 months, but it is now 24 months. Transistor counts will keep going up, maybe not as fast, but they will keep going up. I am not too sure what will happen with clock speed; Intel seems to be doing what AMD is doing at the moment, more IPC and fewer clock cycles. Look at the P4-M.
 
Yes, heat is a big thing, but they will have to work on that soon. Once air can no longer handle stock cooling they will fix it; there is no way they will go beyond air for stock cooling. Once they have to put something like an SP-47 with a Tornado on as stock, then they will work on the heat, so I'm not worried about heat, because they're working on it. But we have seen the max of 800 FSB, 512, 2MB, 5k (I mean 5k!!! Come on, I was thinking along the lines of 400 min!!).

The GHz is up and the L3 is up; the 512 and the 5k have to jump up to 1MB and 512k.
 
Moore's so-called law is not dead. It encompasses much more than what some people think. Many people think it just refers to the doubling of transistors on a processor, which, due to process and economic considerations, may well die.

Many people have predicted the demise of this law due to technical difficulties at various times, but all have thus far been proven wrong. So it is, as usual, premature to assume that the law's death is imminent. Furthermore, many analysts are currently drawing their conclusions from Intel's problems with Prescott, but that means nothing; one manufacturer's problems with a particular method of achieving performance don't mean that another manufacturer will fail.

What is also important is that chip designers can do a lot more than just use raw GHz or mad increases in transistors; low-k, SOI and strained silicon already demonstrate this. In fact, simple methods such as widening or speeding up the frontside or processor bus have been used successfully too, while figuring out how to get to a new process or architecture. The various increases in bus speeds with both Intel and AMD chips have done this already and seemingly will continue for a while. Not to mention, adding cache has been particularly successful for Intel, except with the Prescott.

In the near future we will have 65nm processes (some are being made by IBM already) and multicore processors, so I am not convinced that Moore's Law will fall anytime in the next 2-3 years. CPU performance will continue to increase, just not necessarily via fantastic clock speeds or transistor counts.

My $0.02.
 
Things that I can see happening to processors for us:

Adoption of RISC and 64-bit computing (a la PowerPC and PowerPC-64)
Shorter pipelines and slower clock speeds == less power required, faster, cleaner execution and lower heat :cool:
SMP may catch on a bit more, but that's waaay far off.

My $1 :D
 
I really don't see how Moore's Law is a law. Meanwhile, the theory of continental drift is just that, a theory, yet we see the plates move around all the time. But Moore just observed that speed seemed to double every 18 months, and he gets a law for it.
 
Hence "so-called law" in my post. In reality it should be Moore's Observation, as even Moore's Rule might be a stretch. What likely happened is that after several years of holding firm despite predictions against it, it began to be called a law, and to this point it is yet to be 'disproved'.
 
Yes, it's not a law as such, as it has no scientific grounding as fact. His original observation (in the mid-60s) was that the number of components on an IC seemed to double on an almost yearly basis. This was later revised (mid-70s) to doubling every two years, came to cover only transistors, and became known as Moore's Law. Then in the late 70s he revised his theory again to incorporate device complexity, processor power and cost. For some reason Intel and others state that Moore said it was 18 months, but he has said on a number of occasions that this is incorrect; this may be due to their interpretation of all his papers, with his name merely added to it as recognition.
There have been various interpretations of his observation by different researchers, such as the doubling of processor power every 18 months, the doubling of computing power every 18 months, and the halving of the price of computing power every 18 months. However, each of these versions, including the transistor-count one, has been tested empirically and found to fail. At the end of the day this was never a natural law, rather an observation or rule of thumb which for a period of time did follow the predicted pattern, but which no longer does.
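
(A quick calculation, mine rather than the poster's, of how far apart the commonly quoted doubling periods drift over a decade:)

/* How the commonly quoted doubling periods diverge over ten years.
 * Compile: cc moore.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double years = 10.0;
    double periods[] = {1.0, 1.5, 2.0};    /* the 12-, 18- and 24-month versions */

    for (int i = 0; i < 3; i++) {
        double factor = pow(2.0, years / periods[i]);
        printf("doubling every %2.0f months -> roughly %4.0fx in 10 years\n",
               periods[i] * 12.0, factor);  /* 1024x, 102x and 32x respectively */
    }
    return 0;
}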
 
I don't believe in that Moore's Law, as OC detective said. It is just a wild guess and prediction with no logic to it at all.

However, I do believe that the point of diminishing returns paves the way for another revolution, and I also believe in Murphy's Law :D
 
Hey, I told you where I got it. Look, I'm not trying to be the bad guy; it's just something I typed up, and as you can see I'm not a pro, but I made up for it ASAP by posting where I got it from. I'm sorry if anyone is harmed by me typing out something I didn't think up, but I did post where it came from.

As for the ongoing topic:
This so-called law or observation, IMO, will be dead now that Prescott is out. I think it is all finally going to jump to something like multi-core or 64-bit, but I know that Intel will jump up and blow all of us and AMD out of the water, and our heads will spin.

This law will be belly up, but there is no reason to ho-hum about it, because we're not making them and we don't control the things they make.
The only reason I see it taking a big jump is the heat problems, the small amount of OC you can get out of the new chips (EE and Prescott), and that they're going to have to go to 64-bit and/or multi-core.

They're hitting a roadblock with the current 32-bit single-core chips; they have to make a new road, it's only a matter of time.

TIME FOR THE P5!!!! (in two years min.) :(
 