“Moore’s Law is maxing out.” This is an oft-made prediction in the computer industry, and the latest to chime in is an IBM Fellow. Intel co-founder Gordon Moore predicted in 1965 that the number of transistors on a microprocessor would double approximately every two years – a prediction that has proved remarkably resilient. But IBM Fellow Carl Anderson, who researches server computer design at IBM, claims the end of the Moore’s Law era is nigh, according to a report in EE Times.
Exponential growth in every industry eventually has to come to an end, according to Anderson, who cited railroads and speed increases in the aircraft industry, the report said.
I was so tempted to go and find every link from the last six years where Intel/IBM said “oh, it’s ending now,” only for some breakthrough to come along and keep it going.
It will end when it ends, and when it does we either need to start making programs that utilize what we have more efficiently (though processors are so powerful these days that it allows coders to be lazy) or we need to switch to quantum computing/laser HDDs/_______ (insert future tech here).
It sounds an awful lot like “She canno’ take no more, cap’n! She’s breaking up!”
In a way, Moore’s Law as it had been practiced for about 30 years ended in 2003. Up until that time a relatively simple formula was used to resize the features, the doping, and the voltages in a consistent way that allowed old designs to be easily cloned and made faster, smaller, lower power, better in every way. But when the formula got to the point where transistor gates were a few atoms thick, leakage power went through the roof, and scaling changed. You could still make feature sizes smaller and get more on the chip, so we still had Moore’s Law in one sense. But you had to either let power go through the roof or back off on speed, and more and more complex tricks had to be used to keep scaling going. And state-of-the-art fabs cost billions (yes, with a B) to build.
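For the curious, that “relatively simple formula” is classic constant-field (Dennard) scaling; here is a rough, textbook-style sketch of it (the factors below are the idealized ones, not numbers from any particular process):

# Idealized constant-field ("Dennard") scaling: shrink dimensions and
# voltage by a factor k and see what falls out. Textbook rules of thumb,
# not measured data from any real fab.
def dennard_scale(k):
    return {
        "feature size":   1 / k,      # gate length/width/oxide all shrink
        "supply voltage": 1 / k,      # Vdd (and ideally Vt) scale down too
        "gate delay":     1 / k,      # so clocks can run ~k times faster
        "area per gate":  1 / k**2,   # ~k^2 more transistors in the same area
        "power per gate": 1 / k**2,   # the C*V^2*f product shrinks
        "power density":  1.0,        # the chip as a whole stays coolable
    }

print(dennard_scale(1.4))  # one classic ~0.7x linear shrink per generation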
That’s why the processor vendors are selling multicores. No matter that we don’t really know how to program 1000-way parallelism, that’s what we’re getting. And even with the new tricks, atoms don’t scale. Sure, we can go 3D, but how are we going to cool a device like that?
We’ll see a couple more doublings, maybe more, but they will be much harder to achieve, and unless there’s really a market for the resulting chips, who’s going to bother?
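To put numbers on the 1000-way parallelism worry, here is a back-of-the-envelope Amdahl's-law sketch (my own illustration, with made-up serial fractions):

# Amdahl's law: even a tiny serial fraction caps the speedup you can get
# from piling on cores. The serial fractions below are purely illustrative.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.001, 0.01, 0.05):
    print(f"{s:.1%} serial -> {amdahl_speedup(s, 1000):.0f}x on 1000 cores")
# roughly 500x, 91x, and 20x respectively, far short of 1000x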
Moore’s Law.
I mean, why did they call it a law? A law is something that is real and proven; a theory is something unproven. I always thought it should be Moore’s Theory (or Moore’s Observation)…
I’m sure it was a little tongue in cheek, but anyway…
Your distinction between Law and Theory isn’t right. A theory is a systematic explanation of something which may have a body of evidence confirming it (if it’s an accepted theory it pretty much must have a solid grounding in evidence; a rejected or half-baked theory has incorrect evidence or no evidence whatsoever).
There is no such thing as a ‘law’ in scientific discourse, so the word ‘law’ is really a qualitative label that people put on theories or ideas they think are solid. Or the word can be used in another sense, as more of a rhetorical device. In this case, Moore’s Law is more of a rule of thumb than an actual theory.
From everything I’ve read and the interactions I’ve had with professors in the microelectronics field, designing a commercial chip is as much an art as a science. Given the competition in the field and the high degree of specialization of the various people involved in producing a chip, everyone looks for rules of thumb, industry consensus, generic simplified models, etc., to help figure out what to build, how to build it, and what design techniques are likely to give good results.
I don’t remember who said this, but one of the senior members of a chip company compared building a chip to Russian Roulette: “When you start building a chip, you pull the trigger… you find out five years down the line whether you’ve blown your head off.”
Hm, I think you’re wrong. Something that is unproven is a hypothesis, not a theory. So the so-called “Moore’s law” should really be “Moore’s hypothesis” ;P.
Newton’s law of gravity doesn’t deal with relativity and thus scales right past the speed of light.
That doesn’t make it worthless or incorrect; it just loses precision beyond a certain point.
Law isn’t a defined term in science and you are mixing up Hypothesis and Theory.
Disagree. Scientific theories are facts supported by mountains of evidence, sometimes over centuries (evolution, for example). To say something is a theory — in science — means it’s waiting to be disproven and no one has done it yet.
Yes, it should really be Moore’s rule of thumb.
How about Moore’s Opinion?
j/k I think Moore’s Theory is the right wording.
Once upon a time, one of my professors characterized a law as a hypothesis/theory/SWAG expressed as a mathematical statement. As such, Moore’s Law is a valid term.
http://en.wikipedia.org/wiki/Wirth's_law
I thought we lost it some time between the later Pentium 4s and Core? It seemed to me like it took Intel a long, long time to scale their P4s from 3 GHz to 3.6 GHz – a mere 20% improvement, with a resulting increase in heat.
It really has nothing directly to do with processor speed. Doubling transistors just usually meant more speed. Transistors are still being doubled approximately every two years. We are just getting more cores now instead of higher speeds. This doesn’t negate Moore’s Law as it says nothing about speed.
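To be concrete about what the law actually states, here is the arithmetic this relies on (the starting transistor count is an assumed, round figure, purely for illustration):

# Moore's Law is about transistor count, not clock speed: the count doubles
# roughly every two years. The 2008 starting figure is an assumed round number.
def transistors(start_count, years, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

count_2008 = 2e9  # assume ~2 billion transistors on a big 2008-era chip
for years in (2, 4, 10):
    print(f"+{years} years: ~{transistors(count_2008, years):.1e} transistors")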
As an actual chip guy, I have lived with the law (and retired from it too), so I’d say it is fair to call it a law, although it is no more a law than many other uses of the word. If we can have Murphy’s nonsense laws, and Wirth’s or Reiser’s law (the software reverse of Moore’s law), then we really are observing something that seems to hold quite well, up to a point.
Even in English law there are many bad uses of the term too; many petty acts with ridiculous consequences are still on the books from olde tymes!
It is worth reading up on the Nehalem chip to see what Intel really changed in the Core 2 architecture internals for the i7. The changes are not so much about processor architecture as about logic and circuit design. The older P4 and Core families used a domino type of circuit design that would stubbornly use more power than necessary and did not produce the expected speed improvements, mainly because an extra Vt worth of power supply was needed to keep the SRAM working with 6 transistors. The entire industry has used 6T RAM cells since the earliest days of discrete bistable circuits.

Intel had the guts to change that to 8 transistors, which allows one Vt to be cut from the power supply across the board, and now domino circuits give way to plain old symmetric CMOS circuits. While these are not faster, they scale much better and can use much less power. The consequence is that we are now mostly on a massively parallel processor path and much less on a speed path. As long as Intel can sell higher-level multicores and keep scaling the processes, Moore’s Law is in effect.
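For what it’s worth, the power argument falls straight out of the usual dynamic-power relation; the numbers below are invented just to show the shape of it, not Intel’s actual figures:

# Dynamic power goes roughly as activity * C * Vdd^2 * f, so shaving a
# threshold voltage (Vt) off the supply pays off quadratically.
# All values here are made up purely to illustrate the point above.
def dynamic_power(c_farads, vdd_volts, freq_hz, activity=0.2):
    return activity * c_farads * vdd_volts**2 * freq_hz

before = dynamic_power(1e-9, vdd_volts=1.1, freq_hz=3e9)
after = dynamic_power(1e-9, vdd_volts=1.1 - 0.3, freq_hz=3e9)  # drop ~one Vt
print(f"relative dynamic power: {after / before:.2f}")  # ~0.53, roughly half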
I totally agree with you.
GUIs and the like can usually be handled on one core easily, and the number-crunching stuff needs to go multi-core as fast as possible.
But that is in itself a problem. How many algorithms really scale well when you add cores?
Graphics, yes, but something like matrix inversion is hard to get good scaling for.
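As a toy illustration of the difference (my own hypothetical example, not from the thread): per-pixel work has no dependencies between elements, while a loop that carries a result from one step to the next stays serial no matter how many cores you have, which is the flavor of problem that makes elimination-style linear algebra hard to scale.

from multiprocessing import Pool

def shade(pixel):
    # independent per-pixel work: trivially parallel, graphics-style
    return (pixel * 1664525 + 1013904223) % 256

def chained(values):
    # each step needs the previous result: serial as written
    acc, out = 0, []
    for v in values:
        acc = (acc + v) % 256
        out.append(acc)
    return out

if __name__ == "__main__":
    pixels = list(range(1_000_000))
    with Pool() as pool:
        shaded = pool.map(shade, pixels)  # scales with added cores
    serial = chained(pixels)              # does not, as written
    print(shaded[:3], serial[:3])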
I’d like to read more about this. Would you kindly point me to any public papers or journal articles about Intel’s transition away from dynamic circuits?
Well, that’s easy: I lost the link, but google around for Nehalem, domino, 8T, static CMOS, etc. I recall reading the article on Tom’s Hardware when the i7 first came out.
Myself, I was in the market to build a quad-core OS X box, and the sales idiot in Microcenter kept pushing me to buy the latest and greatest Intel i7, saying it was 3x faster, etc., etc. As an engineer I don’t like that kind of dumb oversell, but I was curious to know why Intel had to change the whole thing again. The core processor architecture doesn’t change much, but they finally caught up with AMD by putting the memory interface onboard. The new design methodology changes the game to better support more low-power cores on chip, using wider, more parallel logic rather than deeper, faster logic. It is like a return from quasi nMOS-CMOS right back to real CMOS. I believe AMD has also favored the wider, slower, does-more kind of logic over the fewer, racier circuits Intel used, going back to the early Athlons, so Intel and AMD concur again.
Anyway, OS X runs fine on a regular quad core and isn’t ready for the i7 yet.