The CPU industry is working on 16nm chips to debut by around 2013, but how much smaller can it go? Not much, according to the smart guys: at 11nm they hit a problem relating to a ‘quanting tunneling’ phenomena. So what’s next? Yes, they can still add core after core, but this might reach a plato by around 2020.
AMD’s CTO predicts the ‘core wars’ will subside by 2020 (though there seems to be life left in adding cores, as Intel demonstrated a few days ago with the feasibility of a 1000-core processor). A Silicon.com feature discusses some potential technologies that could enhance or supersede silicon.
I know bandwidth is expensive though…
Power?
LOL, I blame it on the lack of coffee…
I see we’ve got Mr. Cambridge in the house.
Anyway… one language nazi less in the forum would be even nicer.
Relax. He obviously misinterpreted “Power” as an adjective (as did I when I first read the headline) and “Power Computing” as a subject or object of the sentence. In this interpretation, the headline does beg for a verb such as: “What Will IMPROVE Power Computing…” or “What Will Power Computing PROVIDE…”
It was an honest mistake and not a snarky grammar comment. Grammar checkers aren’t fun in public settings, and neither are the snarky, sarcastic name-callers. At least we didn’t have a grammar checker here.
I must confess that I didn’t even think of “power” as a verb and was even confused by the title… to the point of deciding to read all comments before eventually writing a comment about a missing verb if no one did. So yes, you are very right about the honest mistake.
Absolutely! I was scratching my head over the headline, which makes no sense whatsoever. There’s a difference between someone who makes a fuss over a minor error, and people who think something should make sense. This headline fails miserably.
I know… who needs proper grammar when writing articles. Right?
We will reach a Greek philosopher by 2020? Weird.
…Sorry, couldn’t resist.
EDIT: Actual contribution to the thread:
I wonder, would this memristor technology only be good for AI and the like, or would it be suitable for more general tasks, like desktop computing?
Gaaawd. Mr. Cambridge, Mr. Collins… the only one still missing is Mr. Roget. And I guess it won’t be long before he also makes a cameo.
But yes… you all probably meant it in a constructive-criticism kind of way.
More angry at myself because I proofread my articles about 20 times.
Anyway, I speak Afrikaans (i.e. Dutch 2.0).
More like Dutch 0.6.3a. Afrikaans sounds like… toddler talk to us Dutchmen :/.
NO NO NO Tom :-)
I know Dutch people refer to Afrikaans as the “Taaltjie”, but you won’t know what I mean until you speak it.
Hah, kind of. I was just making a bad joke. I couldn’t actually care less if it was spelled wrong. If you aren’t a native English speaker, the spelling of the word “plateau” probably seems pretty fucked up… And it is!
Unless you are a French speaker… in which case the English spelling is easy as it’s exactly the same.
The answer is yes.
Some time ago, if I remember well, they talked about using memristors in some kind of nonvolatile memory that’s much closer to the speed of RAM than the usual nonvolatile storage.
Desktop computing could obviously benefit from that: for activities that consume a low amount of power and require low screen refresh rates, like word processing and spreadsheets, we could imagine a Kindle-like machine with an e-ink screen, with its power permanently off except in the event of a keystroke.
A computer that lasts a week on battery while doing some actual work… Wouldn’t that be cool? ^^
Or we could imagine merging sleep and shutdown, having cold boot nearly as fast as resuming from sleep…
It’s quantum tunneling. Not the best article, sure, but do change the body of your summary (http://en.wikipedia.org/wiki/Quantum_tunnelling).
From what I gather, electrons, for instance, can in principle pass through any barrier, since the probability is always non-zero. Interesting…
Also, tunnelling only seems to significantly affect barriers of 3nm or less (other than the odd electron that might get through, in theory)…
I guess we will have leaky CPUs soon enough if we don’t come up with something else (which I’m sure we will).
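Just to put a rough number on that 3nm figure, here’s a quick back-of-the-envelope sketch in Python using the standard WKB-style estimate for tunneling through a rectangular barrier. The 1 eV barrier height is just a number I picked for illustration, not anything specific to a real gate oxide:

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.602176634e-19   # 1 electron-volt in joules

def tunneling_probability(barrier_width_nm, barrier_height_ev=1.0):
    """Rough WKB estimate T ~ exp(-2*kappa*L) for a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_height_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * barrier_width_nm * 1e-9)

for width in (3.0, 2.0, 1.0, 0.5):
    print(f"{width:>4} nm barrier -> T ~ {tunneling_probability(width):.1e}")
```

The probability is indeed never zero, but it falls off exponentially with barrier width, which is why it only really starts to bite once you get down to a few nanometres.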
I remember reading in BYTE magazine years ago that the maximum speed of a CD would be 10x. I remember thinking then how fast that would be…
Well, this is not the same as the CD-ROM issue, interestingly enough.
If you reach the maximum speed at which a CD-ROM drive can spin, the fight is over. There’s no way you can make a standardized optical data storage technology go any faster; you have to make a new optical storage medium with increased data storage density and thus reduce the need for a drive that spins fast. This new storage medium will be incompatible with existing drives, so its adoption rate will be quite slow.
With processors, on the other hand, once you’ve reached the speed limit of usual processors, all you have to do is put several of them in the same chip. This way, you can reliably claim that you have packed N times the usual processing power into that chip.
If normal people with tasks that don’t scale well across multicore chips start complaining that they don’t get N times the performance, you can then blame the software developers for reaching the limits of the algorithmic way of thinking of the human mind.
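That “you don’t get N times the performance” point is basically Amdahl’s law. A minimal sketch, using a purely illustrative split between serial and parallelizable work:

```python
def amdahl_speedup(cores, parallel_fraction):
    """Ideal speedup on `cores` cores when only `parallel_fraction`
    of the work can be spread across them (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# e.g. a task that is 80% parallelizable (made-up number for illustration)
for n in (2, 4, 8, 64, 1000):
    print(f"{n:>4} cores -> {amdahl_speedup(n, 0.80):.2f}x speedup")
```

Even with 1000 cores, the 80%-parallel task tops out at around 5x, so the “blame the developers” line only goes so far.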
Then, as N grows and CPUs can’t shrink any further, buses will increasingly become the bottleneck of computing performance. Issues with the speed-of-light limit and congestion in memory access will become more and more serious. So hardware manufacturers will adopt a decentralized memory model where cores don’t even share memory with each other, basically becoming independent computers except for inter-core IO. The amount of software which can’t scale well across multiple cores will grow even further. HW manufacturers will still be able to claim that they’ve reached a higher theoretical performance and that SW manufacturers are to blame for not being able to reach it.
Unless we find a new way to reach higher performance in normal software (not server software in the perfect situation where tasks are CPU-bound), its performance will stagnate. We’ll then have to learn again how to write lean code, or to design new programming models that work across the new hardware but imply totally different ways of thinking. Or we’ll create new CPU architectures which allow higher performance without improving silicon technology, but will take years to be widely used.
One thing is for sure: for the next decade, improvements in the performance of usual software won’t come from improvements in silicon technology. Actually, I think it’s a good thing.
@thavith_osn: As humankind, we only have “leaky” transistors. We never had any other kind. Thus, leakage has never been solved: it is due to the manufacturing process on one side, and on the other side to our understanding of quantum phenomena (not quanting phenomena, might I add, thus stating my original intent in pointing to the Wikipedia article, which in its own right is rather poorly written). The problem now is exacerbated by the power/heat you input/output for the retention you get/spread of information-entropy per bit per unit time. For example, for the next generation of FLASH cells you need a retention rate on the order of 1 electron/year. I could have put 1 year, 2 years, 10 years: with any modern VLSI it still sounds, let’s say, a jot absurd.
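To give a feel for how absurd that retention spec sounds, here’s a rough sketch of the average leakage current it implies per cell; “one electron per year” is of course the back-of-the-envelope figure from the comment above, not a datasheet number:

```python
ELECTRON_CHARGE = 1.602176634e-19  # coulombs
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def leakage_current(electrons_lost, years):
    """Average leakage current (in amperes) if a cell may lose
    `electrons_lost` electrons over `years` years."""
    return electrons_lost * ELECTRON_CHARGE / (years * SECONDS_PER_YEAR)

for years in (1, 2, 10):
    print(f"1 electron / {years:>2} yr -> {leakage_current(1, years):.1e} A per cell")
```

That works out to around 1e-27 A per cell, many orders of magnitude below anything you can measure directly, which is the point being made.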
One more thing worth noting: quantum phenomena are observed even at the 130nm node; there, however, one just doesn’t care. These phenomena do not magically disappear at different scales. Quantum phenomena are present in daily life, when one boils eggs or tries to walk through walls. Quantum phenomena, however, merely become improbable, as in the latter case.
@Neolander: I agree with you. But you have to ask yourself: what is the “speed limit” of processors? I hope you are referring to a current technological “speed limit”. But even so, why wouldn’t the same “lean” engineering be applied to hardware engineering, thus alleviating the “stagnation”?
Thus I arrive at my point (at last). My view is that the bifurcation of computational science demanded by the commodity-driven industry is the problem; sometimes it is less observable, sometimes it is more. I should note, however, that it is a natural bifurcation, an evolutionary one. It is a necessity stemming from the conceptualization of the creative process: language, grammar conceptualization per unit time for the successful creation of ye working thing that could be purchased. It is far easier to do it on “a sheet of paper” rather than on a couple of million transistors specially designed for a purpose. M?
But nonetheless, it is an evolutionary process, our technological advance. It cannot stagnate. It only stagnates when it is anthropomorphized in the context of the “global economy”.
If we can’t shrink transistors due to the tunnel effect, nor make chips bigger due to electric currents (or light in recent designs from Intel) having a finite propagation speed, we’ll reach a maximal amount of transistors per independent processor.
If we continue to put transistors in the same manner inside processors, we’ll hence reach a speed limit.
If we put these transistors together differently or use them more efficiently, for example by switching to a “leaner” processor architecture as I mentioned, we can reach higher speeds. But that’s not due to improvements in transistor technology, in the way we cut silicon, or things like that. It’s a more abstract kind of progress.
Not sure I understand this part, and it looks cut off in the middle (“M?”). Can you please try to explain it differently?
Actually, progress as a whole (although not as fast as Moore’s law predicts) continues, due to decreasing transistor sizes and some system-level techniques like integration, supply voltage gating, etc.
Moore’s law additionally dictates that the speed of single transistors should grow and their power consumption should fall, but that has not been possible since the 130~90nm nodes (~7 years ago), and only a small amount of progress has been made since then.
This limit has nothing to do with minimum sizes or quantum effects – it’s simply due to the fact that we are no longer able to decrease the supply voltage, because of the physics of the transistors themselves. The subthreshold slope of MOS transistors is around 100mV per decade of current (and will never be better than the 60mV/decade of BJTs).
So if you want a 5-decade span between the Ion and Ioff currents (the first for high switching speed, the second for low leakage), a 0.25V margin to compensate for process variability, and another 0.25V for putting the transistor into its saturation/linear range (for Ion), you’ll find that you need a gate driving voltage (and thus a supply voltage) of around 1V. There is no way to decrease this voltage without cutting corners, that is, compromising on either switching speed or leakage power.
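The arithmetic behind that ~1V figure can be written out as a small sketch; the 100mV/decade slope, the 5-decade Ion/Ioff span and the two 0.25V margins are the numbers from the comment above:

```python
def min_supply_voltage(decades_ion_ioff, subthreshold_slope_mv,
                       variability_margin_v, overdrive_margin_v):
    """Rough minimum supply voltage: the gate swing needed to cover the
    Ion/Ioff span at the given subthreshold slope, plus the two margins."""
    swing_v = decades_ion_ioff * subthreshold_slope_mv / 1000.0
    return swing_v + variability_margin_v + overdrive_margin_v

# today's MOSFETs (~100 mV/decade) vs. the 60 mV/decade BJT-like limit
print("MOS (100 mV/dec):", min_supply_voltage(5, 100, 0.25, 0.25), "V")  # ~1.0 V
print("Limit (60 mV/dec):", min_supply_voltage(5, 60, 0.25, 0.25), "V")  # ~0.8 V
```

Even at the theoretical 60mV/decade slope you only claw back a couple of hundred millivolts, which is why supply voltages have been stuck around 1V.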
Development in the last 5 years has been heading in the direction of higher quantity rather than quality. Making a 2x faster CPU costs 8x more power? Well, let’s just make 2 CPUs, flood them with cache memory, add whatever on-chip peripherals we can think of, etc. That’s not as good as Moore’s law, but it is still pushing things forward.
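The “2x faster costs 8x more power” rule of thumb comes from dynamic power scaling roughly as C·V²·f, with the supply voltage having to rise roughly in proportion to the target frequency. A minimal sketch of that reasoning (the linear voltage/frequency proportionality is the usual approximation, not a precise model):

```python
def relative_dynamic_power(freq_ratio):
    """Relative dynamic power P ~ C * V^2 * f, assuming the supply
    voltage must scale roughly linearly with the clock frequency."""
    voltage_ratio = freq_ratio
    return voltage_ratio ** 2 * freq_ratio

print("1 CPU at 2x clock ->", relative_dynamic_power(2.0), "x power")      # ~8x
print("2 CPUs at 1x clock ->", 2 * relative_dynamic_power(1.0), "x power")  # ~2x
```

Which is exactly why doubling the core count looks so much cheaper, power-wise, than doubling the clock.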
How long ago did you last contribute, Croc?
The last time you submitted something, it was some FUD about RIM’s tablet.
All of these super-smart geeks in the OSnews forums, but real submissions are few and far between.
Making snarky comments from the sidelines is so much more fun, I guess.
It is indeed! Especially when people bite and start going back and forth, quibbling over nothing worthwhile… On the other hand, there are also some good posts that do stimulate the imagination, or at least our thought processes in general.
Eh? I’m cripplingly busy. Writing for OSAlert is an occasional thing, when I can afford the time. The staff are supposed to edit and fill in gaps; we would rather give preference to user submissions than have to be the only ones being snarky.