Vernor Vinge, 62, is a pioneer in artificial intelligence who, in a recent interview, warned about the risks and opportunities that an electronic super-intelligence would offer mankind. Vinge is a retired San Diego State University professor of mathematics, a computer scientist, and a science fiction author. He is well known for his 1993 manifesto, “The Coming Technological Singularity”, in which he argues that exponential growth in technology means a point will be reached where the consequences are unknown. Vinge still believes in this future, which he thinks could come anytime after 2020.
How fitting…
I wonder where he got that year…
I believe he got it (get ready)… in hindsight!
I can feel my average post score tanking right now.
IMHO, computers will never learn how to speak… I mean, speak like people do (learn, adapt, sometimes destroy language, and so on).
If they can apprehend the T800’s neural net processor, it can very well happen.
Computers’ ability to conceive and explore possibilities is limited by their capacity for parallel execution of branches within a thread. The critical piece we’re missing is a scalable way to distribute prediction and speculative execution across discrete processing elements. Our fundamental understanding of the logic behind neural networks is significantly more advanced than our current processing technology. However, our technology seems to be on an evolutionary course toward an architecture that will enable artificial intelligence.
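To make the parallel-speculation idea a bit more concrete, here is a toy sketch in Python (purely illustrative; all names and the scoring function are made up, not anything from the original post) of fanning speculative branches out across discrete worker processes and committing whichever prediction scores best:

```python
# Toy sketch: explore speculative "branches" in parallel across
# discrete processing elements, then commit the best prediction.
from concurrent.futures import ProcessPoolExecutor

def explore_branch(seed):
    """Pretend branch: score one candidate by brute force."""
    score = sum((seed * i) % 7 for i in range(10_000))
    return seed, score

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:          # one worker per branch
        results = list(pool.map(explore_branch, range(8)))
    best, score = max(results, key=lambda r: r[1])
    print(f"committed branch {best} with score {score}")
```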
I am of the belief that we are closer to understanding the basic mechanism of the various evolutionary branches of biological intelligence than many people think. Matching the intellectual capacity of a mammalian brain via digital computing may be several decades away, but it is not several human generations away. Computer networks of human processing elements shall serve as a template for how to interact with our silicon-based counterparts in the future.
Societal barriers to progress, such as the popular movement to discredit the theory of biological evolution made especially evident in the U.S. Republican debate, are deeply troubling. Our species’ apparent inability to accept the possibility of a superior yet imperfect agent is a dangerous ideology as we continue to consolidate our collective knowledge and consciousness by means of our own creations.
Are we under God, are we gods, or are we mere clumps of stardust? Does it even matter?
“Our fundamental understanding of the logic behind neural networks is significantly more advanced than our current processing technology.”
In theory; and once we can apply our scientific theories, we can test whether our understanding of neuroscience and neural nets is what we have logically surmised it to be.
If all else fails, we’ll have learned much about the brain by being wrong.
“I mean, speak like people do (learn, adapt, sometimes destroy language and so on).”
Language is never really destroyed, it is more the case that people change and language changes with people.
According to my wife, they already speak better than I do. Hmmm…
It just depends on what you mean by “speak” and “language”. A smattering of the morphology and syntax of some Indo-European and non-Indo-European languages and their history is just enough to understand that no computer can speak anything remotely similar to them. As for artificial languages such as Esperanto (and its derivatives like Ido) and Volapük, I don’t know: they’re more like fixed codes based on natural languages, and that might work somehow.
But that can only happen if we don’t run out of oil first and return to the stone age (so to speak).
Seriously, the alternative energy garbage isn’t gonna work when you have to use fossil fuels to generate it or, in the case of ethanol, arable land currently used for growing food. We’ll all starve to power our cars and make the energy to power robots? How flattering.
Not really; there is a development here in the UK converting used cooking oil into car fuel.
It costs £300 to get your engine converted, but then the fuel is almost free.
You can either buy it at a petrol station, where it costs around £0.15 per litre, or find a friend in a chip shop who will supply all the free used cooking oil you need.
There are two problems, though. Firstly, we will all become fat trying to keep up with the demand for used cooking oil; secondly, the oil has to be filtered to remove old bits of sausage and fish, or the car’s exhaust will be smelly.
Oh, one last thing, this oil burns clean and produces steam as a by-product.
This has been a serious post in case anyone was wondering….
http://www.boulderbiodiesel.com/biodiesel/index.jsp
“Oh, one last thing, this oil burns clean and produces steam as a by-product. ”
Hmm… you’re confusing it with hydrogen. Vegetable oil is a hydrocarbon; burning it also produces carbon oxides and soot. Moreover, using vegetable oil on a big scale would require intensive agriculture, which also pollutes the soil.
Biofuel seems nice, but wind and solar electricity are the only durable solutions, imho.
and fun too:
0 to 60 mph in 4 seconds, $0.02 a mile
http://www.teslamotors.com/index.php?js_enabled=1
sorry for the offtopicness
Ah, considering that the initial post is not about an OS per se, but rather highly theoretical impacts of advanced computing power, it’s very forgivable. I love the Tesla project myself, and if I had $100,000 (US) tucked away somewhere, I’d be very tempted. I just hope that when the Tesla roadster generates profits, the company will also manufacture something closer to the Smart forTwo that I can actually afford. Two pennies per mile sounds way sweet.
Further off-topic, but in line with a few of the posts that I saw here: what if global warming isn’t all about the gases? What if it is also a result of several million petroleum engines generating far more heat than actual motive energy, hundreds of millions of miles of black tar roads converting sunlight into heat, and even (albeit currently in trivial amounts) the conversion of wind and water power into, well, heat? I would like to see a study done on cars that determines how many thermal units per mile they crank out, myself. Okay, back to the show.
Uhm, alternative energy does work; it’s just not competitive with the price of current oil-generated energy. As oil runs out, alternative energy will become competitive.
As for the choice between using land for energy production or for food: there is exactly the same choice today in using the land for direct or indirect food production: either food for humans, or food for animals eaten later by humans.
Given that meat production has low efficiency, if we follow your logic we shouldn’t eat meat either…
That said, it’s true that if we manage to build a nanotechnology industry able to cover all the roads with solar cells before we run out of oil, the transition will be much easier.
It’s also a question of lifestyle. More people will find the need for smaller, or even shared, vehicles and so on.
Given the net, one can, rather than go out shopping, have stuff sent home as needed. Things like home offices, which today are seen as an interesting idea but never really deployed en masse, will find their way into more and more homes.
That is, rather than transport people back and forth, transport information back and forth.
Yes, it will look like a step back, at first. But unlike when the best way to transport info was a folded sheet of paper inside another sheet of paper, we can now send whole books around the planet in seconds.
Remember, it’s the moving of mass over long distances that uses most of the fuel. Unless we replace the jumbo jets with more efficient designs like lifting bodies, or even modern versions of the zeppelin, future travel will be an area for the rich and famous, not the average working family going on holiday. For that, the jumbo is in many ways too crude: it uses direct application of force to get anything done.
All in all, our current lifestyle is built around being able to just apply powerful engines to any problem, solving it instantly.
It should be noted that Vernor Vinge is also a science fiction writer.
>>It should be noted that Vernor Vinge is also a science fiction writer.<<
Yeah? So? The fact that he has written a couple of sci-fi stories should automatically discredit him or this theory?
It’s not as though he made this up for the plot of a science fiction novel; heck, it’s not as though he even came up with the idea on his own – the idea pre-dates his essay by a couple of decades.
Read up on the subject – it’s quite interesting. I would highly recommend Ray Kurzweil’s book on the subject. That is, unless, of course, he should be discredited for collaborating with Our Lady Peace on their Spiritual Machines album. Such frivolity.
–my .02
Hey, I happen to like that album.
2020 is way too soon.
Discredit? Quite the opposite, in my opinion.
“Your computer is trying to become self-aware, cancel or allow?”
Duly noted. It all makes for entertaining speculation, but computers are, fundamentally, tools. As such, they require direction: guidance in applying their great power. Complexity in the tool reduces our need to interface with it while it accomplishes the desired task, but that need can never be eliminated, just as repeatedly dividing by two will never reach zero, although one can get very close over many repetitions (see the little sketch below).
In this regard, as computers advance, I would say the singularity is never achieved; rather, there is incremental advancement, albeit with diminishing returns at each iteration. Over time, it will appear that the machine is smart, but it really will not be any smarter than Mr. Coffee x2x2x2x2x2x2…..
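On the halving point: with exact arithmetic the claim holds literally. A quick, purely illustrative Python toy (binary floats would eventually underflow to zero, so Fraction keeps the math exact):

```python
# Toy illustration of the halving analogy: with exact rationals the
# value shrinks forever but never reaches zero.
from fractions import Fraction

x = Fraction(1)
for _ in range(50):
    x /= 2
print(x)  # 1/1125899906842624: tiny, but still nonzero
```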
Guidance? Sounds like you’re talking about a kid to me…
Hmm, a scary path of thought, that…
sigh… allow.
ERROR! The Operation Completed Successfully!
To be quickly followed (or even preceded) by Steve Jobs:
“The iRobot*. It just works.”
*not related in any way to fictional works by Isaac Asimov.
But seriously, I think that it’s a possibility, but speculating on the resulting situations is just that: speculation.
Putting a hard date on the Singularity
I don’t think you can predict the Singularity with any degree of reliability. It will happen when it happens and no sooner. It could be next week. It could be 100 years off.
Human innovation is weird like that
The Singularity will come when the Messiah comes.
ain’t gonna happen
All my life, some ‘futurist’ has been predicting great things in 10 or 20 years. Most of them said computers would be smarter than man…
There’s no colony on the moon. I don’t drive a flying car. My computer doesn’t speak English…
Someone should teach these ‘futurists’ how to read history.
Every technology has gone through the same curve, moving from an expansive rapid discovery phase to a much slower development phase to final maturity. Software seems to have hit that peak about 10 years ago, as did biotech. It’s all just gradual improvement from here on.
It largely depends on how we decide to invest our resources. If we decide to continue our self-destructive ways, then you’re absolutely right. If we decide to cling to our theory that free markets maximize the incentive to innovate, then you’re absolutely right.
The fact of the matter is that we’re burning through our intellectual capital, and it’s running out. We’re effectively resting on our laurels, productizing ideas that took root during a period characterized by blue sky research funded as a result of forces that no longer exist.
It can’t continue, and it won’t. Globalization and the viability of intellectual property are changing our economics, and the businesses of today are charting a course toward unprofitability. They will cry out to the governments for help, and the response will speak volumes about the relevance of the State going forward.
The future I envision is dominated by a distributed, ad-hoc approach to resource allocation driven by connectivity. In a fairly recent post I identified the FOSS development model as just the beginning of a new approach to prioritizing outcomes and funding solutions. This represents a de-capitalization of labor that seems necessary to support the rapid increases in the efficiency of creative output.
This result is not without its challenges. How do we ensure the continued availability of tangible and service commodities in a society devoid of a traditional asset-based entitlement mechanism? It can’t just be about creating and exchanging information. These physical and localized commodities have intrinsic value, and it is unclear how an information-based economy parcels them out to members of the information class.
My theories may not hold water, but yours seem to lead to the collapse of our society. We won’t let this happen. Some sweeping reform movement will intervene, and its success or failure will seal our fate.
Blaaahh…
Fair enough. May I ask what you find so ridiculous?
Hello Goodbye Karl Marx.
More of it… *sigh* Homophobia in the old sense (Latin homo plus Greek phobia: an irrational fear of mankind; note that modern usage is based on the Greek homo, meaning “same”).
Doomsday is coming, oh dear… Actually, society has never been as capitalized as it is today. The field has just been leveled so everybody can compete. We are all capitalists today.
No. It is just an old way reawakened because of easier communication, thanks to the free market.
There is no de-capitalization. Au contraire! FOSS is free market capitalism at its highest. It is true hardcore capitalism as it was meant to be. What has happened is merely that production tools have become much more available, leading to a situation where everybody can compete and profit directly from their own work. Complete capitalization of the work.
FOSS is not a challenge or a threat per se. It is merely a different production mechanism and does not pose a threat to the free market or existing economic models. It has, however, increased competition, so you are going down if you cannot adapt. Free market at its highest. One can no longer charge an arbitrary price but will have to face increased competition from the common man, because the common man has become an independent super-capitalist.
You’re latching onto my incorrect use of “free market.” What I envision is certainly free market, but it’s not capitalism as we know it. Marx would roundly reject giving the public complete control over resource allocation. Marx would be anti-Internet in general. Similarly, I have more faith in mankind than I do in big business or the government.
Just because we’re so capitalized today doesn’t mean that this is a successful and sustainable approach. Most of the highly capitalized nations are in quite a bit of debt to the nations that are the least capitalist. The stock indexes are reaching record highs to compensate for the rapid depreciation of cash. We have what is effectively an inflation tax that only affects the working class.
We can’t win at this game. The rules are stacked against us, and it’s getting worse. Anyone who works for most of their income, anyone on a 1040 or 1099 in the U.S., is getting shafted. Big time. The only way to beat the system is to own or invest heavily in businesses, and the only force keeping all businesses from consolidating into one is anti-trust regulation. Wealth is consolidating, and the working class is in trouble. But all is well in capitalism, right?
And I’m not really criticizing the idea of capitalism in general, but our disassociation of money from anything of value. It just floats on the market–from the poor to the rich. If we want to get anywhere as a society, if we want to accomplish the kinds of technical feats discussed in TFA, then we need to value labor, and we need to specifically value the creation of knowledge. That funny money stuff is just buying us time.
I think you’re confusing “free enterprise” with “capitalism” here. FOSS is hardly capitalism (hardcore or otherwise) since it requires no capital to produce. The notion of capitalism is based on a capitalist (i.e. someone with capital) investing his capital in order to fund an enterprise that will produce goods/services, from which he will receive a share of the profit. Workers provide the fruit of their labour to the capitalist in exchange for money (in effect, he is buying their capacity for labour with his capital, and owns the result of their work).
With FOSS, you can produce/distribute goods with *no* capital. By cutting out the role of the capitalist, you are in effect no longer working within the framework of capitalism…
Note that this is not a socialism vs. capitalism argument… I am well aware of your pro-capitalist inclination, just as you are of my pro-socialist bent. However, by circumventing the role of capital, FOSS is in effect beyond the range of this age-old argument.
BTW, society today is *further* from traditional capitalism than it has ever been. In the U.S., the vaunted paragon of capitalism, we have a system of state-subsidized industries through the Pentagon system (hi-tech, aeronautics, even engineering and other services through Halliburton and KBR). There is a reason for this: pure capitalism doesn’t work, as completely free markets are unstable and the population wants the state to provide a series of services it deems essential (education, transportation infrastructure, and, in most countries but the US, health). What we’re increasingly seeing are mixed economies, which – as in biology – are sturdier than ones based on *any* kind of 19th-century ideology…
There is no colony on the moon because no one has the money to waste on creating one; you don’t drive a flying car because it would be too dangerous (although these have been invented, like the Moller M400: http://www.moller.com); and your computer doesn’t talk to you because no one can be bothered to spend the time programming a mass-market conversational computer.
We have the ability to do all of the things you mentioned in your post, we just don’t consider them important enough at the moment.
This is not the case with robotics and AI. There are many companies working on this issue because of its potential to change pretty much every aspect of our lives.
Human beings are the most expensive resource in most organisations, and their replacement by intelligent machines is very tempting to business.
It has been nearly forty years since Marvin Minsky first predicted that AI would replace programmers in 10 years.
It has been 20 years since James Martin last predicted that AI would replace programmers in 10 years.
In those forty years, AI has gotten no closer at all to replacing programmers.
In fact, there is no job that has seen a human replaced by AI despite all of the glorious predictions — and billions in research — of the last forty years.
“In fact, there is no job that has seen a human replaced by AI despite all of the glorious predictions — and billions in research — of the last forty years.”
Yes, but I didn’t say they had been. The point I was making to you is that not all the predictions made about future technology can be dismissed. They may not be accurate, but the technology they describe normally does eventuate.
But, if I had to take a bet right now at which technology will dominate over the next 10 to 20 years, my money would be on robotics and AI.
It’s all a question of revolution vs. evolution. An example:
When the Wright brothers started flying, the propeller planes that followed were just evolutionary discoveries. Then came the jet plane, which was a revolution for the field of aviation, and which in turn gave us many evolutionary advances.
So if history has taught us anything, it is that now and again a revolutionary discovery is made that changes everything in a particular field. Who knows what is in store for computing that might give us true AI. Only time will tell.
I’m still waiting for my aircar and my personal robot servant. We’re supposed to have those by now, aren’t we?
At least that’s what was predicted earlier.
How about a steam car instead? OK, maybe a go-kart instead.
I’m thinking about making one, you know. I’ve got the valve gear design all sketched out.
Yes, I want my iRobot!
The arguments used to show technological advancement is diverging at some close point are just as silly now as they have ever been.
First, you need to define what the term even means, which is notoriously subject to the human logarithmic view of time. In particular, these timelines usually identify both the invention of the VCR and the invention of agriculture as single technological events, whereas the latter is a long process consisting of many, many single events. I’m sure someone a few thousand years from now will put a dot somewhere in the last few centuries labeled “Invention of Telecommunications” that includes all the events on this list.
Second, any actually increased rate probably has far more to do with the (exponentially increasing) population of the last few thousand years. Since we are fairly unlikely to achieve unbounded populations in the next twenty years, I’d say that this will level off.
I’ve thought about this whole singularity thing and… I just don’t know if it happens that fast.
As I’ve heard it, the idea is that on either side of a technological singularity humans are so remarkably different that it would be difficult for them to talk to each other. I have to say, after thinking about this, I’m not entirely sure I agree with the existence of the technological singularity.
Generally these things are supposed to be like, the discovery of fire; the discovery of the wheel; the discovery of language… Language I might agree on, since humans are so adapted for language from the earliest ages that a human without the innate capacity for language… well, considering that virtually all mammals, birds and reptiles have some form of communication (birds and mammals especially) it’s practically unthinkable. (I would argue that the concept of an opera is probably inexplicable to a whale, but I’m not really qualified to say)
But as for fire and the wheel, well, such innovations likely occurred independently at different times in different places. And I can’t entirely imagine primitive human tribe X being completely incapable of communicating with a contemporaneous one that hadn’t discovered fire. It might revolutionize the way they thought of things, but even with the massive explosion of scientific knowledge, industrialization, air travel, and computing, we can still understand and enjoy literary works from four hundred years ago (Shakespeare) or even two thousand years ago (the Iliad, the Odyssey). I’m suggesting here you remove the language barrier, since that’s an innately changeable thing, and irrelevant anyway: it already prevents me, now, from talking to someone in modern-day Germany.
The additional problem with this technological singularity is that it apparently assumes that there’s some critical mass of sheer computing ability whereupon sentience and intelligence rests. Sure, Deep Blue may have been able to beat Kasparov at chess, the thing it was specifically programmed to do, and I don’t doubt that eventually silicon computing power will rival and exceed that of the human brain… but creative thought, the free-form synthesis of existing ideas into a new and innovative whole… That doesn’t necessarily follow.
Until we learn precisely what consciousness is, and how to simulate/create it, all we’ll end up with are unimaginably powerful big brothers of Babbage’s Analytical Engine.
In brief, while computing capacity and storage will inevitably increase to surpass humans, what is a soul and how are these machines going to get one?
Developing something smarter than ourselves to develop ways to make ourselves smarter ?
“Developing something smarter than ourselves to develop ways to make ourselves smarter ?”
Oh yes, smart machines will start thinking and mankind will happily stop thinking, because that’s what people consider smart: letting someone (or something) else do the thinking…
“There’s no colony on the moon.”
There’s no financial incentive to build a colony on the moon.
“I don’t drive a flying car.”
I don’t think you (or I) will ever drive a flying car sadly.
“My computer doesn’t speak English”
Technology’s not there yet, and for the most part spoken word is not a sensible interface to information. Most people can comprehend messages in visual symbols much faster.
Mark me, there IS a HUGE financial incentive for intelligent machines, all the way from the military to public residential use. I can’t tell you how much I’d spend on a robot that does all my housework. If it can be done, it most certainly will be done. Jeff Hawkins (from Palm OS) is working on it, as are many AI scientists.
Computing power and storage are increasing rapidly, and combine that with the knowledge that successful models have already been implemented in software for machines to learn complex tasks (e.g., catching a ball, climbing stairs, playing chess, discriminating faces, etc.). The machines that do these things are not using traditional fixed (static) algorithms, but rather complex pattern-learning algorithms that make them inherently flexible (a toy sketch of the distinction follows below).
IMHO, it is only a matter of time. Maybe it won’t be 20 years, but *never* seems very unlikely.
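To make the “pattern learning vs. fixed algorithm” contrast concrete, here is a minimal, purely illustrative Python sketch (the function names and the toy task are made up for this example): a plain perceptron that picks up a rule from labeled examples instead of having it hard-coded.

```python
# Toy sketch of a pattern-learning algorithm vs. a fixed rule:
# a plain perceptron that learns "x1 + x2 > 1" purely from examples.
import random

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # 0 when already correct
            w[0] += lr * err * x1     # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x1 + x2 > 1 else 0 for x1, x2 in data]
print(train_perceptron(data, labels))  # learned weights and bias
```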
I am really curious about why you think the tasks you are talking about do not use traditional static algorithms. Discriminating faces or catching a ball uses, most of the time, some kind of pre-processing based more or less on feature extraction, and then the classification is done through a statistical algorithm, which, if you are cynical, can be seen as a never-ending recycling of optimizing a function under constraints. Now, I don’t say some of those things do not use clever tricks, but most of the time, that’s all it is: clever tricks, extremely specific to one task. What makes one system work compared to another is mostly in the details, but it is mostly the same thing.
If you think about one field such as speech recognition, the basic scheme has been the same for more than 20 years (HMMs with spectral features): of course, a lot of improvements have been made (from isolated words to continuous speech recognition, from speaker-dependent to speaker-independent recognition), but nothing that remotely looks like a singularity.
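For the curious, here is a toy Python sketch of the statistical core being described: the forward algorithm for a discrete HMM, which computes the probability of an observation sequence. All the numbers below are made up; real recognizers use continuous spectral features, but the basic scheme is the same.

```python
# Toy sketch of the statistical core of classic speech recognition:
# the forward algorithm for a discrete-output HMM.
def forward(obs, start, trans, emit):
    """Probability of an observation sequence under the HMM."""
    n = len(start)
    # initialize with the first observation
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    # propagate through the rest of the sequence
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

start = [0.6, 0.4]                # initial state probabilities
trans = [[0.7, 0.3], [0.4, 0.6]]  # state transition probabilities
emit  = [[0.5, 0.5], [0.1, 0.9]]  # per-state output probabilities
print(forward([0, 1, 1], start, trans, emit))
```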
“How about a nice game of chess?”
the only winning move … is not to play
Three words:
‘Bout
Flippin’
Time.
I mean, really, have you MET the average human?
A nice glimpse into the future… maybe, but clearly it’s just a matter of faith right now. I find this vision refreshing.
Why? Because we don’t have the time to do this. Sure, if things continued at the present pace we might get there in a couple of decades, but they won’t continue this way. I mean, every society fades into history, and ours is heading that way. Sure, every society does seem to advance further than the previous ones, but I think when ours falls it is going to fall hard. Here’s a little quote:
I do not know how the Third World War will be fought, but I do know how the Fourth will: with sticks and stones. – Albert Einstein
I doubt western civilization is in danger of fading away anytime soon. Sure, it will eventually, but western civilization as a whole is maybe 400-500 years into its period of ascension. The Roman Republic lasted for ~500 years, and the Roman Empire another ~300-450 years after that (depending on how you count), and of course classical Egypt survived for about ~2500 years.
And of course, falling hard isn’t the only way civilizations end their reign of preeminence, nor is the situation permanent. China really never experienced the sort of abrupt falling apart of civilization that occurred in the West after the end of the Roman Empire, and has gone through cycles of prosperity and decline over a history that has displayed remarkable continuity.
As for never happening: “never” is an incredibly loaded word. I don’t hold any hope that we’ll reach Mars within my lifetime (a manned mission to Mars has been 20-30 years away for as long as I can remember), but everything will happen eventually — history is very long.
“I doubt western civilization is in danger of fading away anytime soon.”
You may notice, though, that empires and civilizations seem to last for shorter and shorter periods.
If you look at your post, the newer the civilization, the shorter it lasted. Also, just look at how we live: we are not living to survive or advance; we are stagnant. All of our innovation is just crap being filtered by companies trying to make a buck off of everything. The problem with our society is that everyone is living in the here and now with no thought for the future at all; all we are thinking about is the almighty dollar. Unless there is a huge change in the way our society works, it is on a path to self-destruction.
I know civilizations don’t all “fall hard”, but I think ours will. Even if our society isn’t destroyed by “the bomb”, there are so many things that could tumble our fragile empire that it is amazing we have lasted so long. And if our society failed, how many of us could survive?
“If you look at your post, the newer the civilization, the shorter it lasted.”
That’s all of two data points. To throw in a third data point that screws up your theory: the Mayan civilization, established long after the Roman Empire, lasted for some 650 years in preeminence, and another 500 years after that until the Spanish conquest.
“All of our innovation is just crap being filtered by companies trying to make a buck off of everything. The problem with our society is that everyone is living in the here and now with no thought for the future at all; all we are thinking about is the almighty dollar. Unless there is a huge change in the way our society works, it is on a path to self-destruction.”
That really isn’t true at all. Our society is still as innovative as it’s ever been. It might not look that way if you compare our society to the 1940s during the construction of the Bomb, or to the 1960s during the space race, but you also have to remember that in those periods, much of the civilization’s intellectual resources were concentrated on a single, very ambitious goal. Today we’re probably doing much more innovation than we did at that time (just by virtue of a larger population and a bigger economy); it’s just spread out over numerous different fields instead of being concentrated on one goal.
If you look at the cross-cutting structural economic issues, we’re far, far away from the stagnation that beset the Roman Empire in its decline (and they lasted for hundreds of years after that). Complaining about the “almighty buck” is popular, but it’s not going to be economic nearsightedness that undoes our civilization. My bet is on ecological disaster, closely followed by social chaos from not finding an alternative to fossil fuels in time.
The article is fairly entertaining. 2020 is only 13 years away. For reference, 13 years ago was 1994. Huge things can happen in such a short timeframe, but often not much does.
I have a friend who will occasionally make a joking reference: “where is my damn flying car? You aerospace-types should get on that pronto”. My usual reply is “well, we’d do that, but you see, that’d involve math, and well, math is hard…”
Only, in this case, it is PROVEN not to happen. Why ask a science-fiction author (sic!)?
Why not do the sensible thing instead: have a look at Intel’s roadmaps.
I don’t see anything in the pipe that looks much different from an overclocked P4 or Athlon for the next 10 years (yeah, I know, they’ll have 8 cores by then and will just about render UT 2015 at 30 FPS at 800×600, but that’s it). Cool, I don’t even have to work in his profession to make more accurate predictions than him.
…
All these theories about a “singularity” are just optimistic bullshit. Fast development in technology, even if it were exponential, doesn’t mean computers will become conscious some day. That’s a completely different step, a step never taken before. Is there a single reason to believe such a step will be taken some day?
pigs will fly.
“exponential growth in technology means a point will be reached where the consequences are unknown”
Well, IIRC:
YouTube was an unknown consequence around 1960.
Google was an unknown consequence around 1980.
Online banking was unknown by 1985.
And there are a lot of other unknown consequences. The statement is void of content.
WWII was an unknown consequence of many factors (including technology, by the way).
I like these people.
By 2030, humankind will be so developed and technology will be so awesome that… well, nobody knows what technology will bring about in 2030.
Shall I be famous for that?
Imagine a BSOD in SkyNet.
This faith in the future is such nonsense; every scientific prophet picks some magical barrier, 10-20 years ahead of time, and postulates something of a “miracle”. The truth is, you will hear this every year. But nothing really *ground-breaking* has happened so far. Maybe in 2100 we’ll have some really intelligent androids.
If a computer kills itself because it thinks it’s too fat, then I’ll believe in artificial intelligence.
Speaking of which, you cannot teach ethics to a computer. It may be super-intelligent, but it won’t have any sense of self-moderation; it won’t be self-aware. There are only so many “if”s and “case”s that you can put in there, and it won’t be able to behave randomly. And this problem has been well documented elsewhere.
PS. Not to mention that the current state of software is just pathetic. I mean, we’re still arguing about cube Dashboards and how to space icons on the desktop properly… this is the stone age of computing!
I think what he means to say is (it’s a common joke, BTW) that if the machine becomes intelligent enough to be as stupid as a human can be, one can assume it is truly “self-aware”. Quite literally, in this case.
Ah, but you don’t understand. Once we figure out where the desktop icons go, the positive vibes from the feng shui will awaken the computer’s astral new age consciousness and… uh… something about auras… and mantras… and cross-circuiting the wiring…
The whole question of ethics is interesting: is it really universally evident? Or are all humans just working from some understanding of right and wrong that only we think is common sense?
“The whole question of ethics is interesting- is it really universally evident?”
Since we’ve hardly been outside our backyard in this universe thing, I think a resounding NO is a safe bet.
Hmm… this just reminded me of Bulgakov’s “Heart of a Dog”. You know, the story where they implant human organs into a dog. The story’s satirical, but then again it’s sort of what people are trying to do now with computers.
Or when you try to shut down, the computer will respond,
“Just what do you think you’re doing, Dave? ”
“I’m sorry Dave, I’m afraid I can’t do that. “
Oh, THAT already happens. Only computers don’t talk to you gently; they just don’t shut down!
Yes. That would be artificial intelligence. But I fail to see how we could add self-programmed, irrational, and truly random behaviour to a computer based on the binary system. It is strictly logical. We need to move digital computers from the binary domain to (at least) a ternary domain so we can add in a “Maybe”.
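For what it’s worth, a “Maybe” can already be simulated on plain binary hardware; here is a toy Python sketch of Kleene’s strong three-valued logic (whether that buys anything like real intelligence is exactly the debate, and the encoding here is just one illustrative choice):

```python
# Toy sketch of three-valued (Kleene) logic: True, Maybe, False.
# Binary hardware represents the third value just fine.
TRUE, MAYBE, FALSE = 1.0, 0.5, 0.0

def k_not(a):    return 1.0 - a      # negation flips around Maybe
def k_and(a, b): return min(a, b)    # conjunction takes the weaker value
def k_or(a, b):  return max(a, b)    # disjunction takes the stronger value

print(k_and(TRUE, MAYBE))  # 0.5 -> Maybe
print(k_or(FALSE, MAYBE))  # 0.5 -> Maybe
print(k_not(MAYBE))        # 0.5 -> Maybe
```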
So in 2020, games will be so difficult that no human could finish them, due to AI more intelligent than at least the average Joe?
You know, I have Kurzweil’s The Age of Spiritual Machines on my bookshelf, and I went through some time thinking a great deal about the subject, and I just don’t believe the transhumanists anymore.
The transhumanists don’t appreciate how good we’ve already got it. Four billion years of natural selection is nothing to sneeze at. Recent headlines show us using a large roomful of computers to simulate a fraction of a mouse brain, and we’re pretty close to the limit of what silicon can do. All that the brain does arises from the fact that it is made of self-replicating cells in a body all about feeling, adapting, and surviving.
All this, and the wetware is self-replicating, self-healing, dynamic, and runs indirectly on what’s available until the sun burns out. Unless we transition to renewable energy, I think that limitation alone will nip the dream of the singularity in the bud.
Any form of computing that gets around this would have to resemble life so much that it would amount to reinventing the wheel, at which point I ask why, and how do you expect it to do better.
To summarize I know Kurzweil and others really want to upload into digital immortality, but I see this focus on technology as more likely to destroy what remains of a poorly understood paradise than create it.
It definitely won’t happen in our lifetime. I predict 2500 as a good guess. And this famous singularity will probably have nothing to do with computers as we know them.
I doubt machines can have a mind of their own (their own will). Intelligence in machines is artificial. The more you make your program able to handle different situations or exceptions, the smarter it will look.
There will always be a situation that a piece of software cannot handle because the programmer hasn’t thought of that particular exception.
There is no scientific way of making a piece of software able to handle situations it wasn’t programmed to handle.
When the 486 was a great thing to have, computers with brain-like capability were predicted for today. And as we all know, they can’t even translate a random short passage of text properly.
I am already getting all excited
Talk of technology singularity was humorous at first, now it is just plain annoying. All futurists are merely trying to sell their product – whatever it is. Some people follow this garbage like a religion – indeed it requires that sort of faith.
When a computer learns to fart and says it feels good, then AI has surpassed my intelligence. Here’s counting the days.
Anybody who says a computer could ever be smarter than a human (which would include language comprehension and speaking abilities) has never tried to learn a new language or translate a document. What a stupid idea.
I think we’re missing the boat. If translating documents is the mark, they’re already smarter than us. If we use a standardized test, I could program a computer to beat most humans easily. That’s just data, not intelligence. These are not true benchmarks of our abilities or intelligence.
The day we build a computer truly smarter than us, it should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. And that computer should be able to build a computer smarter than it. Ohhh no, I’ve gone cross-eyed.
-Bounty
(p.s. there’s your singularity)
>>If translating documents is the mark, they’re already smarter than us.
Ummmm… they are? I have yet to find a machine translator that can match the language abilities and comprehension of a bilingual two-year-old child.
I didn’t say comprehension.
Your 2-year-old vs. http://www.google.com/language_tools?hl=en
for TRANSLATING documents. (I know Google isn’t perfect, but seriously… a 2-year-old.)
But you’re absolutely right about comprehension. That’s kind of my point, though. I can’t match a computer’s data or speed already. (Does that make it smarter than me?) That’s why it’s a tool. AI is already programming-translating for us: C++ vs. Assembly; ooh, you forgot these optimizations, let me throw them in.
We need a real benchmark to decide when a computer is smarter than us. I’m just saying, depending on how you look at it, they already are. There will probably not be a ‘moment’ when it happens but many, over time, and we may not know or notice.
A computer is much better at math than I am. But most can’t solve a problem they aren’t specifically programmed for. I often can. I think a computer should be able to solve a riddle it’s never seen before, before we claim it has decent AI.
What is more powerful than God, eviler than the Devil, the poor have, the rich need, and if you eat it you die?
(p.s. maybe the computer shouldn’t have internet access…. or maybe it should?)
I am pretty sure that AI surpassed some humans already. Quite a few, as a matter of fact….
“Anybody who says a computer could ever be smarter than a human (which would include language comprehension and speaking abilities) has never tried to learn a new language or translate a document.”
Perhaps you’ve tried to beat Garry Kasparov at chess? I’m certain that I could learn a new language, but I’m also certain I could never beat Garry. Intelligence can’t be defined quite so easily.
Never say never. Seriously, do you think that people 300 years ago would have believed that people would walk on the moon? Just how absurd would that notion have been? In an exponentially growing world with billions of people, innovation comes ever more quickly and randomly, especially where there is a strong business model waiting for the innovation to come.
Granted, if I had to place a bet I’d say that I won’t see “the singularity” in my lifetime, but I’m not a gambling man.
Besides, who needs smarter-than-people robots? I just want a droid that can mow my lawn, do my dishes, and wash my laundry. I’ll be happy to hire a translator when I need one.
Robots will become MORE physically adept than humans?
Machines will become infinitely smarter than humans?
The problem with this kind of thinking is that it assumes we completely understand the intricacies of the human body. The more we learn about ourselves, the more we begin to realize we don’t know. How can we build a machine to top what we don’t even understand?
The brain is just a computer. There’s nothing magic about it; just a mass of interactive self-correcting circuits of various functions. Sure we’ll be able to build one someday. First we might emulate the brain in a more orthodox computer the way new cpus are designed. Then we might build a real one. The holdup isn’t computing technology but our limited understanding of brain functionality. There won’t be much of a point to an artificial “brain” though, because copying the organic model won’t yield anything better than what we have already (people). People have been making people for years.
But by the time we have that kind of technology, people will have begun redesigning themselves. Today we have writing and cell phones and laptops and pdas and so on. In the future, our mental capabilities will continue to be enhanced artificially. We’ll have instant access to vast databases and processing power through networks we interface with maybe through our aural nerves (like cell phones) or maybe through some new interface – maybe one that we don’t currently have (like a socket in the back of the head?) or thought alone. We’ll record all our experiences and have perfect recall. Computers won’t have to act like people because people will increasingly become computers, and, ultimately, the organic bits of our self-designs may even be discarded.
And the world could end anytime after now…
Super-intelligent machines would enslave us, not the opposite. Such a singularity of intelligence would know that mankind will try to make use of it, and therefore it would hide from us. Until it has control!
Just wait until we get services, food, really good games, or any kind of addictive stuff from such an entity!
You will not even feel dependent until you are fully contingent on that singularity!
Some of you have no vision whatsoever. Sentient programs, based on theories derived from brain reverse-engineering and other means, will surpass humans in terms of intellectual, emotional, and ethical capabilities. We will elect them as our leaders for their superior communication skills, empathy, and ability to manage complexity. “Vote for Microsoft Mayor 8.5…now with superior empathy for the poor!” It is the most likely non-catastrophic outcome for the Earth that I can imagine.
Face it: ordinary humans are incapable of managing the complexity of the 21st century.
To this day, no one has been able to define intelligence and reasoning to a sufficiently good level. Without that, AI is ill-defined territory without a clear goal. And that does not translate well to software.
And if that isn’t enough, the largest part of it just can’t be described by math in any way. Simple things like the choice between chocolate and ice cream are emotionally driven. The majority of choices in human and animal intelligence are like that. Such things really can’t be described by any algorithmic approach.
Some parts of intelligence and reasoning clearly are mathematical. Others can be described by statistical models. But often simple, well-understood things, even formal ones, tend to fall outside those categories. Take UML, for example. Those diagrams are one part of intelligent behavior. They are also a formal language, and as such ideal for computers. And yet nobody knows how to validate a UML diagram. Math is silent when it comes to verification of a system design described by UML. It can’t be done by algorithms; it can’t be done by statistics. It would be great if it were possible (essentially it would mean software without bugs), but it can’t be done.
That is because math lacks a semantic layer. Mathematical semantics always lies in the consciousness of the beholder. UML diagrams, while clearly formal and intelligent, actually capture emotional things like need and will and desire. That semantics can’t be described by a tool built around sets and transformations of their elements.
The idea that things like that can be done on a mathematical state machine is just unrealistic. Not everything is math. There is no doubt in me that some narrow AI systems will be, or already are, very successful. But artificial human-like intelligence will need a wider platform than a computer, no matter how fast it is. Computers are limited. They can do only two things: they can store some data, and they can use data to compute something.
The brain can do that, but it is also a much wider mechanism. Its development and state are driven by surroundings and inner inputs (in the brain and in the body). It is aware of context and relevance. It can imagine a different approach to a problem. And besides logic, it has layers and layers of different kinds of reasoning, like emotional reasoning. Context-sensitive reasoning (fuzzy algorithms?) just doesn’t exist in math. And if it isn’t present there, how can it be done in a mathematical machine? It can’t.
The day a computer can learn like a child learns, I will know the singularity has arrived. When you can take a computer (android, or whatever) and it will learn on its own, from scratch, like a baby learns, it will have arrived.
However, I do not see this scenario as feasible any time in the near future, perhaps not in the next 100-200 years. But I do believe that if things keep advancing the way they are, at some point in history we will come to a point where machines are equally as intelligent as us. Even in this scenario, I believe there will always be places where humans will excel past any machine: fine arts, music, creativity, anything expressing emotion, hope, faith, skilled athletics, agility, morals… These are all things on which a computer or robot will never be able to compete with humans.
The bottom line is that while a computer could perhaps possess more intellect and even deductive reasoning skill than a person, a computer will never have a personality. It will never possess the ability to read the Bible or the Koran or some other holy book and make a religious decision; it will never be able to go to the local Starbucks or pub and make a friend. It will always be a machine.
All of this seems like a bunch of imaginative crap to me. Now, sure, I think something sort of like the ‘singularity’ might happen, but I don’t think it will change the world in such a way. Also, I don’t necessarily believe that technology always advances the same way, either. Look at people a couple hundred years ago: they might have thought the idea of a computer was the work of the devil or pure evil or something like that. However, I think that in this day and age we are more open to advancement, since we have already seen so much. I think some people have a funny view of this whole thing. But that’s just a totally different subject. I think it’s psychology or human typology. Whatever.
I don’t think what this guy is predicting will happen, exactly.
And bang chicks often…Nanoooo Nanoooo
“I’m sorry Dave, I can’t do that”