Artificial intelligence is still a long way from delivering the human intelligence in robot form that has long been common in science fiction.
Apart from “super AI”, scifi also envisioned colonies on the moon in the 20th century; or flying cars / aircraft from “our” times: http://goo.gl/9TLhg (a Wikipedia Unicode URL, tends to work weird). We can even build them – basically just take a Harrier, remove the wings and canopy – still a horrible idea vs. “boring” reality: http://commons.wikimedia.org/wiki/File:Ryanair_Boeing_737-800_appro… It also envisioned video calls as the mode of distant communication (while, in fact, we largely went “back” to text).
OTOH it didn’t really envision the ubiquity of computers, mobile phones, or digital capture and storage of images and audio.
Or the difference between Rosey the Robot and the Roomba.
Maybe works of popular fiction tend to be no better than background noise at predicting the future, at least as far as what’s commonly depicted in them goes.
Besides, that “human intelligence in robot form” is far older than it seems – the golem, for example, is one of the old tricks of myths and fairy tales. These are tools of storytelling; sort of cargo cults overall, modern mythologies really – in them we always wished for something silly to be true, often naively extrapolating “known” things or observed rates of progress (like those aircraft above, envisioned during a time of rapid advances in marine tech; or the “spaceplanes” of ~1940s scifi, during rapid advances in airplane tech – worse, possibly inspiring some later dead-end projects, large and expensive enough to suck funding away from more sensible paths).
A closer term for scifi would probably be tech fantasy… after all, there’s usually not much place for science in it (as in depicting an actual scientific process, or having a minimum of respect for the conclusions it has already reached about our world).
Overall, we sort of had this topic not a long time ago… http://www.osnews.com/comments/26004
PS. And perhaps our universe already shows us that “thinking machines” are at least unlikely, maybe even impractical. After all, something like this should have an insane evolutionary advantage, hence it would probably have shown up and taken over already – if not within our biosphere (obviously not, for now), then at least within the likely billions of other biospheres in the universe, spreading and massively transforming it, enough to be visible (possibly even reaching and ~consuming us by now? ;p )
The world already has thinking machines or even replicators: they’re called “civilisation” or “life” …it’s an open question if very much more efficient ones are feasible.
Edited 2012-06-23 21:18 UTC
It will arrive when the first machine replies to its master by itself – “not now, I have a ‘chipache'” – out of pure laziness.
Edited 2012-06-23 21:24 UTC
I’ve studied artificial intelligence at university, but neither I nor my teacher, nor my colleagues, nor any of the authors we studied had the slightest idea how or when the first true artificial intelligence will be produced. Right now it’s science fiction. Computer science can’t predict when, or even if, artificial intelligence may become real.
It wouldn’t surprise me if some smart guy produced a proof showing that true artificial intelligence is not possible in practice. I have a feeling that might be the case. I can’t prove it, but right now nobody has any idea of a computer model of a “real” or “true” AI. More than that, there are actual proofs that most models and AI algorithms in research today won’t materialize into a “real” AI.
And Turing isn’t right. For him, if during a Q&A session someone can’t tell whether an answer was given by a human or a machine, the machine may be considered “intelligent”. So far no machine has passed the Turing test, but even if some machine passes it in the future, it won’t be an indication of its intelligence but an indication of the passion and art of its programmers.
Unless there is some sort of unknown magic juice that allows for bio-intelligence (humans), it should be mathematically provable that intelligence is the result of a chaotic system of massively parallel signal interconnects (that is what the human brain is).
See The Emperor’s New Mind or Shadows of the Mind by Roger Penrose. He argues that a computer as we know it cannot achieve consciousness for Quantum Mechanical reasons. Brains and computers are not the same.
What Penrose proposes is highly speculative at best – there’s no evidence for his hypothesis, and even some serious evidence against it (starting with the wildly different scales of neural processing and of whatever quantum effects are likely possible in the brain).
Besides, intelligence != consciousness; it’s also not even really clear whether the latter (together with free will) actually exists in the first place.
And nobody above really argued that brains and present computers are alike – quite the contrary. But they most likely ultimately can be, by virtue of working in and exploiting the same physical reality.
BTW, in the last few decades tons of pseudoscience invokes quantum mechanics to support the ~”mystical” sphere / subjects.
If your statement is “Brains and Computers we have today are not the same” then I would agree.
If you are saying “Brains and all computing devices that humans can build will never be the same” then you are wrong.
A brain is simply a massively parallel mesh network of signaling nodes. At their most basic level, the nodes pass digital signals using chemical vectors. Unless you can show that it is impossible to construct such a computer outside the reproductive process of humans, your position is unsupportable.
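The “mesh network of signaling nodes” picture can be sketched in a few lines of code. This is a deliberately crude toy, not a neuroscience model: thresholds, decay rates and weights below are all arbitrary illustrative numbers, but the structure (nodes accumulating input and firing all-or-nothing pulses at each other) is the idea the comment describes.

```python
import random

class Node:
    """A toy signaling node: fires a digital (all-or-nothing) pulse
    when its accumulated input crosses a threshold."""
    def __init__(self, threshold=1.0, decay=0.5):
        self.threshold = threshold
        self.decay = decay          # unspent signal leaks away each step
        self.potential = 0.0
        self.links = []             # (target, weight) pairs

    def connect(self, other, weight):
        self.links.append((other, weight))

    def receive(self, amount):
        self.potential += amount

    def step(self):
        fired = self.potential >= self.threshold
        if fired:
            # the "chemical vector" is abstracted down to a numeric weight
            for target, weight in self.links:
                target.receive(weight)
            self.potential = 0.0
        else:
            self.potential *= self.decay
        return fired

# Wire up a small random mesh and inject one external stimulus.
random.seed(0)
nodes = [Node() for _ in range(10)]
for n in nodes:
    for m in random.sample(nodes, 3):
        if m is not n:
            n.connect(m, random.uniform(0.3, 0.9))

nodes[0].receive(1.5)                  # external stimulus
for tick in range(5):
    fired = [i for i, n in enumerate(nodes) if n.step()]
    print("tick", tick, "fired:", fired)
```

Activity cascades through the mesh for a few ticks and then dies out or reverberates, depending on the random wiring – which is about as far as a toy this size can take the analogy.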
It has already been done by IBM.
http://articles.cnn.com/2011-08-18/tech/ibm.brain.chip_1_experiment…
But the OP’s point is: intelligence != (not equal) consciousness, let alone free will. If you don’t believe in free will, just consider that you have choices to make every now and then and are “conscious” of those choices. An AI cannot think of itself as conscious; it can be programmed to act as if it were, but it is stupidity to think that it is conscious.
(quasiquote (While allanregistos’s cannot think of himself as conscious, though he can be programmed to act as one, it is stupidity to think that he is conscious.))
There, I fixed your statement. The only reason I have for believing that other humans are conscious is that they act conscious. Therefore, if a robot or computer acts conscious, I will assume that it is conscious.
You don’t have free will or consciousness; you’re only wired to stupidly think you do… prove this wrong.
And/or: you were forced into this world without your consent, and the most reliable predictor of your life is when/where/to whom you were born and raised – oh, and if one would want to opt out… well, that just happens to be a very bad thing under the scheme which participated in wiring you, forbidden under threat of eternal punishment.
(that IBM chip is just an early effort on a looong road of future efforts, nothing “done” about it)
Edited 2012-07-01 00:12 UTC
I have the impression that it could happen within 1000 years from now, which would be, from an evolutionary standpoint, extremely fast.
Of course, all sorts of things could happen first – say, a global ecological catastrophe and complete exhaustion of natural resources that would duly send our descendants 1000 years backwards.
Evolution can occur very quickly over a short time span, or very slowly over a lengthy one. Generally, the more complex the organism, the longer it takes – but that has been shown not to always be the rule.
What defines intelligence is still highly debated, but if we can replicate the tools which produce intelligence – and I believe eventually we will – I see no reason why intelligence can’t be manufactured, or at least encouraged by our influence.
I believe that once “we” have a mature and thorough understanding of what life is and how it’s created in nature, we’ll realize it’s far less mysterious – possibly the product of a `simple` equation rather than divinity.
Researchers have found that proteomics plays a huge role in how fast new evolutionary traits express themselves.
In a stable environment, the proteins responsible for maintaining a stable expression of genes are able to keep up with the system; under stress, however, this falls apart and all sorts of stuff happens. The shorter the reproductive cycle of an organism, the sooner these modifications to gene expression can show up – and it can happen in just a few generations under the right conditions.
Okay, your first assignment would be to explain, in more technical detail than what Wikipedia or any ‘pedia:
http://en.wikipedia.org/wiki/Prenatal_development#Fertilization
can tell you. You could also explain, based on these technical details, why a unique human being will become homosexual in her/his later life.
True AI is not possible according to the data. Only AI is possible, but not true AI, where true AI = a conscious human brain. While man may eventually develop a human/brain-like machine, it won’t evolve into a conscious machine being (although AI developers can program the machine to act as one).
Whatever its exact mechanisms (and not yet having a clear picture of them – or of prenatal development in general – doesn’t change anything), homosexuality appears to be evolutionarily selected for, to a degree – for example, sisters of homosexual men are statistically more fertile (WRT homosexual women… well, let’s be honest, they’d be more or less forced into childbearing anyway, with any such traits hardly selected either way – perhaps why women are supposedly a bit more “undetermined”).
But you just prefer to make up “data” or “evidence” (in other nearby posts), wishing them into existence out of thin air (or, rather, from your ~Christian mythology, it seems – which BTW at one point had a problem even with the moons of Jupiter – but that’s not the kind of fiction under discussion)…
…while there appears to be nothing in our understanding of the universe that would block “true AI” (an understanding brought, BTW, by ~science, not mythologies – in fact, what you can be pretty damn sure of is that the mythologies concede more and more; virtually everything was once explained by myths and divine intervention, and virtually everything they claimed turns out wrong over time, gradually chipped away; no reason to go back now).
At the very least, we should be able to take a “brute force” approach of running a full human brain in software… and over time streamlining it, optimising, modifying – until, from some point on, it won’t be human any more. It will be more.
Edited 2012-07-01 00:16 UTC
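The “brute force” approach above invites a back-of-envelope estimate. All the figures below are ballpark assumptions (the ~86 billion neuron count is a commonly cited estimate; synapse counts, firing rates and the cost per synaptic event are order-of-magnitude guesses), so the result should be read as “petascale territory”, not a precise requirement:

```python
# Rough back-of-envelope for simulating a whole brain in software.
# Every figure here is an assumed ballpark, not a measurement.
neurons             = 86e9   # commonly cited human neuron count
synapses_per_neuron = 1e4    # order-of-magnitude estimate
avg_firing_rate_hz  = 1      # conservative average spike rate

# Assume one synaptic event costs ~10 floating point operations.
flops_per_event = 10

events_per_sec = neurons * synapses_per_neuron * avg_firing_rate_hz
required_flops = events_per_sec * flops_per_event

# Lands around 1e16 FLOP/s: petascale, i.e. roughly the top
# supercomputers of the early 2010s running flat out.
print(f"~{required_flops:.1e} FLOP/s")
```

Crude as it is, this kind of estimate is why the brute-force route looked merely expensive rather than impossible, even then.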
What are you talking about? I keep mine in my garage, next to my flying car.
… if it is not equipped with a “moral code” or something that resembles a conscience. Sure, you have Asimov’s Laws of Robotics, but I don’t believe three rules are enough to accommodate the decision-making process of a thinking computer.
Being able to think and to organize in societies implies some sort of “moral code”. Even relatively primitive animals have it.
It will likely be different from the human “moral code”, but provided that machines have some interest in keeping us around (whether for safety, as an effort to “preserve an environment”, or simply as farm stock), that doesn’t automatically mean our extinction. Just as we didn’t kill off all species of animals.
Yeah, about that… we’re already most likely the ones behind an ongoing extinction event, which will be one of the most rapid in the geological record – biologists estimate extinction rates at least hundreds of times higher than the background level, with a large part of all species gone by the end of this century.
(and what would be really “funny”… http://en.wikipedia.org/wiki/Medea_hypothesis )
So, unless we’re tasty… or cute & cuddly – but I somehow doubt AI will share our values here (after all, those values seem to approximate… a typical juvenile primate; possibly a vestigial form of our biologically determined parenting instincts).
Hey, please stop right there. It is not a machine’s “moral code” but the developer’s moral code injected into that machine. There is no such thing as a machine’s own consciousness or free will, now or in the future, according to the evidence of computer science.
Please remove the silly modern myth.
In my mind, we’ve already had artificial intelligence for many years now. Everything from everyday automatic sliding doors to computer fingerprint analysis to artificial aircraft pilots is really in the realm of “artificial intelligence” – that is, intelligence of artificial origin.
Some technology may already be more intelligent than average humans, especially within speciality domains.
I think the reasons people are disappointed with AI today are threefold:
1. It’s artificial.
This may seem dubious, but many people don’t consider computers intelligent BECAUSE their intelligence was programmed by a human. They want to see intelligence from a self-learning computer. And I think we’re starting to see more progress on that as computers get more powerful.
2. It’s virtual.
A computer game can clearly exhibit intelligence, but it feels less real because it’s on screen. Developers are accustomed to abstracting concepts, and I believe we, as developers, can appreciate abstracted intelligence more than a typical person can. If the exact same intelligence found in sophisticated AI could be projected into the real world, it would suddenly feel less “artificial”.
3. It’s not conscious.
The role of consciousness as it relates to AI is poorly understood. The truth is we don’t know if any AI can truly be conscious. Sure, it could act as though it were conscious – it may even have learned how to act conscious by learning to emulate humans on its own – but even then I’d have trouble overlooking the fact that it’s just a bunch of sophisticated deterministic algorithms. It can’t “feel” anything, can it?
On the other hand, if an alien creature came to earth and claimed to be conscious, most of us wouldn’t even second-guess that – but how would we know it wasn’t lying? If it was sufficiently intelligent, it could easily fool any of us into believing it was conscious.
Perhaps this is what people are looking for with AI, an intelligence that can fool us into believing a computer is conscious.
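The “self-learning computer” hoped for in point 1 has a classic minimal illustration: a perceptron that is never told the rule it should compute, only shown labeled examples, and adjusts its own weights until its behaviour matches them. The training data and hyperparameters below are arbitrary choices for the demo:

```python
# A minimal case of "learning rather than being told": a perceptron
# discovers the logical AND rule purely from labeled examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights, initially知 nothing -- start at zero
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

# Four examples of AND; nowhere is the rule itself written down.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

It is a sixty-year-old technique and about as far from consciousness as code gets, which rather supports the point: “learned by itself” and “impressive to watch” are separate axes.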
Edit:
How exactly would it be possible for an AI to prove its own consciousness? Sceptics like myself would always point to the code and say that it’s *emulating* consciousness without *being* conscious.
Edited 2012-06-24 05:59 UTC
The computer characters in a game act dumber than dogs. Being on screen has nothing to do with it. Actors in a movie don’t come across as dumb.
When a computer can learn a new language the way a 0-6 year old child does; when your computer understands what you are doing; when a computer can drastically improve over time without a programmer – that is when you can call it intelligent, IMO.
You need to look at the bigger picture. Nothing improves over time in isolation. Despite the fact that a lot of our intelligent machinery is coded in DNA (a biological computer program), we still require massive amounts of input from our parents, school teachers, books, society, etc. All these external factors continually program our brains over many years. Otherwise, how do you tell the difference between good and evil, or courage and cowardice?
Complex AI is definitely possible, but will need a lot of research, development and evolution before it can overtake human intelligence.
I think you have nailed the idea that a man’s moral code is subjective (he alone can choose what is good) without any external influences or religious moral law. This argument mostly comes from atheists when they formulate arguments against Christianity. Just do an experiment: isolate a human being (without any external influences), then put him in a situation where a baby is rammed by a vehicle, and let’s see what he/she will do.
Our morality comes from being primates, social animals – from the evolutionary advantages it gave (up to a point…). Its roots can easily be found, at the very least (but not only), in chimpanzees, for example.
Also, later, from evolution of societies – moderately stable (~”moral”) large ones were more successful, able to absorb or annihilate others.
But that you would even think about such “experiment” shows what your morality is worth…
So virtually all primates cannot be called intelligent? (Likewise cetaceans, cats, dogs, on through the many tool-making and tool-using birds, “down” to octopuses or even the swarm intelligence of some insects.)
It’s a spectrum. And humans find it very hard to “drastically improve over time without a programmer” – there are enough cases of severely neglected children and the effects of that, even some feral children (while most such stories are made up, there are a few rigorously documented ones, for example: http://en.wikipedia.org/wiki/Genie_(feral_child) ).
IQ and fertility rate are inversely correlated… which might also be because of the greater focus each individual child can get when there are fewer of them.
Well, when you mix some good (at being “realistic”) deathmatch FPS bots with average human players, immediately telling them apart is not always so easy.
I am simply saying what I would expect from an intelligent AI. I am not talking about other animals but I think birds act smarter than the smartest computer characters(and birds aren’t that smart).
But why do you have such insanely high criteria for AI? Again, not even humans entirely measure up to all three of those*… NVM intelligent(?) animals.
* Plus, the first criterion arbitrarily demands that an AI follow the same process of ~knowledge acquisition as humans – missing how the whole point of AI is the inexpensive mass production and distribution of it.
WRT the 2nd – doesn’t a mobile phone with automatic switching of situation-based profiles understand what you’re doing?… (within its area of expertise)
Edited 2012-06-24 16:36 UTC
Fergy,
“I am simply saying what I would expect from an intelligent AI. I am not talking about other animals but I think birds act smarter than the smartest computer characters(and birds aren’t that smart).”
I think you’re underestimating how accurately computers can simulate things – even to the point where you couldn’t differentiate between the real and artificial intelligences. But the problem is that the computer lacks a natural physical form, and that’s a dead giveaway for the AI. Normal people aren’t accustomed to abstracting intelligent actions from their physical actors, but once you get used to doing that, as we often do in CS, you’ll realise that most AIs are actually within reach.
Unfortunately, technology isn’t at a state where we can conceal supercomputers and their energy source within a natural body. While that’s surely a disappointment to enthusiasts, the opposite is theoretically possible: taking real animal brains and wiring them up to a virtual, albeit limited, environment. You could end up with real animals and AI animals interacting together, never suspecting that the other is different. We might even set up a scenario where a real animal has AI offspring, or vice versa.
Edited 2012-06-24 17:35 UTC
I would like to see a company that makes AI. Game companies could license the AI. You could tune the AI with parameters like: wants to live, scared, passive, language, hungry, etc. This would make games far less predictable but still very much enjoyable. Games like Deus Ex and Hitman are about multiple choices but are heavily scripted. This is because the AI company that I mentioned does not exist.
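The “tunable parameters” idea maps fairly directly onto utility-based AI, a common game technique: each candidate action gets a score from the agent’s personality weights, and the agent simply takes the best-scoring one. Everything below – the action names, the drive names, and every number – is made up for illustration:

```python
# Hypothetical sketch of a "licensable, tunable game AI": behaviour
# emerges from scoring candidate actions against personality parameters
# instead of a hand-written script.
def choose_action(personality):
    # How strongly each action appeals to each drive (illustrative numbers).
    appeal = {
        "flee":   {"want_to_live": 1.0, "scared": 1.0},
        "attack": {"aggressive": 1.0, "hungry": 0.3},
        "forage": {"hungry": 1.0, "passive": 0.5},
        "idle":   {"passive": 1.0},
    }

    def score(action):
        # Drives the agent lacks simply contribute nothing.
        return sum(personality.get(trait, 0.0) * weight
                   for trait, weight in appeal[action].items())

    return max(appeal, key=score)

# Three differently tuned agents sharing the exact same code:
coward  = {"want_to_live": 0.9, "scared": 0.8, "passive": 0.3, "hungry": 0.2}
brute   = {"aggressive": 0.9, "hungry": 0.4, "scared": 0.1}
glutton = {"hungry": 1.0, "passive": 0.6}

print(choose_action(coward))   # flee
print(choose_action(brute))    # attack
print(choose_action(glutton))  # forage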
Fergy,
“I would like to see a company that makes AI. Game companies could license the AI. You could tune the AI with parameters like: want to live, scared, passive, language,hungry etc.”
Games like Globulation already do things like that. Not that I’d promote it as a prime example of good AI, but just saying… with the exception of “language”, those things seem to be pretty basic.
Language is rather different, though, since it’s highly correlated to one’s environment. Every species has its own ways of communicating. Consider elephants using seismic communication, bees using physical gestures, whales using whalesong, birds chirping, etc. These things are unrecognisable to us, yet they are languages for those who use them. With proper training (programming), some animals can learn to understand human languages. Even a human being needs years of continual language input to be trained, and we’ve built scholarly institutions just for this purpose.
Why should we hold computers to a different standard?
Here is a group that is working on more intelligent game ai:
http://opencog.org/2010/10/opencog-based-game-characters-at-hong-ko…
Edited 2012-06-25 12:32 UTC
Fergy,
“Actors in a movie don’t come across as dumb.”
Interesting, but the actors in a movie are just following a script, and to that extent I would argue they ARE dumb in this respect since following a script does NOT require intelligence.
You can nitpick and say they need to be able to read the scripts and interact with other actors in order to do their jobs (which requires some intelligence). However, actors don’t technically have to understand a script, so that’s setting a very low bar for “intelligence” in my opinion – one which computers could probably clear in the short term if it weren’t for the virtual/physical barrier I spoke about earlier.
“Hey, I’ve got an idea. Let’s create an artificial brain.”
“Yeah, that sounds cool. Let’s get started.”
Scientists start playing around.
“Hmm… pray, tell me. How does the human brain work?”
“Dunno…”
When we don’t fully understand how our brain and mind work, how are we to recreate them successfully?
Talking about artificial intelligence implies that there is supposedly a non-artificial “intelligence”.
Seeing the difficulties of defining “intelligence” and of measuring it (IQ), the whole discussion gets very odd. Look at the IQ tests on the internet: they are good for a laugh only, mostly being very language-specific or very specific to people of a certain social status. Those designing the IQ tests know that it is extremely difficult to measure what you cannot define clearly.
We will see “machines” that can “learn”, I am sure, since we build more and more systems that adapt to the behaviour of the owner; but to gain consciousness will require a giant step in understanding what consciousness is. And we, the human beings, have still not reached the point where we understand consciousness well enough to simulate it.
Similar issues to the ones you mention with intelligence also apply to consciousness – recognizing it in the first place might be problematic (does the mirror test do it? Or maybe other tests, or perhaps observations hinting at some theory of mind in a few animals?)
Meanwhile, we have quite a poor grip on our own minds… (go through a list of cognitive biases; or consider that split-brain patients are virtually unchanged, basically just with some “glitches” – while we very much believe in a monolithic “me”; or how modern neuroscience, equipped with tools like fMRI, casts some doubt on “free” will; or placebo, and how adamantly people can defend its results)
Perhaps we usually, essentially, believe in the strong consciousness of other people (and limit it to people) partly because of how we like to perceive ourselves (and contrast that with “lesser” life forms)… however, who knows how many philosophical zombies are around us each day ;p
But seriously, most of the time we are in a somewhat “mindless”, automatism-driven state anyway (I suspect that’s largely how being an animal feels – just with rarer and smaller “awakenings”, if any).
PS. And why does the ad in this OSAlert story direct me to a local classified ads website, using a banner which mentions that you can find kittens there?… (and depicting some – not sure if it’ll work, but: http://pagead2.googlesyndication.com/simgad/1656137121673980848 )
Edited 2012-06-24 13:13 UTC
They became tory politicians.
I watched Tron in the early 80s (I still love it – not the new one, however; what was that???).
There is a scene where Alan (the guy that writes Tron) mentions that computers will be thinking “soon”.
It’s now beyond the future that Marty McFly stepped into, and we still have no “thinking” machines.
I personally believe that like time travel, both technologies will never truly exist.
I believe we can mimic intelligence, but I don’t believe computers will be able to grasp “thought” as we do – not now, not in another 30 years, not ever. Again, this is just based on observations over the past 30 years of living in the industry.
I think the problem is that we underestimate the brain somewhat. I think we are only now beginning to get an idea of what we can actually do.
thavith_osn,
“I personally believe that like time travel, both technologies will never truly exist.”
Well there’s a pretty big difference between the two. Technology for time travel cannot exist because the rules of nature as we understand them don’t permit it. I don’t think anybody would claim that physics rules out artificial intelligence in the same way.
“I believe we can mimic intelligence, but I don’t believe computers will be able to grasp ‘thought’ as we do, not now, not in another 30 years, not ever.”
I’d like you to define precisely what you consider to be “intelligent”. It seems there’s a strong tendency to shift goalposts in the field of AI.
Like zima already said, there’s a risk of setting the bar so high as to rule out animals and humans. If we’re to be objective, our litmus tests shouldn’t focus on human proficiencies but should instead be inclusive of any intelligent life in the universe.
Here’s a challenge: come up with a satisfactory litmus test that animals and humans can pass but ultimately computers cannot.
Not quite. Many theories support the ability to travel through time in one form or another, directly or as a byproduct of other processes. Regardless of what the true answer is, nature itself is not the governing body.
Our knowledge in general is still very immature. Because of that, our ability to comprehend vast and complex subjects like these is severely limited. Simply put, humanity is sucking its thumb and wearing diapers. And even that may be giving us too much credit.
While it is fairly easy to find theories (or, more precisely, interpretations, thought experiments and applications of some established theories) which appear to permit “time travel” as understood in popular fiction*, it usually comes with strings attached, such as “having a very localised supply of energy greater than that produced by a large galaxy” or “assuming an object of infinite length rotating at nearly the speed of light”…
Overall, it’s quite safe to assume that nothing will ever attain fiction-style time travel, simply because we would most likely have observed it by “now” (one of the more sensible uses would be, say, to move your ~civilisation as far “back” as you can, to when the universe was more dense).
Also: http://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm
*Because, really, we travel through time all the time, just within the confines of what this universe appears to be – everything in it can be seen as moving at the same overall speed, just trading between the space and time aspects of it.
Conversely, and most tellingly: if we could demonstrate even quite basic AI (like the fairly old chess programs that I mentioned in the previous discussion about AI, linked in my first comment in this one) to people a few generations back, I’m fairly certain they would be mightily impressed (…at least at first), if not outright suspecting divine influence (vide cargo cults). I believe chess was held to be a good test of AI 50 years ago…
It even has a term: http://en.wikipedia.org/wiki/AI_effect
People seem to expect AI to be at least on par with the best humans, in all “higher” fields. While… to be worth it, it needs only to be better than an average human practising its one particular discipline or activity (an AI flying airplanes doesn’t have to know anything about poetry, for example; it shouldn’t even have much in common with an AI flying highly manoeuvrable, highly evasive / survivable cruise missiles, beyond a similar “understanding” of aerodynamics).
There’s one perhaps even more curious term, http://en.wikipedia.org/wiki/Moravec%27s_paradox … it seems that “animal traits” are the harder ones.
Nostalgia?
The first Tron is a bit… dreadful, as a movie (and among the people I know, I’m not nearly the only one with such a view; but then, we didn’t watch it in our youth / when it came out). The second is too, really, but at least it’s awesome visual porn of sorts (which the first was also, back then, I imagine) – one big Daft Punk music video (geniuses, really, tricking the producers and so on into making something like this for them :> )
edit: oh yeah, two intelligent robots right there!
How do you know we are not merely “mimicking” it? (What tells you that you’re intelligent and conscious in the first place?)
Plus, “ever” is a very long time compared to 30 years. At the very least, we know of nothing which would certainly prevent, on a fundamental level, a computer simulation of a brain – and you can go further from there, streamlining and optimising functional blocks.
Please, no New Age stuff… :p But seriously, while we underestimate it somewhat, we also at least as often greatly overestimate it. About that “grasp ‘thought’ as we do” – you mean with tons of cognitive biases? (Go through their list; this is the primary mode of our operation.)
Edited 2012-06-25 08:35 UTC
The question is, do we even really WANT truly thinking and conscious machines?
Could we please lock down topics more often when someone doesn’t like to lose an argument?
Ouch… I despise locks too, but could we please not get *this* topic locked down too! As you can see I’m quite enjoying it
Edit: go bugger some microsoft threads
Edited 2012-06-25 03:43 UTC
Well, I can’t really reply or bring it up in the locked one though
Hmm…and now comments are disappearing.
Seriously, users can’t mod up and down in a topic they have commented in but this kind of immature nonsense is fine?
Really. Wow.
I assume you refer to the story with digital collages?
Now the whole story is just… gone. Yay for promotion of real art(tm)!
Edited 2012-06-27 17:03 UTC
Humanity is still so stupid anyway. It was in the middle of the 20th century that Turing was prosecuted for being homosexual and chemically castrated – and that’s still quite hard to believe: not in the 1500s, but less than two decades before I was born, and during my parents’ lifetime.
I think that, thanks to the internet, the world has become an enormously better place, where these kinds of stupid attitudes by governments, media or other groups can be condemned publicly.
Personally, I think that AI in the vein of a thinking machine will become real. I think that most of the work toward AI will be almost accidental – our demand for a better standard of living will create AI accidentally.
What I mean by this is that a lot of companies are creating very basic AI to power games, traffic systems and, more recently, mobile/smart devices, e.g. Siri. I believe that the evolution of these products will have the secondary effect of consciousness. The other big factor in this is the internet, which is having more and more information pumped into it, with more logic and AI being built into it as well (Google search). Now these two entities (smart devices and the internet) are already connected, allowing them to interact more and more with each other and, more importantly, with humans – allowing us to rub more and more off onto them.
I currently believe we are a couple of steps away from AI; the next step will be a massive overhaul of the way we interact with computers. We are reaching that point, but we need to get to where we interact with computers of all types the way they do in Star Trek, i.e. natural voice with understanding of complex commands and language syntax. I believe this is important because, again, it allows us to rub a little more off onto computers.
As for creating an AI system from scratch that meets Turing’s requirements, the problem we have is that although the electrical circuits of the human mind run slower, the human mind does massively parallel processing – so we will need something akin to it, something with not just 256 cores but thousands of cores per chip. Watson demonstrated this with its thousands of processing units.
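The shape of that parallelism is easy to sketch: instead of stepping each unit in one long serial loop, you hand batches of independent unit updates to a pool of workers, as a many-core chip would. The neuron model and numbers below are toy assumptions; note also that in CPython, threads won’t actually speed up CPU-bound work like this (the GIL serializes it), so this only illustrates the structure, not the speedup:

```python
# Sketch: updating many simple "neurons" as a batch of independent
# tasks on a worker pool, the many-core way, rather than serially.
from concurrent.futures import ThreadPoolExecutor

def update_neuron(inputs):
    """One neuron's step: weighted sum of inputs, then a hard threshold."""
    total = sum(value * weight for value, weight in inputs)
    return 1 if total > 0.5 else 0

# 10,000 toy neurons, each with a couple of (value, weight) inputs.
neurons = [[(0.3, 1.0), (0.4, 1.0)] for _ in range(10_000)]

# The pool stands in for "thousands of cores per chip".
with ThreadPoolExecutor(max_workers=8) as pool:
    outputs = list(pool.map(update_neuron, neurons))

print(sum(outputs), "of", len(outputs), "neurons fired")
```

For real throughput you would reach for process pools, vectorised array libraries, or GPUs – but the point stands: each unit’s update depends only on its own inputs, which is exactly what makes brain-style workloads embarrassingly parallel.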
From a software point of view, we need an operating system or software that has the ability to learn – that does not load everything the AI needs to know straight away, but instead helps the AI learn the information itself. For example, with language: instead of loading a dictionary, we need software that would allow the system to learn the language step by step.
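The “learn the language step by step instead of loading a dictionary” idea can be caricatured in a few lines: a system that ships with no vocabulary at all and builds its own from exposure, counting how often it has heard each word and treating repeated sightings as mastery. The threshold and the sample sentences are arbitrary stand-ins:

```python
# Toy sketch: no dictionary is loaded; the "vocabulary" is built
# entirely from what the system is exposed to, word by word.
from collections import Counter

class VocabularyLearner:
    def __init__(self, mastery_threshold=3):
        self.exposure = Counter()
        # How many sightings before a word counts as "known" (arbitrary).
        self.mastery_threshold = mastery_threshold

    def hear(self, sentence):
        for word in sentence.lower().split():
            self.exposure[word.strip(".,!?")] += 1

    def knows(self, word):
        return self.exposure[word.lower()] >= self.mastery_threshold

learner = VocabularyLearner()
for sentence in ["The cat sat.", "The cat ran.", "The dog ran.", "A cat!"]:
    learner.hear(sentence)

print(learner.knows("cat"), learner.knows("dog"))  # True False
```

Real language acquisition is of course vastly more than frequency counting, but the contrast with "load a dictionary at boot" is the point: knowledge here is a function of the system's own history of input.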
I suppose once we get to a certain point with AI, the real question will be how we use it – set it free, integrate it into our society? For that I would recommend reading Asimov’s series of books; he’s a better ‘thinker’ than me.
…and a static GUI with a very limited number of poorly differentiated touch buttons, complex arbitrary combinations of which give you any desired action? Also, exploding when there’s something wrong at the nearest city-block transformer?
BTW very-many-cores, we’re getting there:
http://en.wikipedia.org/wiki/Steve_Furber#Current_research_interest…
http://apt.cs.manchester.ac.uk/projects/SpiNNaker/
WRT integrating an AI into society… I cringe at what kind of “moral values” we’ll try to impose on it, the same way many humans try to do with other humans – while insisting on some silly superficialities, we might miss some really crucial ones, Asimov-style crucial*.
Because, see, since you focus on “emergent” AI, we have to take into account that the “true AI” might be, at the least, damn horny towards humans in some convoluted way (what with all the internet porn and viagra spam floating around).
* But his books might also seem a bit naive – around the time of their writing, we already had robots dedicated to killing humans, with ICBMs the ultimate example (some even largely autonomous, like the Russian Perimetr aka Dead Hand system).