The saga of Hugh Loebner and his search for an intelligent bot has almost everything: Sex, lawsuits and feuding computer scientists. There’s only one thing missing: Smart machines. Read the first part and the second part at Salon.com.
Very funny article. It’s entertaining to see the infighting and disagreements in the AI community. One interesting point I gathered from the article is that it is very difficult to formulate a specific test that proclaims that AI has “arrived”. Specifically, the Turing test attempts to show that a computer can “trick” a human into believing that it understands.
My thoughts are that mimicry probably won’t culminate until a very large database of human information can be used, combining intelligent natural language parsing with querying the database to simulate a “real” conversation. The real challenge will be getting all that data structured for use, much like how the brain stores the things we’ve learned. So far, the most successful progress seems to involve a “teacher” training the machine to respond and interact much as humans do. The article really illustrates that we are in the stone age of the field. Interesting read, though.
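The approach described above can be sketched very roughly: parse the user’s input into keywords, then query a store of human utterances for the best overlap. This is a toy illustration under my own assumptions (the knowledge base, tokenizer and scoring are all invented for the example), not any real system’s design:

```python
# Toy retrieval-based responder: tokenize the input, then return the
# stored utterance that shares the most words with it.
knowledge_base = [
    "The weather today is sunny and warm.",
    "I enjoy reading books about history.",
    "Computers process information using binary logic.",
]

def tokenize(text):
    """Lower-case a sentence and split it into a set of word tokens."""
    return set(word.strip(".,?!").lower() for word in text.split())

def respond(user_input):
    """Return the stored utterance with the largest word overlap."""
    query = tokenize(user_input)
    return max(knowledge_base, key=lambda s: len(query & tokenize(s)))

print(respond("What do computers use to process information?"))
# -> "Computers process information using binary logic."
```

Even this crude keyword overlap produces a vaguely on-topic reply, which hints at why structuring the data (rather than gathering it) is the hard part.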
“My thoughts are that mimicry probably won’t culminate until a very large database of human information can be used, combining intelligent natural language parsing with querying the database to simulate a ‘real’ conversation.”
You know, I’ve thought of this too, but is intelligence finite? The answer is most definitely no, so such a database of info would be infinitely expanding, and I don’t think that would help in making a computer intelligent.
And even then, there is a difference between intelligence and a program that fools its user into thinking it is intelligent. The latter, in my opinion, is not intelligence and not very interesting. Besides, languages are ambiguous, and it is tremendously difficult for a computer to figure out their meaning, especially considering how much language changes over time.
I’m beginning to wonder if we will ever reach a time when a computer can be considered intelligent; how that would actually be determined is hard to figure out. Lots of applications implement a certain intelligence, but it is vastly limited to some specific task.
However, I do believe that Turing’s work is important for scientists to understand and consider.
This has always bothered me. People are trying to code AI. No matter how hard they try, they will fail. Brains are not digital yes/no machines. There is no hardware that thinks like our brains do. Before you can code your AI, you need something for it to run on that processes things the way brains do. You simply cannot take something that works on “yes/no”, “on/off”, and get it to handle a “maybe”. We have the blueprints in our heads for how to do it; we just need to figure out how to read them and build it.
Since we are covering this, many of you may remember a famous early attempt at AI called “Eliza”.
From the page: Eliza is “a famous program that simulates a Rogerian psychoanalyst by taking excerpts from the subject’s comments and posing questions back to the subject”
This guy used AppleScripts to get it to answer AOL IMs and posted the results on this webpage. Some of the results are funny.
http://fury.com/aoliza/
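The technique Eliza uses is simple to sketch: match a pattern in the user’s statement and reflect a fragment of it back as a question. The rules below are my own illustrative inventions, not Weizenbaum’s original script:

```python
import random
import re

# Each rule pairs a pattern with canned responses; {0} is filled with
# the text captured from the user's own statement.
rules = [
    (r"I need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"I am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"(.*)", ["Please tell me more.",
               "How does that make you feel?"]),
]

def eliza_respond(statement):
    """Return a Rogerian-style question built from the user's words."""
    for pattern, responses in rules:
        match = re.match(pattern, statement, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())

print(eliza_respond("I need a vacation"))
```

The catch-all rule at the end is why Eliza never gets stuck; it also explains why the AIM logs at the link above stay funny for so long before the illusion breaks.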
This page: http://www.a-i.com looks like an interesting project in the field of AI. I really look forward to the results of this one.
This has always bothered me. People are trying to code AI.
I think you have misunderstood the term AI; it doesn’t mean “to code a human brain”.
An example of AI would be teaching a team of robots to do dangerous mining or forest work: how they should cooperate to maximize efficiency, how to recognize and respond to obstacles, etc. This has nothing to do with pseudo-AI crap like Eliza.
“””You simply cannot take something that works on “yes/no”, “on/off”, and get it to handle a “maybe”.”””
Ever hear of fuzzy logic? Or fuzzy set theory?
“You simply cannot take something that works on “yes/no”, “on/off”, and get it to handle a “maybe”.”
Maybe. Maybe not. The fundamental nodes in our neural net are binary.
A very good book to read is “The Cambridge Quintet”.
http://www.theregister.co.uk/content/6/21414.html is worth a read. In fact, search The Register for “Hawking”. Seems they have quite an expert on this kind of thing in T. C. Greene.
“Ever hear of fuzzy logic? ”
Yep…my cats use it every day