This cynically clever quote is known as Tesler's Theorem, and it reflects the difficulty of convincing a human that a machine is intelligent. Chess is seen as a game requiring intelligence, but as computers came to dominate it, it was argued that they were just using "brute force" and not really "thinking" the way we do. So when Deep Blue beat Kasparov, it somehow didn't count - computers were still not intelligent, just really fast.
Is there anything that will overcome this skepticism? The Turing Test seems the most likely candidate, as it is expressly designed to "fool" a human into sympathy - but once the computer is revealed, many people will still claim that it is just a clever facsimile that doesn't really "feel" the words it says.
Our desire to refute artificial intelligence has deep philosophical and psychological roots - acknowledging intelligence created by our own hand raises many questions about our own nature and origins. And like many things, it would also challenge traditional religious views - would these AIs have "souls"? If not, then do we?
But my goal is not to confront these questions - intelligence is multifaceted, and despite human skepticism it is clear that computers have already achieved a great deal of intelligence in many areas. My question is not what intelligence is, nor why exactly people don't want computers to have it, but rather which area has the most potential for AI development.
And, as the title suggests, I would offer language as that area. More precisely, natural language processing - having the computer "just do" what you tell it to - is both very interesting and obviously practical. This may seem like a "Star Trek" concept, but we have actually already made a lot of progress on it.
For example, consider search engines - they have evolved from simple lookup mechanisms to genuine attempts to parse language. I must confess mixed feelings on this - as search engines increasingly target poorly formed queries, I find my own queries are sometimes "corrected" in an erroneous fashion. But the concept is a sound one - the search engine should truly understand your intent, and not just the literal meaning of your query.
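To make the query-correction idea concrete, here is a toy "did you mean" sketch: it compares a word against a small vocabulary using edit distance and suggests the closest match. The vocabulary and threshold are invented for illustration - real search engines use query logs and statistical language models, not a plain dictionary.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# A made-up vocabulary standing in for a search engine's index of known terms.
VOCAB = ["turing", "theorem", "kasparov", "mathematica", "language"]

def correct(word: str, max_dist: int = 2) -> str:
    """Suggest the closest known word, or keep the original if nothing is close."""
    best = min(VOCAB, key=lambda v: edit_distance(word, v))
    return best if edit_distance(word, best) <= max_dist else word

print(correct("kasprov"))   # -> "kasparov" (one missing letter)
print(correct("xyzzy"))     # -> "xyzzy" (no close match, left alone)
```

Note that the `max_dist` cutoff is what keeps the corrector from "fixing" things that were never wrong - exactly the failure mode described above.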
Another example is recent progress with Mathematica. The latest release of Mathematica supports "free-form linguistics" - that is, "programming" with natural language (examples, more examples). I must again confess mixed feelings, in this case not because of false corrections but because Mathematica is a stalwart of closed source and overpriced software. I suppose search engines are also essentially closed source, but Mathematica in particular is depressing because so many of its peers are open.
But despite its proprietary nature, I must admit that Mathematica is an impressive piece of work. Eventually I expect many high level programming languages to support this sort of "fuzzy" syntax, where there are multiple ways to specify things and the interpreter makes assumptions based on context. Of course it will make mistakes, but it's not like programming is free of debugging today.
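The "fuzzy syntax" idea can be sketched in miniature: several natural phrasings map to one operation, with the interpreter guessing intent from keywords. The phrase patterns here are invented for this sketch - a real system would use genuine parsing and a statistical model, not keyword matching.

```python
import re

def interpret(phrase: str):
    """Toy fuzzy interpreter: pull the integers out of a phrase and
    guess whether the user wants a sum or a product from keywords."""
    nums = [int(n) for n in re.findall(r"-?\d+", phrase)]
    text = phrase.lower()
    if any(w in text for w in ("sum", "add", "plus", "total")):
        return sum(nums)
    if any(w in text for w in ("product", "multiply", "times")):
        result = 1
        for n in nums:
            result *= n
        return result
    raise ValueError(f"Don't know what to do with: {phrase!r}")

# Multiple ways to specify the same thing:
print(interpret("sum of 3 and 4"))     # -> 7
print(interpret("add 3, 4"))           # -> 7
print(interpret("what is 3 plus 4"))   # -> 7
print(interpret("multiply 3 by 4"))    # -> 12
```

And of course it will make mistakes - "add 3 times" sums instead of multiplying because the first keyword wins - which is precisely the debugging trade-off noted above.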
It will be interesting to see where these efforts go. I hope that eventually the sort of sophisticated language processing we are seeing becomes more open than it is now. And I expect that, once computers can be interacted with as naturally as humans, the great AI disputes and philosophical questions will fall away as people just start anthropomorphizing and treating their computer like a person. People want to treat things they interact with as humans, even if they don't want to think about the philosophical ramifications (see: pets, Disney movies, etc.).
This post has mostly been just a dump of interesting resources, so I may as well plug one more: Gödel, Escher, Bach: An Eternal Golden Braid
Thanks for reading.