Can a machine solve any problem that humans solve by thinking?
And, if so, how can we tell that it is really thinking, and not just pretending to think?
The basic position of most AI researchers on this is summed up in a statement that appeared in the proposal for the Dartmouth Conference of 1956: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
But the first step toward answering these questions is to define clearly what we mean by "intelligence".
Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggested that "if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent." A modern version of his experimental design would use an online chat room, in which one participant is a real person and the other is a computer program (that is, an artificial intelligence). The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes, "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks." Turing's test extends this polite convention to machines: if a machine acts as intelligently as a human being, then it is as intelligent as a human being.
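To make the setup concrete, here is a minimal sketch of such a chat-room test in Python. The `machine_reply` and `human_reply` functions are my own illustrative placeholders, not any real chatbot; a genuine test would pit an actual chat program against a live human. The point is only the structure of the game: the judge sees answers under anonymous labels and must guess which label hides the human.

```python
import random

def machine_reply(question: str) -> str:
    """Hypothetical stand-in for the program under test."""
    return "I'd rather not say. Could you rephrase the question?"

def human_reply(question: str) -> str:
    """Stand-in for the human participant (canned text for this sketch)."""
    return "It depends on the context, honestly."

def imitation_game(questions):
    # Hide the identities behind the labels "A" and "B" at random,
    # so the judge sees only labelled answers, never the participants.
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)
    labels = dict(zip("AB", players))

    for question in questions:
        print(f"Judge asks: {question}")
        for label, (_, reply) in labels.items():
            print(f"  {label}: {reply(question)}")

    # In a real test, a human judge would now decide from the transcript;
    # here we simply reveal the hidden assignment.
    for label, (identity, _) in labels.items():
        print(f"{label} was the {identity}.")

if __name__ == "__main__":
    imitation_game(["Do you ever dream?", "What is 7 times 13?"])
```

The machine "passes" if, over many such rounds, judges cannot identify the human at a rate better than chance.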
But the question here is whether the machine is merely acting, just pretending to think. As I am not a computer programmer, I cannot settle this question, but I will offer one suggestion: computers only do what they are programmed to do, so are they the ones with intelligence, or are their programmers?
Marvin Minsky writes that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device." This argument is now associated with the futurist Ray Kurzweil, who estimates that computing power will be sufficient for a complete brain simulation by the year 2029. Few deny that a brain simulation is possible in theory; even critics of AI such as Hubert Dreyfus and John Searle accept it. However, Searle points out that, in principle, anything can be simulated by a computer, so any process at all can be considered "computation" if you are willing to stretch the definition to the breaking point. "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Moreover, any argument that rests on simply copying a brain admits that we know nothing about how intelligence works: "If we had to know how the brain worked to do AI, we wouldn't bother with AI," writes Searle.
Does this mean that, someday, we will be able to create fake people, with fake memories, who believe that they are people? (This raises yet another question: can AIs have feelings?) If someone dear to us dies, could we recreate them, so that, to us, they never died? Yet they did die, and the recreation would not be a real human being. I will address these questions in my next post, "Can a Machine Have a Mind, Consciousness, and Mental States?".