User:CharlesGillingham/More/Philosophy of artificial intelligence

Todo

 * 1) Add this quote: "It is unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science." John McCarthy, "What has AI in common with philosophy?"
 * 2) Add section on simulated intelligence vs. real intelligence. Link to synthetic intelligence.
 * 3) Add Church-Turing, and call Lucas a refutation of that. Use Von Neumann's quote that's in the timeline of artificial intelligence.
 * 4) Add Kurzweil references. Check Kurzweil's date that machines can simulate a brain.
 * 5) Give me a better "intelligent agent" definition
 * 6) Add my new "working definition"
 * 7) Mention winograd and flores.
 * 8) Find some legitimate criticism of brain simulation
 * 9) Mention that "strong AI" is hard to construe and that some think it has bearing on the Dartmouth proposal.
 * 10) Undo this change:
 * 11) Take notes on last chapter of Norvig and Russell; apply these here
 * 12) Check if Norvig & Russell refute Lucas in the same words as that anonymous editor on June 2nd or 3rd, 2008, Philosophy of AI

Additional sources

 * This thing is awesome!

Anything can be simulated in theory
The essence of computation is simulation. Berlinski, Hofstadter; can Turing's sections 2-4 work here?
 * Church Turing here

Limits of symbol processing
One paragraph on each of these objections.

Responses to the limits of symbol processing
If human thinking is not, in fact, a kind of symbol processing, there are two possible responses. The first emphasizes that it's not necessary to imitate human thinking at all, so it doesn't matter whether humans use symbols. The second emphasizes that machines need not be limited to symbol manipulation, since we can also imitate our unconscious, non-symbolic reasoning.

Some, like John McCarthy, believe that artificial intelligence doesn't need to use the same algorithms that people do. "Artificial intelligence is not, by definition, simulation of human intelligence," writes McCarthy. Russell and Norvig (was it them?) suggest an analogy with the early days of heavier-than-air flight: the problem could not be solved while researchers still insisted on imitating birds. McCarthy even goes so far as to argue that machines can successfully use symbols even if people do not. QUOTE MCCARTHY

Others focus on solving the problems we solve unconsciously, like perception, motion, and judgments involving uncertainty. Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer. (See Moravec's paradox.) The most promising directions in research into unconscious skills, such as neural networks and fuzzy systems, are often collected under the subfield of computational intelligence.

These two options satisfy most of those who work in the field. According to Crevier, "most AI researchers ... do not make the psychological assumption."

Weizenbaum's Observation
"I don't see any way to put a limitation on the degree of intelligence that [a machine] could acquire. The only qualification I make, and I can't understand why it's resisted, is that the intelligence that will develop in this way will always be alien to human intelligence. It will be at least as different as the intelligence of a dolphin is to that of a human being." Joseph Weizenbaum, quoted in Crevier, p. 266

= List of positions in the philosophy of artificial intelligence =

Can a machine display general intelligence?

 * The sufficient condition of the PSS hypothesis, for humans: A physical symbol system can act as intelligently as a human being.
 * The argument from intractability - Humans can guess the answers to problems that are intractable for machines.

Is the human brain a kind of computer?

 * The necessary condition of the PSS hypothesis, for humans: Anything that acts as intelligently as a human being must be a physical symbol system.
 * Mechanism (associated with Hobbes) - The human brain is a machine.

Can a machine have a mind in the same sense people do?
Positions from other fields that bear on the philosophy of AI are:
 * Dualism (associated with Descartes) - The human brain and the human mind are two different substances.
 * Searle's materialism - Brains cause minds.
 * Functionalism - A mental state is any intermediate causal condition between input and output.
 * Biological naturalism - High-level emergent features are caused by low-level neurological processes in the neurons.
 * The principle of evolutionary psychology - It is unlikely that any aspect of human brains does not increase the chances that humans will reproduce.

MORE (SORT US)

 * Anthropomorphic AI - To be as intelligent as a human, a machine must use the same algorithms as a brain.
 * Naive emergentism - A "mind" may arise out of a sufficiently complex system that has not been specifically designed with the properties of a mind.
 * Brain copy thesis - It is possible to build a machine that duplicates the functions of each neuron in a human brain.
 * Turing's "argument from disability" - A machine can never do "X".