User talk:CharlesGillingham/TEMP 6

"AI is whatever hasn't been done yet" is Doug Hofstadter's misquotation of what he called "Tesler's Theorem". What I actually said was, "Intelligence is whatever hasn't been done yet". Sometimes I have phrased it as "Intelligence is what computers can't do yet".

I think my formulation is more accurate and more profound than Doug's.

The point is not that AI professionals lose interest in working on automating a mental skill once it has been shown to be possible to do so. Just the opposite. Once a technique has been devised, a lot of return can be had for a small additional investment.

The point is that the popular definition of intelligence changes to exclude any mental skill that was once thought exclusive to humans. Machines are dumb, just ones and zeroes, people say. The AI programmer was the intelligent actor, not the program.

Some people even redefine intelligence to exclude mental skills that animals are shown to exhibit. It is reverse anthropomorphism. An anthropomorphic interpretation of clever machine or animal behavior says "outwardly, it looks like what people do, so they must be intelligent like people." A reverse anthropomorphic interpretation says, "outwardly, it looks like what people do, so when people do it, it can't be intelligence at work after all." Anything that a non-human can do must be "mechanical" or "instinctive" or "reflexive", not "intelligent".

Many people consider intelligence to be the defining attribute of our species. For them, it is an axiom that "Intelligence is that set of mental skills that only Homo sapiens possesses." It is from this axiom that Tesler's Theorem derives.

But I sense that this attitude about intelligence is changing in our society. If the attitude virtually disappears, the axiom will no longer hold and Tesler's Theorem will be obsolete. Larry Tesler (talk) 20:29, 6 November 2009 (UTC)