Talk:GOFAI/Draft

In the philosophy of artificial intelligence, GOFAI is classical, symbolic AI, as opposed to other approaches such as neural networks, evolutionary programming and situated robotics. The term is an acronym for "good old-fashioned artificial intelligence", coined by the philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.

Haugeland's GOFAI
Haugeland coined the term GOFAI in order to examine the philosophical implications of "the claims essential to all GOFAI theories", which he listed as: "(1) our ability to deal with things intelligently is due to our capacity to think about them reasonably (including sub-conscious thinking); and (2) our capacity to think about things reasonably amounts to a faculty for internal 'automatic' symbol manipulation."

This is very similar to the sufficient side of the physical symbol system hypothesis proposed by Allen Newell and Herbert A. Simon in 1976: "A physical symbol system has the necessary and sufficient means for general intelligent action."

It is also similar to Hubert Dreyfus' "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules."

These positions refer to any kind of "symbol manipulation" governed by formal rules, that is, by an explicit set of instructions for manipulating the symbols. The "symbols" they refer to are high-level discrete objects assigned a definite syntax; they are not signals, unidentified numbers, networks of numbers, or the zeros and ones of digital machinery. Thus, Haugeland's GOFAI does not refer to "good old-fashioned" techniques such as cybernetics, perceptrons or control theory, nor to modern techniques such as neural networks or support vector machines.
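The contrast drawn above can be made concrete. The following sketch (not from Haugeland; all symbol names are illustrative) shows symbol manipulation in the GOFAI sense: discrete, named symbols rewritten by explicit if-then rules via forward chaining, with no numeric signals or learned weights anywhere in the system.

```python
# Illustrative sketch of rule-governed symbol manipulation (forward chaining).
# All facts and rules here are hypothetical examples, not from the article.

# Known facts: discrete symbols, not numbers or signals.
facts = {"socrates_is_man"}

# Formal rules: (set of premise symbols, conclusion symbol).
rules = [
    ({"socrates_is_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

# Repeatedly apply every rule whose premises are already known,
# until no new symbol can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The point of the sketch is only that every step is a syntactic operation on whole symbols; nothing in it resembles the continuous signals of a perceptron or neural network.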

These philosophical positions all claim that symbolic AI is sufficient for true intelligence: nothing else is required to create fully intelligent machines. Thus "GOFAI", for Haugeland, does not refer to systems that combine symbolic AI with other techniques, such as neuro-symbolic AI, nor to narrow symbolic AI systems that are designed only to solve a specific problem and are not expected to exhibit general intelligence.

Critics of Haugeland such as Drew McDermott argued that AI researchers did not, in general, make these assumptions, and that Haugeland's GOFAI was a "myth". By the 1990s, many AI researchers had come to agree that symbol manipulation was probably insufficient to create human-level AI, and that other techniques, such as deep learning, would also be required.