User:Veritas Aeterna/GOFAI Draft

GOFAI is an acronym for "Good Old-Fashioned Artificial Intelligence," coined by the philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea. GOFAI refers only to a restricted kind of symbolic AI, namely rule-based or logical agents. This approach was popular in the 1980s, but symbolic AI has since been extended to include many newer techniques better suited to handling uncertain reasoning and open-ended systems, such as probabilistic reasoning, non-monotonic reasoning, multi-agent systems, and neuro-symbolic systems. Thus, GOFAI is not the same as symbolic AI, although the two are often conflated, as explained in the section on Media Conflation of Terms.

The GOFAI critique of rule-based agents
GOFAI, the rule-based approach of 1980s symbolic AI, was attacked by the philosophers Hubert Dreyfus and Kenneth Sayre and by Hubert's brother Stuart Dreyfus. The essence of their critique had been anticipated by computer scientist Alan Turing in his 1950 paper "Computing Machinery and Intelligence," where he described the objection that "human behavior is far too complex to be captured by any formal set of rules—humans must be using some informal guidelines that … could never be captured in a formal set of rules and thus could never be codified in a computer program." Turing called this objection "The Argument from Informality of Behaviour."

Russell and Norvig describe the GOFAI critique in Artificial Intelligence: A Modern Approach: The position they criticize came to be called "Good Old-Fashioned AI," or GOFAI, a term coined by Haugeland (1985). GOFAI is supposed to claim that all intelligent behavior can be captured by a system that reasons logically from a set of facts and rules describing the domain. It therefore corresponds to the simplest logical agent described in Chapter 7. Dreyfus is correct in saying that logical agents are vulnerable to the qualification problem. As we saw in Chapter 13, probabilistic reasoning systems are more appropriate for open-ended domains. The Dreyfus critique therefore is not addressed against computers per se, but rather against one particular way of programming them. It is reasonable to suppose, however, that a book called What First-Order Logical Rule-Based Systems Without Learning Can't Do might have had less impact.

In other words, GOFAI restricts its view of agents to those controlled by logical rules. In contrast to this view, symbolic AI also includes non-monotonic logic, modal logic, probabilistic logics, multi-agent systems, symbolic machine learning, and hybrid neuro-symbolic architectures. Symbolic machine learning, i.e., non-connectionist machine learning specific to symbolic AI, includes inductive logic programming, statistical relational learning, case-based learning, knowledge compilation (chunking), macro-operator learning, learning from analogy, and interactive learning from human advice, explanations, and exemplars.
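The "simplest logical agent" that Russell and Norvig describe can be sketched as a forward chainer that applies rules to a set of facts until nothing new follows. The sketch below is illustrative only: the fact and rule names (bird, flies, tweety) are invented for the example, not drawn from the source. It also shows why such an agent runs into the qualification problem: the rule fires regardless of unenumerated exceptions.

```python
# A minimal sketch of a GOFAI-style rule-based agent: forward chaining
# over ground facts (represented here as plain strings).

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived.

    Each rule is a (premises, conclusion) pair: if every premise is
    already a known fact, the conclusion is added as a new fact.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# One hand-written rule: birds fly. The rule is brittle because it
# would need an explicit extra premise for every exception (penguins,
# broken wings, ...) -- the qualification problem in miniature.
rules = [
    ({"bird(tweety)"}, "flies(tweety)"),
]

derived = forward_chain({"bird(tweety)"}, rules)
print(derived)  # the agent concludes flies(tweety) unconditionally
```

The agent derives flies(tweety) from bird(tweety) whether or not Tweety happens to be a penguin; handling such exceptions gracefully is what motivated the non-monotonic and probabilistic extensions listed above.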

The GOFAI critique of disembodied agents
Russell and Norvig do not reject all of Dreyfus's arguments; they accept his strongest one, which applies to all disembodied AIs, whatever their approach: One of Dreyfus's strongest arguments is for situated agents rather than disembodied logical inference engines. An agent whose understanding of "dog" comes only from a limited set of logical sentences such as "Dog(x) ⇒ Mammal(x)" is at a disadvantage compared to an agent that has watched dogs run, has played fetch with them, and has been licked by one. As philosopher Andy Clark (1998) says, "Biological brains are first and foremost the control systems for biological bodies. Biological bodies move and act in rich real-world surroundings." According to Clark, we are "good at frisbee, bad at logic."

The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral.

Media Conflation of Terms
Frequently, in media publications focusing on deep learning, symbolic AI has been caricatured as being the same as GOFAI, which in turn is viewed as nothing more than hand-crafted production rules. Here is an example of this kind of common confusion: "Machine learning—one type of AI—is responsible for those successes and failures. Broadly, AI has moved from software that relies on many programmed rules (also known as Good Old-Fashioned AI, or GOFAI) to systems that learn through trial and error. Machine learning has taken off thanks to powerful computers, big data, and advances in algorithms called neural networks. Those networks are collections of simple computing elements, loosely modeled on neurons in the brain, that create stronger or weaker links as they ingest training data."

Furthermore, deep learning and artificial intelligence are also often conflated as we can see in the example above and in this example from the New York Times: "Artificial intelligence — in which machines are trained to perform jobs and make decisions on their own by studying huge volumes of data — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, promising major gains in productivity."

Garcez and Lamb further provide examples of academics conflating terms in their paper, "Neurosymbolic AI: The 3rd Wave": "Turing award winner Judea Pearl offers a critique of machine learning which, unfortunately, conflates the terms machine learning and deep learning. Similarly, when Geoffrey Hinton refers to symbolic AI, the connotation of the term tends to be that of expert systems dispossessed of any ability to learn. The use of the terminology is in need of clarification. Machine learning is not confined to association rule mining, c.f. the body of work on symbolic ML [machine learning] and relational learning (the differences to deep learning being the choice of representation, localist logical rather than distributed, and the non-use of gradient-based learning algorithms). Equally, symbolic AI is not just about production rules written by hand. A proper definition of AI concerns knowledge representation and reasoning, autonomous multi-agent systems, planning and argumentation, as well as learning."