User:Phlsph7/Mind - Non-human

Animal
While it is generally accepted today that animals have some form of mind, it is controversial which animals this applies to and how their minds differ from the human mind. Different conceptions of the mind lead to different responses to this problem. When the mind is understood in a very wide sense as the capacity to process information, it is present in all forms of life, including insects, plants, and individual cells. On the other side of the spectrum are views that deny mentality to most or all non-human animals on the grounds that they lack key mental capacities, such as abstract rationality and symbolic language. The status of animal minds is highly relevant to the field of ethics since it affects the treatment of animals, including the topic of animal rights.

Discontinuity views state that the minds of non-human animals are fundamentally different from human minds, often pointing to higher mental faculties, like thinking, reasoning, and decision-making based on beliefs and desires, that animals are said to lack. This outlook is reflected in the traditionally influential position of defining humans as "rational animals" in contrast to all other animals. Continuity views, by contrast, emphasize similarities and see the heightened mental capacities of humans as a matter of degree rather than kind. Central considerations for this position are the shared evolutionary origin, organic similarities on the level of the brain and nervous system, and observable behavior, such as problem-solving skills, animal communication, and reactions to and expressions of pain and pleasure. Of particular importance are the questions of consciousness and sentience, that is, to what extent non-human animals have a subjective experience of the world and are capable of suffering and feeling joy.

Artificial

Some of the difficulties of assessing animal minds are also reflected in the topic of artificial minds, that is, the question of whether computer systems implementing artificial intelligence should be considered a form of mind. This idea is consistent with some theories of the nature of mind, such as functionalism, according to which mental concepts describe functional roles that are implemented by biological brains but could in principle also be implemented by artificial devices. The Turing test is a traditionally influential procedure to test artificial intelligence: a person exchanges messages with two parties, one of them a human and the other a computer. The computer passes the test if it is not possible to reliably tell which party is the human and which one is the computer. While there are computer programs today that may pass the Turing test, passing it alone is usually not accepted as conclusive proof of mindedness. It is more controversial whether computers can, in principle, implement other aspects of mind, such as desires, feelings, consciousness, and free will.

This problem is often discussed through the contrast between weak and strong artificial intelligence. Weak or narrow artificial intelligence is limited to specific mental capacities or functions, focusing on a particular task or a narrow set of tasks, like automated driving, speech recognition, or theorem proving. The goal of strong AI, also termed artificial general intelligence, is to create a complete artificial person that has all the mental capacities of humans, including consciousness, emotion, and reason. It is controversial whether strong AI is possible; influential arguments against it include John Searle's Chinese room argument and Hubert Dreyfus's critique based on Heideggerian philosophy.