Talk:Explanatory gap

Is the explanatory gap an argument at all?
I doubt that the explanatory gap can be seen as an argument at all, because, to put it bluntly, it can be summed up as "We are all just too stupid to understand why I am right!". With that kind of "argument" you can "prove" anything, even the greatest nonsense. Even worse, from a mathematical/logical point of view, a statement from which anything can be proved is a false statement.

For example, I could argue "All Chinese people are in reality Americans", and if anybody asked me why, I would just answer: "All Chinese people are in reality Americans" because -> "explanatory gap" -> true!

Mat11001 (talk) 19:50, 23 October 2011 (UTC)


 * I would say the explanatory gap is not an argument (just like dark energy/matter are not arguments), rather, it is just a name for (one technical component of) a still unsolved problem.


 * But that's as far as I agree with you; I don't find the topic contentless. The topic is a way of stating the notion that it is difficult to conceive of any way (even in principle) in which 'the conscious experience of first-person awareness' could arise from any mechanistic system (which is different from simply not knowing how our particular consciousness arises from our particular neuroarchitecture). It's analogous to the notion, often attributed to Hume, that it is difficult to conceive of any way that absolute moral laws could be derived from objective scientific facts. Sam Harris recently expressed it this way: even if you trace a spectrum of nervous systems simpler than the human one, down toward the mind with the smallest glimmer of consciousness, it is still no easier to understand how such consciousness can arise from a mechanistic system (to wit, how can you decide whether a thermostat is conscious? This seems like a failure of reductionist techniques).


 * It is comparatively very easy to explain how mechanistic processes give rise to the functional behaviour of a robot, a zombie, or another person, including their verbal behaviour such as speech claiming to assert their own consciousness. But we don't even know how to begin to imagine describing a mechanistic explanation for someone's own actual subjective consciousness. Similarly, we don't have the first clue about how to authenticate whether a system really is conscious or is instead not conscious but merely arranged to act, under false pretense, as if it were. Of course, different philosophers may disagree, but that's just more evidence that the topic is not as contentless as you suggest. Cesiumfrog (talk) 00:50, 24 October 2011 (UTC)


 * "But we don't even know how to begin to imagine describing a mechanistic explanation for someone's own actual subjective consciousness." That's exactly the point. But instead of accepting that their theory failed the materialists come now up with the "explanatory gap" to "proof" they are right despite they are wrong. That's the whole purpose of the thing. Dualists don't need the "explanatory gap" because they don't assume that the material world is the only entity that exists at all.
 * Mat11001 (talk) 20:15, 26 October 2011 (UTC)


 * I think your problem is just that you mistakenly assumed the explanatory gap was an argument for materialism. It's not. In fact, its existence is actually used as an argument for the views of (quasi-)dualists such as Searle and Chalmers. (Nonetheless, the topic is normally couched in the language of materialism because that is by far the more dominant paradigm among experts. Note that in the article we cannot impose our own conclusions about who may be wrong.) Cesiumfrog (talk) 22:58, 26 October 2011 (UTC)

Free will v consciousness
Hm. This article appears to confuse free will with consciousness. -- The Anome

Searle's Chinese room argument proposes that information processors are just collections of bits and that the information encoded in the patterns of bits has no meaning to the machine. The question facing proponents of strong AI is how a collection of clusters of electrons on a chip could know anything at any instant. The preferred answers are emergentism and direct realism, neither of which is very convincing.

Criticism
This article lacks a section on criticism of the explanatory gap. I don't know the literature well enough, but Dennett has a counter-argument in Consciousness Explained, and Patricia Smith Churchland's Brain-Wise: Studies in Neurophilosophy has another. Hinakana (talk)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Explanatory gap. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20110514060312/http://www.imprint.co.uk/hardprob.html to http://www.imprint.co.uk/hardprob.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 02:28, 9 December 2017 (UTC)

Objection to statement

“To take an example of a phenomenon in which there is no gap, imagine a modern computer: as marvelous as these devices are, their behavior can be fully explained by their circuitry.”

I vehemently object to this statement. It is totally false and should be removed.

The circuitry of a computer explains nothing of its ‘behavior’. The circuitry may include a camera, yet it cannot ‘see’; a microphone, yet it cannot ‘hear’; a touch screen, yet it cannot ‘feel’; a keyboard and printer, yet it knows nothing of ‘language’; a CD-ROM, yet it cannot ‘read’; a hard drive, yet it cannot ‘write’; memory, yet it cannot ‘remember’; a network interface, WiFi, or Bluetooth, yet it is not ‘social’. A computer described only by its circuitry is the quintessential ‘tabula rasa’. It has no ‘behavior’; it is a paperweight, and it matters not whether the power switch is ON or OFF. It has no behavior unless and until it is given a PROGRAM. A program brings its senses to life, at which time it may begin to collect data. At this point the ‘behavior’ of the computer is equal to that of an imbecile.

In order for the computer to have ‘meaningful’ behavior the ‘program’ must have a ‘purpose.’ Taken together, the behavior of a computer may now be fully explained – circuitry, program, data, and purpose.
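The circuitry/program distinction drawn above can be sketched in a few lines of code (a hypothetical illustration, not from the article): the same fixed "circuitry" (here, an interpreter function) exhibits completely different behavior depending on which program and data it is handed.

```python
def machine(program, data):
    """Fixed 'circuitry': blindly executes whatever program it is given."""
    return program(data)

# Two different 'programs' running on identical circuitry:
def echo(readings):
    return readings                      # merely repeats its input

def average(readings):
    return sum(readings) / len(readings) # a purposeful computation

sensor_data = [3, 5, 7]
print(machine(echo, sensor_data))     # [3, 5, 7]
print(machine(average, sensor_data))  # 5.0
```

On this toy view, `machine` alone explains nothing: its behavior is fixed only once program, data, and the purpose behind the program are supplied.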

I don’t see this as being much different from the general problem being discussed here. The brain is analogous to the computer's circuitry, although in the case of the brain, the circuitry is itself fluid and changeable. What seems to be missing in all the arguments for hundreds of years is the concept of a ‘program.’ In the case of animals, all animals, the program is called ‘consciousness’.

Consciousness, as a program, is itself fluid and changeable. It is malleable and adaptable based on the ‘data’ received by the senses. Consciousness also has ‘purpose.’ Some of those ‘purposes’ are built in, such as self-preservation or procreation. Some ‘purposes’ are the direct result of cultural influences. Some ‘purposes’ are selectable by the individual, which is called ‘free will.’

The mind is a brain in action; it is what a brain does. — Preceding unsigned comment added by MFelixHunter (talk • contribs) 15:42, 20 January 2018 (UTC)