Talk:Chinese room/Archive 5

Eliminated prevarication in logical expressions
I have made some minor revisions in the description of the experiment to eliminate the prevarication present in the descriptions of the logical expressions there. For instance, it is unnecessary, when substituting a value for a variable in an expression that is otherwise and generally held to be true, to add words to imply that the expression concerned may thereby become untrue (for reasons unknown). If the expression is true then using it with different values does not change that. IF the expression "People have hearts" is true, THEN substituting the value "Mary" or "John" results in a true expression. It is redundant, and implies a fundamental but unreasoned doubt, to embellish the result into the form: "People have hearts and so the doctor argues that Mary has a heart". It is a true consequence of people having hearts that Mary also has a heart, unless e.g. Mary is a ship, has had her heart removed, etc. --LookingGlass (talk) 14:01, 4 December 2012 (UTC)

In popular culture
The following got deleted from the article back in December 2010. This should be undone. Additionally the section should mention that in Daniel Cockburn's 2010 film You Are Here Searle's Chinese room argument is enacted on screen. (source e.g. http://www.filmcomment.com/article/you-are-here) — Preceding unsigned comment added by 84.41.34.154 (talk • contribs)
 * An episode of the TV show Numb3rs uses the Chinese room as a model in the episode "Chinese Box".
 * Searle's Chinese room is referenced in the novel Blindsight by Peter Watts by characters dealing with an alien entity that is intelligent but lacks consciousness.
 * These don't sound significant enough to warrant a mention; see wp:UNDUE: "An article should not give undue weight to any aspects of the subject but should strive to treat each aspect with a weight appropriate to its significance to the subject. For example, discussion of isolated events, criticisms, or news reports about a subject may be verifiable and NPOV, but still be disproportionate to their overall significance to the article topic." Are any of these really important to the reader's understanding of the subject? ErikHaugen (talk | contribs) 05:56, 16 July 2012 (UTC)
 * WP:UNDUE is about proportion and relative weight. Since this is a very long article and quite mature, it seems the most significant aspects of the subject may already be well covered.  I'm not vouching for those particular pop culture examples above, but I think a few sentences on the Chinese room usage in pop culture would not overwhelm the significant weight already in the article.  $0.25.  --Ds13 (talk) 06:21, 16 July 2012 (UTC)
 * Well, maybe. Look at lion, for example—lions are pretty significant in culture, and cultural depictions/etc are important to understanding the topic. Maybe that is true of this subject as well and these examples are good ones that demonstrate that. I really don't think so, though. ErikHaugen (talk | contribs) 06:39, 16 July 2012 (UTC)

Many mansions - missing?!
Where's the discussion of the many mansions reply, which is probably the most famous counter-argument to the Chinese room? Raul654 (talk) 07:45, 15 April 2011 (UTC)
 * While I agree that the reply deserves to be discussed, it is not my impression that it is the most famous counter-argument. Can you back up that assertion?  (I believe the "systems" reply is the most popular.) Looie496 (talk) 16:44, 15 April 2011 (UTC)
 * Well, I've heard of that (mansions) one and not the others, so I guess I was just assuming it's the most famous. But either way, it does deserve to be discussed. Raul654 (talk) 16:48, 15 April 2011 (UTC)


 * The "Many Mansions" reply should be covered.


 * In my view, it belongs in the section currently called redesigning the room, because Searle's counter-argument is the same: he says that the reply abandons "Strong AI" as he has defined it. The Chinese Room argument only applies to digital machines manipulating formal symbols. He grants that there may be some other means of producing intelligence and consciousness in a machine. He's only arguing that "symbol manipulation" won't do it. CharlesGillingham (talk) 06:35, 5 June 2011 (UTC)


 * (sometime ago) CharlesGillingham (talk) 19:01, 5 September 2013 (UTC)

Compare "Chinese room" to search results of "optimal classification"
Do a Google search on "optimal classification". Then compare results to understand limits, if any. — Preceding unsigned comment added by 71.99.179.45 (talk) 03:39, 3 November 2013 (UTC)
 * Um, what's your point? Looie496 (talk) 15:07, 3 November 2013 (UTC)

Actual explanation of Chinese room in first paragraph seems wrong/makes no sense
I am not a philosopher or philosophy student so I am hesitant to make this change on my own, but I could make no sense at all of these sentences: "It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English to execute the instructions of the program by hand, . . ." (after that I understand). A search brought me to the Stanford Encyclopedia of Philosophy page, which has a far clearer and substantially different explanation that does not involve a computer program or someone executing a computer program's instructions "by hand."

Again, I am so far from being an expert on this I can't bring myself to make the change, but I hope someone who knows what they are doing will consider revising for correctness and clarity. Kcatmull (talk) 20:21, 24 July 2013 (UTC)
 * Can you elaborate on what part of this is unclear? ErikHaugen (talk | contribs) 06:59, 29 July 2013 (UTC)
 * The explanation assumes that the reader knows that a "program" is a series of instructions - this could be worded more clearly, I'll take a stab at it. --McGeddon (talk) 09:28, 29 July 2013 (UTC)
 * No, that's not quite it, sorry I was not clear. I think the explanation as written brings computers in too early in the explanation. Here's how the explanation at Stanford (http://plato.stanford.edu/entries/chinese-room/ ) begins: "The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers . . ." etc. So the thought experiment (as I understand it) is comparing a computer to a PERSON sitting alone in a room with a set of instructions, etc. But the first sentence of this article mentions a COMPUTER being given such a set of instructions, which is not the thought experiment; and then the second sentence confusingly mentions a person ("someone"), so the whole thing is a hopeless tangle. I think better to describe the thought experiment first -- English-speaking person sitting in a room with instructions: do they actually speak Chinese?--and then show that this is (or isn't) the situation of computers. Does that make more sense?   Kcatmull (talk)
 * This is tricky stuff, and it's easy when trying to improve wording to actually make it worse. Could you propose a specific wording that could be substituted for the original? Looie496 (talk) 16:24, 5 August 2013 (UTC)
 * I think there is a problem here. Was the lead always like this? Sadly, S himself sometimes seems to start off in this way. cf. Minds, Brains, and Science: "Imagine that a bunch of computer programmers ..." Then, later, "Well, imagine that you are locked in a room ...". I think we should have a go at it. Myrvin (talk) 19:03, 5 August 2013 (UTC)
 * However, Minds, Brains and Programs has: "the following Gedankenexperiment. Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken". So that doesn't start with the computer program. Myrvin (talk) 20:23, 5 August 2013 (UTC)
 * I suggest that the first part says what the argument is supposed to do. The next should have the person in the room. Then introduce the analogy with a computer program Myrvin (talk) 20:27, 5 August 2013 (UTC)
 * It could begin: The Chinese room is a thought experiment presented by John Searle in order to challenge the concept of strong AI, that a machine could successfully perform any intellectual task that a human being can. Searle writes in his first explication: "Suppose that I'm locked in a room and ... that I know no Chinese, either written or spoken". He further supposes that he has a set of rules in English that "enable me to correlate one set of formal symbols with another set of formal symbols." These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions - who do understand Chinese - are convinced that Searle can actually read and write Chinese, even though he cannot. Myrvin (talk) 20:52, 5 August 2013 (UTC)


 * The definition of "strong AI" above is not quite right. "Strong AI" is the idea that a suitably programmed computer could have a mind and consciousness in the same sense human beings do. Note that it is possible for a computer to perform any intellectual task without having a mind or consciousness. CharlesGillingham (talk) 21:51, 5 August 2013 (UTC)


 * Actually neither of those is quite right. Strong AI as Searle defines it is the claim that mind and consciousness are merely matters of executing the right programs.  That isn't the same as being a computer, because computers can do more than execute programs -- for example, they can cause pixels to light up on video screens. Looie496 (talk) 05:49, 6 August 2013 (UTC)
 * I stole the words, verbatim, from the strong AI article. We can agree on something better. I suppose Searle's def should be there since that's what he's contesting. Myrvin (talk) 06:53, 6 August 2013 (UTC)
 * How about:"The Chinese room is a thought experiment presented by John Searle in order to challenge the claims of strong AI (strong artificial intelligence). According to Searle, when referring to a computer running a program intended to simulate human ability to understand stories: 'Partisans of strong Al claim that the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and provide the answers to questions, and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it.' In order to contest this view, Searle writes in his first description of the argument: 'Suppose that I'm locked in a room and ... that I know no Chinese, either written or spoken'. He further supposes that he has a set of rules in English that 'enable me to correlate one set of formal symbols with another set of formal symbols,' that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions - who do understand Chinese - are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in written Chinese, the computer executing the program would not understand the conversation either. Myrvin (talk) 13:16, 6 August 2013 (UTC)"
 * That seems fine to me, though I found Myrvin's first shot at it (above) quite a bit clearer. But this is a huge improvement over what's there now. I hope you make the change! And thank you! Kcatmull (talk) —Preceding undated comment added 20:08, 6 August 2013 (UTC)
 * I filled in a correct definition of Searle's strong AI, and removed the link to Kurzweil's strong AI. It is impossible to understand Searle's argument if you don't know what he's arguing against.


 * One more thought: I think that the key to understanding the argument is to let go of the anthropomorphic assumption that "consciousness" and "intelligent behavior" are the same thing. So it's important that we keep these separated for the reader wherever possible, and thus the distinction between Kurzweil's strong AI and Searle's "strong AI hypothesis" is essential. CharlesGillingham (talk) 18:38, 11 August 2013 (UTC)

I would like to see something in the article that explains or challenges Searle's "similarly", because this presupposes that he (or whoever is the person in the room later replaced by a computer) has no curiosity about the patterns he is observing amongst the symbols either presented to him or arranged by him, nor does it explain how the person (again, or the machine) might begin to recognise patterns & begin to get creative with the rules. I'm pretty sure that there's some evidence somewhere of this being how the human mind actually functions; that starting with basic sets of rules (such as language, social behaviour & so forth), the mind begins to recognise patterns & then to extrapolate, interpolate, & that this is what we call personality. I'm also quite sure - again, I'd need to dig a little deeper into 'New Scientist' & 'Wired' back-issues! - that we already have computer systems that are similarly capable of what we would anthropomorphically call 'learning' or 'adaptation'. This all is slightly away from Searle's original thought experiment, but I think his thought-experiment dodges the business of either the person or the computer being capable of adaptive behaviour, that this adaptive behaviour itself may be evidence of something beyond a simple mechanism in the room, & that somewhere out there, there is or has been exactly this challenge levelled at Searle's experiment. I'm going to keep looking. duncanrmi (talk) 22:18, 17 April 2014 (UTC)

Need to say that some people think the Chinese room is not Turing complete
See the discussion above, at #Unable to learn. To recap: I am pretty confident that Searle would say that the argument only makes sense if the room is Turing complete. But we need to research this and nail it down, because there are replies in the literature that assume it is not. I think this belongs in a footnote to the section Computers vs. machines vs. brains. CharlesGillingham (talk) 21:37, 10 February 2011 (UTC)


 * I found one that is unambiguous: Hanoch Ben-Yami (1993). A Note on the Chinese Room. Synthese 95 (2):169-72: "such a room is impossible: the man won't be able to respond correctly to questions like 'What is the time'?" Ben-Yami's critique explicitly assumes that the rule-book is fixed, i.e. there is no eraser, i.e. the room is not Turing complete. CharlesGillingham (talk) 18:30, 11 December 2011 (UTC)


 * The Chinese Room can't possibly be Turing complete, because Turing completeness requires an unlimited tape. The Chinese Room would be a finite-state machine, not a Turing machine. Looie496 (talk) 18:56, 11 December 2011 (UTC)


 * Ah, yikes, yes of course that's true. Note however that no computer in the real world has an infinite amount of tape either, so Turing complete machines can not exist. The article sneaks past this point. We could add a few more clauses of the form "given enough memory and time" to the article where this point is a problem. Or we could dive in with a paragraph dealing with Infinitely Large machines vs. Arbitrarily Large machines. Is this a hair worth splitting? CharlesGillingham (talk) 19:13, 11 December 2011 (UTC)

Looie: I think I'm starting to agree with you that the "Turing completeness" paragraph needs to be softened and tied closer to Searle's published comments. Ideally, I'd like to be able to quote Searle that the Chinese room "implements" a Turing machine or that the man in the room is "acting as" a Turing machine or something to that effect. Then we can make the point about Turing completeness in a footnote or as part of the discussion in "Redesigning the room", where it's actually relevant.

The only thing I have at this point is this: "Now when I first read [Fodor's criticism], I really didn't know whether to laugh or cry for Alan Turing, because, remember, this is Turing's definition that we're using here and what Fodor in effect is saying is that Turing doesn't know what a Turing machine is, and that's a very nervy thing to say." This isn't exactly on point, and worse, it isn't even published; it's just a tape recording of something he said. CharlesGillingham (talk) 22:29, 25 February 2012 (UTC)


 * Forgive me but this discussion seems to me to entirely miss the point, i.e. that the Chinese Room is a thought experiment. The practical issues of a real experiment only impact on a thought experiment if they somehow change some significant element of it, not if they relate to practical issues. If it is factually incorrect to suggest that the Turing machine test fails, then reference to the Turing test should be eliminated, as it is insignificant to the thought experiment itself irrespective of whether or not Searle included it in his description. A thought experiment, at every step, includes words such as: "theoretically speaking". In the Chinese room it is irrelevant whether the person does or does not have an eraser, as the point is simply that a human could, "theoretically speaking", execute the instructions that a computer would by following the same program. It would simply take the human being far longer to do (so long that the person may die before they have sufficient time to answer even one question, but again this is irrelevant to the experiment). LookingGlass (talk) 13:29, 4 December 2012 (UTC)


 * I don't think I fully understand your note, LookingGlass, so I don't want to put words in your mouth, but you appear to be conflating the "Turing test" with "Turing machines". They are completely different things. It is unfortunate, I guess, that they are both relevant to this subject, because it is easy to make this mistake. There is no such thing as "Turing machine test" as far as I am aware. The point in this section is whether the man-room system is a Turing-complete system, ie equivalent to a Turing machine. This section has nothing to do with the Turing test, as far as I can tell. The eraser is necessary for the system to be Turing-complete. Whether the man has an eraser is hugely important in some sense at least—if the man did not have an eraser then I don't think any computationalist would dare assert that the system understands anything let alone the Chinese language. :) This also helps us understand how wild and out in the weeds the CRA claim is, since as far as I'm aware we have no reason or even hint to imagine that there is some kind of computer more powerful than a Turing-equivalent machine, yet that is what Searle claims the brain is. To the question at hand, while I'm not aware of Searle ever mentioning Turing machines by name, I had always interpreted the language that he does use—eg "formal symbol manipulation" etc—as referring to "what computers do" ie Turing machines. That is after all the context in which Searle is writing, isn't it? I agree it would be nice to have a source in the article to back this up if there is one, but is there really much question about this? I'm not so sure the Ben-Yami paper is really saying what we're saying it's saying; it doesn't even mention Turing machines, for example. ErikHaugen (talk | contribs) 17:46, 4 December 2012 (UTC)


 * My apologies ErikHaugen. Please read my remarks with the word "machine" deleted.  As far as I can determine, Searle's experiment is unaffected by the details of the situation.  The point at hand is that a computer program can, in theory, be executed by a human being.  LookingGlass (talk) 12:11, 6 December 2012 (UTC)


 * The point is relevant to the "replies" which this article categorizes as "redesigning the room". There are many such criticisms. Searle argues that any argument which requires Searle to change the room is actually a refutation of strong AI, for exactly the reason you state: it should be obvious that anyone can execute any program by hand, even a program which Strong AI claims is "conscious" or "sentient" or whatever. If, for some reason, you can't execute the program by hand and get consciousness, then computation is not sufficient for consciousness, therefore strong AI is false.


 * "Turing completeness" is a way to say this that makes sense to people who are trained in computer science. CharlesGillingham (talk) 06:35, 8 January 2013 (UTC)


 * Many thanks Charles. Searle's argument seems beautifully elegant to me. LookingGlass (talk) 20:56, 8 January 2013 (UTC)
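The "rule book plus eraser" point in this thread can be made concrete with a toy sketch. Below is a minimal Turing-machine interpreter in Python; it is purely illustrative and not drawn from Searle or the article, and all names (`run`, `flip`, the rule-table format) are invented for the example. The transition table plays the role of the rule book, and overwriting tape cells plays the role of the eraser; a person could, in principle, execute the same table by hand, step by step, which is exactly the "execute any program by hand" point made above.

```python
# A minimal Turing-machine interpreter. The "rule book" is a finite
# transition table; the "eraser" is the ability to overwrite tape cells.

def run(rules, tape, state="start", head=0, blank="_", max_steps=10_000):
    """Execute a transition table by rote, as the man in the room would.

    rules maps (state, symbol) -> (new_state, written_symbol, move),
    where move is +1 or -1. Execution stops in the "halt" state.
    """
    tape = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]  # look up, erase, write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example table: flip every bit on the tape. Note it needs the "eraser":
# each cell is read and then rewritten.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run(flip, "0110"))  # -> 1001
```

Without the write step (a fixed, read-only rule book), the machine degenerates into a lookup device, which is the crux of the Ben-Yami point quoted earlier in this thread.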

"The Real Thing"
I removed a reference to "The Real Thing" in the "Strong AI vs. AI research" and replaced it with "human like cognition". The original phrase was intended to refer to cognition, but in the context of the sentence could be easily misconstrued to refer to intelligence. I felt the distinction was worthy of clarification because in Searle's conception, computers do in fact have "real" intelligence under a variety of understandings of the term but lack the capacity for awareness of the use of that intelligence. Jaydubya93 (talk) 13:44, 16 January 2014 (UTC)


 * I agree with you, however, I think "a simulation of human cognition" could be construed as "human-like cognition" (as opposed to, say, Rodney Brooks's "bug-like cognition"). In this reading, Searle's argument presumes that machines with "human-like cognition" are possible, so the sentence as you changed it doesn't quite work either. I changed it to explicitly mention "mind" and "consciousness" (i.e. "awareness", as you said above) so that the issue is clearer. CharlesGillingham (talk) 19:59, 18 January 2014 (UTC)


 * No. Searle's argument does not presume "...that machines with *human-like cognition* are possible"; it presumes that machines with "human-like cognition" WILL BE possible, if the scientific mentality changes. Luizpuodzius (talk) 19:33, 24 December 2014 (UTC)

Odd placement of citations and notes
Why does this article have citations at the beginning of paragraphs instead of at the ends? Is this some WP innovation? Myrvin (talk) 06:46, 6 July 2015 (UTC)


 * The footnotes you're referring to are actually supposed to be attached to the bold title, but changes in formatting over the years have moved them down a line and to the front of the paragraph. I should move them after the first sentence. CharlesGillingham (talk) 04:52, 10 September 2015 (UTC)


 * CharlesGillingham (talk) 05:05, 10 September 2015 (UTC)

Where I agree and disagree with Searle
 Looie496 (talk) 12:35, 22 September 2015 (UTC)

What am I missing?
 Luizpuodzius (talk) 23:23, 23 September 2015 (UTC)

Richard Yee source
(ref 51, 59, 60) Department of Computer Science, University of Massachusetts at Amherst. His paper is published in Lyceum, which seems to be a free online journal published by the Saint Anselm Philosophy Club. Can't find much on him, apart from a few papers presented at workshops ("Machine learning: Proceedings of the Eighth International Workshop (ML91)"), "Abstraction in Control Learning".

Is there any reason to believe this person and his opinion are notable? His argument seems to be superfluous, imo; it doesn't add anything. We know that the instructions (the "rules table") are the program. The room has only one set of instructions: for Chinese. There's no talk about the room doing other languages ("changing the program"). In all aspects it is a general Turing machine doing one specific task. Yet Yee brings up external programs and universal Turing machines, only to say they are a red herring and that "philosophical discussions about "computers" should focus on general Turing computability without the distraction of universal programmability." A distraction he himself introduced.

We're not supposed to interpret sources, but it makes me wonder whether his paper/opinion is really more notable than, for example, a random blog entry or forum post discussing the subject. The point he makes would already be obvious to the reader: the rules table, not the person, is the program. Yet this is once again repeated in the systems reply section with an example, concluding: "Yet, now we know two things: (1) the computation of integer addition is actually occurring in the room and (2) the person is not the primary thing responsible for it." This was first added here. Is it really necessary to explain it at such length? Ssscienccce (talk) 02:43, 7 October 2015 (UTC)


 * I agree. It's the naive "system" reply, with a little formal window dressing.


 * I think he's kind of missed the point about the room being a Turing machine -- the point is, digital computers can never be "more sentient" than the room-with-Searle, they can only be faster. The Chinese room is as sentient as you ever get. CharlesGillingham (talk) 05:26, 14 December 2015 (UTC)
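The "fixed rule table vs. universal programmability" distinction raised in this thread can be sketched in a few lines of Python. This is a toy illustration only; the function names and table entries are invented, and the Chinese phrases are arbitrary stand-ins. A room hard-wired with one rule table corresponds to one specific program, while a "universal" room that accepts the rule table itself as input corresponds to a universal Turing machine accepting a program as input.

```python
# One specific program: the rule table is baked into the room.
def fixed_room(question):
    """A room wired for exactly one conversation table (one program)."""
    rules = {
        "你好吗?": "我很好。",        # "How are you?" -> "I am fine."
        "你会说中文吗?": "当然。",    # "Do you speak Chinese?" -> "Of course."
    }
    return rules.get(question, "请再说一遍。")  # default: "please say it again"

# One level more general: the rule table arrives as input, the way a
# universal Turing machine takes a program as part of its input.
def universal_room(rules, question):
    """A room that will execute whatever rule table it is handed."""
    return rules.get(question, "请再说一遍。")

print(fixed_room("你好吗?"))                  # -> 我很好。
print(universal_room({"hi": "hello"}, "hi"))  # -> hello
```

On this sketch, Yee's point that universal programmability is a "red herring" amounts to saying the two functions behave identically on any one table; the philosophical argument concerns only the computation being carried out, not where the table came from.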

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Chinese room. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Corrected formatting/usage for http://www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.—cyberbot II  Talk to my owner :Online 22:48, 26 May 2016 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Chinese room. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20010221025515/http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html to http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.— InternetArchiveBot  (Report bug) 15:01, 22 November 2016 (UTC)

Regarding the reply section
Why is the replies section not labeled as criticisms instead? I'm curious as it's not something I'm used to seeing on Wikipedia, as many of these "replies" are criticisms/arguments. Wiremash (talk) 11:06, 28 November 2016 (UTC)
 * Because Searle's famous paper in Behavioral and Brain Sciences referred to them as "replies". In this respect he was more or less following the structure of Turing's famous paper Computing Machinery and Intelligence, which proposed the Turing test -- although Turing used the term "objections". Looie496 (talk) 15:54, 28 November 2016 (UTC)

Interesting, seems like a clever choice of words to imply neutrality. Wiremash (talk) 16:07, 29 November 2016 (UTC)

Need to say that some people think that Searle is saying there are limits to how intelligently computers can behave
Similarly, some people also lump Searle in with Dreyfus, Penrose and others who have said that there are limits to what AI can achieve. This also will require some research, because Searle is rarely crystal clear about this. This belongs in a footnote to the section Strong AI vs. AI research. CharlesGillingham (talk) 21:37, 10 February 2011 (UTC)
 * He seems clear enough to me: he doesn't claim that there are limits on computer behavior, only that there are limits on what can be inferred from that behavior. Looie496 (talk) 23:24, 5 April 2011 (UTC)


 * Yes, I think so too, but I have a strong feeling that there are some people who have written entire papers that were motivated by the assumption that Searle was saying that AI would never succeed in creating "human level intelligence". I think these papers are misguided, as I take it you do. Nevertheless, I think they exist, so we might want to mention them. CharlesGillingham (talk) 08:19, 6 April 2011 (UTC)
 * Is this the same as asking if computers can understand, or that there are limits to their understanding? What does it mean to limit intelligence, or intelligent behaviour? Myrvin (talk) 10:16, 6 April 2011 (UTC)
 * There is e.g. this paraphrase of Searle: "Adding a few lines of code cannot give intelligence to an unintelligent system. Therefore, we cannot hope to program a computer to exhibit understanding." Arbib & Hesse, The construction of reality p. 29. Myrvin (talk) 13:19, 6 April 2011 (UTC)


 * I think that, even in this quote, Searle still holds that there is a distinction between "real" intelligence and "simulated" intelligence. He accepts that "simulated" intelligence is possible. So the article always needs to make a clear distinction between intelligent behavior (which Searle thinks is possible) and "real" intelligence and understanding (which he does not think is possible).


 * The article covers this interpretation. The source is Russell and Norvig, the leading AI textbook.


 * What the article doesn't have is a source that disagrees with this interpretation: i.e. a source that thinks that Searle is saying there are limits to how much simulated intelligent behavior that a machine can demonstrate. I don't have this source, but I'm pretty sure it exists somewhere. CharlesGillingham (talk) 17:32, 6 April 2011 (UTC)


 * Oops! I responded thinking that the quote came from Searle. Sorry if that was confusing. Perhaps Arbib & Hesse are the source I was looking for. Do they believe that Searle is saying there are limits to how intelligent a machine can behave? CharlesGillingham (talk) 07:34, 7 April 2011 (UTC)
 * See what you think CG. It's in Google books at: . Myrvin (talk) 08:27, 7 April 2011 (UTC)


 * Reading that quote one more time, I think that A&H do disagree with the article. They say (Searle says) a computer can't "exhibit understanding". Russell and Norvig disagree (I think). They say (Searle says) even if a computer can "exhibit" understanding, this doesn't mean that it actually understands.


 * With this issue, it's really difficult to tell the difference between these two positions from out-of-context quotes. If the writer isn't fully cognizant of the issue, they will tend to write sentences that can be read either way. CharlesGillingham (talk) 19:27, 12 April 2011 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Chinese room. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20121114093932/https://mywebspace.wisc.edu/lshapiro/web/Phil554_files/SEARLE-BDC.HTM to https://mywebspace.wisc.edu/lshapiro/web/Phil554_files/SEARLE-BDC.HTM

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 19:51, 11 January 2018 (UTC)

neural computator
If I could produce a calculator made entirely out of human neurons (and maybe some light emitting cells to form a display), can I then finally prove humans are not intelligent :P ? Such a biological machine would clearly possess only procedural capabilities and have a formal syntactic program. That would finally explain why most people are A) not self-aware and B) not capable.

You people do realize that emergent behavior is not strictly dependent on the material composition of its components, but rather emerges from the complex network of interactions among said components? Essentially the entire discussion is nothing more than a straw man. — Preceding unsigned comment added by 195.26.3.225 (talk) 13:23, 30 March 2016 (UTC)


 * Exactly!
 * Reducing to the absurd in another way, Searle's argument is like extending a neuron's lack of understanding to the whole brain. 213.149.61.141 (talk) 23:48, 27 January 2017 (UTC)


 * But, to respond to Searle, you have to explain exactly how this "emergent" mind "emerges". You rightly point out that there is no contradiction, but Searle's argument is not a reductio-ad-absurdum. The argument is a challenge: what aspect of the system creates a conscious "mind"? Searle says there isn't any. You can't reply with the circular argument that assumes "consciousness" can "emerge" from a system described by program on a piece of paper. CharlesGillingham (talk) 21:58, 18 May 2018 (UTC)


 * We don't know what aspect of the system creates a conscious mind. This is true whether the system we are talking about is some hypothetical Chinese room, or just a normal human brain. I'm not sure how Searle can claim confidently that there isn't anything in the system that is conscious. Sure it's unintuitive that a conscious system made of paper can arise, but individual neurons aren't any more conscious than slips of paper, and collections of neurons nevertheless manage to become a conscious system somehow. What's so special about them? Given that the paper and the neurons are processing the exact same information, why is it so implausible that consciousness can emerge from the paper in the exact same way as it can from the neurons?


 * This problem won't be satisfactorily resolved until we can finally answer the challenge, as you say, but the argument doesn't really get you anywhere. If you don't already believe neurons are special and are the only things that can possibly generate consciousness, the argument won't sway you in the least because the argument relies on that assumption.


 * It is ironic that Searle claims his detractors to be arguing circularly on the basis that their objections to the argument only work if we assume substrate-independent consciousness exists, when his argument also only works if we assume it does not. It's barely even an argument, more a convoluted statement of his own opinion. It's like saying something unsubstantiated, and then when someone asks you to back it up, claiming that they are arguing circularly because their asking you to back up your argument doesn't itself prove you wrong. — Preceding unsigned comment added by Mrperson59 (talk • contribs) 06:06, 14 January 2021 (UTC)