Talk:Turing test

Voight Kampff
does anyone have an objection to having a link to Voight-Kampff machine in the see alsos? It is the test from Blade Runner used to identify replicants. WookMuff 20:59, 9 March 2006 (UTC)


 * I, for one, have no objection. Rick Norwood 21:11, 13 March 2006 (UTC)


 * I also have no objection, and I think it would be interesting to include something like a "references in pop culture" type section. If I recall correctly, didn't an episode of the Simpsons spoof Turing or allude to the test? - IstvanWolf 23:20, 10 May 2006 (UTC)


 * I also have no objection, and would like to add another pop-culture reference. Scott Adams' Dilbert mentioned it in his 16 March 2009 strip: http://www.dilbert.com/2009-03-16. The previous day's strip features the Pointy-Haired Boss sticking his head in a hole in the ground. I do not know if this is close enough to the allusion to the "heads in the sand" objection mentioned in other comments. Mstefaniak (talk) 12:58, 16 March 2009 (UTC)

Almost perfect!
Excellent work by User:Bilby to make this into a great article. The only problem I see with it now has to do with overall structure and consistency. The older sections need to be brought up to the same standard as the sections by Bilby, and some of the older material needs to be tossed or integrated into the newer sections. "Weaknesses of the test" should probably acknowledge in some way that this material has been partially discussed above. "Predictions and tests" should probably be integrated into the "History" section above in some abbreviated form. "Variations on the test" should be brought up to the same standard set by "Versions of the test," and so on. "Practical applications" (IMHO) should be tossed. This kind of work would bring the article up to FA status in no time.

Also, I wonder if User:Bilby would be interested in improving Computing Machinery and Intelligence? It just needs a page or two. 19:01, 25 April 2008 (UTC)


 * Thanks for the kind words. I haven't finished here yet - I was thinking that both strengths and weaknesses warrant further expansion, as you suggest, and that something on whether or not the Turing test constitutes an operational definition of knowledge would be nice. I've been letting it sit for a bit so that other editors could fix my mistakes - which they've been doing. :) I've also recently come across some literature using the Turing test in odd ways outside of AI, so I'm curious as to whether or not it would be applicable. If anyone is interested, the article concerned, Gaming and Simulating EthnoPolitical Conflicts, uses what it describes as a Turing test between actions of people roleplaying and the actions of those involved in the actual events, but I'm not sure where, or even if, it fits in here. - Bilby (talk) 00:27, 26 April 2008 (UTC)


 * I combined all the material on the Loebner prize into one section, and the same for material about how it's "impractical". It could still use some rewriting to make it all flow perfectly. CharlesGillingham (talk) 05:38, 2 November 2008 (UTC)


 * Okay, I've finally finished integrating all the material in the history section with the material in the "weaknesses" section. I think that, with some copyediting and a couple more references, we have a featured article here. CharlesGillingham (talk) 00:02, 22 March 2009 (UTC)

Restricted vs Unrestricted test
The article appears to fail to define the difference between the "restricted" and the "unrestricted" tests (at least I couldn't find it defined.) WilliamKF (talk) 19:37, 11 June 2008 (UTC)

Bot report : Found duplicate references !
In the last revision I edited, I found duplicate named references, i.e. references sharing the same name, but not having the same content. Please check them, as I am not able to fix them automatically :) DumZiBoT (talk) 09:44, 8 August 2008 (UTC)
 * "RussellNorvig3" :
 * since there are easier ways to test their programs: by giving them a task directly, rather than through the roundabout method of first posing a question in a chat room populated with machines and people. Alan Turing never intended his test to be used as a real, day-to-day measure of the intelligence of AI programs. He wanted to provide a clear and understandable example to help us discuss the philosophy of artificial intelligence. under The Imitation Game, where he writes "Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."


 * These footnotes don't need to be combined. CharlesGillingham (talk) 08:36, 18 March 2009 (UTC)

The very first sentence
The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. I think the very first sentence is a bit misleading. The Turing test is not about a machine's ability to demonstrate intelligence but about a machine's ability to think. Thinking is not the same as intelligence. Turing asked "Can machines think?".

This is my first time. It didn't hurt. Kuokkanen (talk) 20:19, 25 August 2008 (UTC)


 * Well, technically, he replaced this question, because he felt the question "can machines think?" was unanswerable. Most people understand the test as being about artificial intelligence, and so I think "intelligence" is appropriate. His new question (to paraphrase) is "can a machine behave intelligently?" CharlesGillingham (talk) 05:41, 2 November 2008 (UTC)

intelligence tests in general
Is there a Wikipedia article that discusses intelligence tests for computers in general? Or even more general, intelligence tests for computers and other possibly intelligent creatures? Currently, intelligence test redirects to an article that, as far as I can tell, discusses tests that only apply to English-speaking humans. --68.0.124.33 (talk) 03:32, 8 January 2009 (UTC)

mention of item
"Turing test" was mentioned today in a Dilbert cartoon. It would be interesting to see if article traffic increases at all. Thanks. --Steve, Sm8900 (talk) 14:46, 16 March 2009 (UTC)


 * whoa, did it ever. take a look at this page. --Steve, Sm8900 (talk) 15:34, 17 March 2009 (UTC)


 * Well why do you think I am here? —Preceding unsigned comment added by 192.229.17.103 (talk) 10:39, 10 February 2010 (UTC)

A new definition of intelligence is needed so I shall provide it and show how it is useful:

Intelligence: acquisition, combination, breakdown and refinement of strategy.

A strategy specifies goals, their desirability and, at what likelihoods to take what actions on what (set of) conditions.

Strategizing is devising a set of strategies.

Devising strategies consists of the sub-processes of creating and assessing conditions for actions, weights of goals, estimates of costs of actions, estimates of effectiveness of actions, finding related strategies, taking strategies apart, comparing strategies, combining known strategies, covering contingencies, and evaluating strategies.

To begin, these definitions, when applied to the Turing test, provide useful results. It is not in the ability to execute or perceive strategy (as with the test of the paper machine) but in the coming up with a strategy that we find intelligence. An interesting test would be the ability to play new games that have not been played before; later, making games can be used to teach strategy to a computer player designed to learn new strategies. Game theory explores the play of arbitrary games. - sm4096@gmail.com — Preceding unsigned comment added by Sm4096-Stas (talk • contribs) 03:17, 28 January 2012 (UTC)

Teaching computers to lie?
There is a line in this article that says teaching computers to lie is widely regarded as a bad idea. But since it's pretty obvious to anybody who knows anything about computers that they can't be 'taught' to do anything, they can only 'be told' to do something, wouldn't that make teaching a computer widely regarded as impossible?

Think of it this way:
 * Asking whether or not a computer can think is like asking whether or not a submarine can swim.
 * Therefore: Computers cannot think
 * Therefore: Computers cannot learn
 * Therefore: Computers cannot be taught
 * Q.E.D.

Am I missing something here? —Preceding unsigned comment added by 167.6.245.98 (talk) 19:38, 17 March 2009 (UTC)
 * Logical proofs cannot begin with an unsupported simile. Dissecting your argument, it is "computers cannot think because computers cannot think," which is no argument at all.
 * Details on the ability of computer programs to be taught, or to learn, are more appropriate to artificial intelligence and related articles. But the short answer is, yes, some programs are taught, and not just programmed. Likewise, certain programs can "learn", when given specific stimuli.  Naturally, looking into the issue in greater detail gets into a mess of semantic and philosophical arguments that are totally beyond the scope of this discussion page.
 * As far as teaching computers to lie being widely regarded as a bad idea; that statement would require quite a bit of support for me to accept it. -Verdatum (talk) 17:59, 21 April 2009 (UTC)
 * On review, I removed the "widely regarded as a bad idea" claim. The reference provided to support the claim was merely a synopsis of a course focused on computer security.  And it is incorrect anyway.  Not requiring the software to lie for compression tests is advantageous because it's yet another aspect of artificial intelligence that would need to be conquered to pass.  Not because lying is a no-no. -Verdatum (talk) 19:27, 21 April 2009 (UTC)

Archive 2
I have archived all the sections on this talk page that referred to sections that no longer exist or to issues that have been resolved. These old comments can be found at Talk:Turing test/Archive 2, or by clicking on the link in the small "archive" box at the top of this page. CharlesGillingham (talk) 01:27, 18 March 2009 (UTC)

Mind paper Metric
Turing's paper only specified "10^9". "Bit" was never used by Turing in his paper. It would have made metrics and questions easier had he. 143.232.210.38 (talk) 23:17, 24 June 2009 (UTC)

Question about a change
The edit comment on this change says "Turing actually didn't say this." However, this was a quote from Turing (1950). (It's at the end of section five, when he's giving his final version of the question.) I think it identified his precise question very well. The difference is only a slight technicality: Turing wants to narrow the discussion to digital machines, rather than machines in general.

This is a subtle difference, but one that has serious philosophical implications. Even John Searle would agree that "machines can have intelligence and consciousness" because, as he writes, "we are machines ourselves." (Searle 1980) However, Searle would not agree that digital machines can have "real" intelligence and consciousness. So Turing's final question is the one we actually want to answer.

I may change this back to the quote, unless there is some objection. CharlesGillingham (talk) 17:58, 19 January 2010 (UTC)


 * You're too polite. I didn't realize it was a quote from Turing, and changed it for that reason, so I'm all for reverting that, and adding the page number to make it clear it is an actual quote (section 5, p. 442). Thanks. Rp (talk) 09:27, 20 January 2010 (UTC)


 * (By the way, I believe you're mistaken about Searle's position (I once did a student paper on that article): his whole point is that answering Turing's replacement question in the affirmative won't tell us anything about whether computers are "really" intelligent (which, if you ask me, is little more than the standard knee-jerk dismissal of behaviorism). Also, I was confused by your use of the term "digital" to refer to Searle's argument that it is not the digital computational process that produces intelligence.) Rp (talk) 09:27, 20 January 2010 (UTC)


 * It is a rejection of behaviorism, certainly, but I think he goes a bit further than that. Searle argues that a digital computer can, at best, produce only a simulation of intelligence, like a simulation of the weather. It may be reproducing the same inputs and outputs, but it isn't using the same process. He thinks you need "actual physico-chemical properties of actual physical brains". In other words, Searle believes the hardware matters. You can't do it by digital simulation, unless you simulate details of the specific chemistry going on inside the brain.


 * Searle's argument only applies to digital machines, that is, Turing complete devices that can simulate any computable function, by virtue of having the right program. Machines that aren't digital, such as the brain, can have intentionality and consciousness, according to Searle. --- CharlesGillingham (talk) 11:12, 20 January 2010 (UTC)


 * Yes, and when asked to explain what it means, Searle merely adds: but surely we all take this for granted! In other words, it is a pretty vacuous statement. Rp (talk) 16:43, 12 February 2010 (UTC)


 * My own problem with this paragraph and the preceding one is a different one and I'm not sure how to resolve it - namely, the difference is glossed over between Turing's actual test (in which a machine replaces a man in the imitation game, in which the objective is to tell which of the two participants is the woman) and how the test is often interpreted (namely, the objective of the game becomes to tell which of the participants is the human being). Rp (talk) 09:27, 20 January 2010 (UTC)


 * The first part is intended to describe Turing's version of the test, so it doesn't draw a distinction between Turing's description of the test and the ones that are later employed. However, in Versions of the Turing Test, the differences are explored. The problem is that there's a huge mess in the literature between the various types. I don't have a hassle with highlighting the distinction earlier, but I saw it more of a matter of "This is what Turing said", followed later with "This is how the test is described now". :) - Bilby (talk) 09:58, 20 January 2010 (UTC)


 * My problem is that the present introduction doesn't clearly describe what Turing said: it doesn't explain the Imitation Game. Perhaps it can be summarized in a single sentence? I've tried but failed. Rp (talk) 16:43, 12 February 2010 (UTC)

Apologies - but: contrary to Turing's original proposal
The text says  Isn't "proposal" too vague a word? If Turing was posing a question, then Searle's answer is not 'contrary' to a question. If Turing was proposing an answer, then it is not a proposal, but an assertion. Do you mean it was a tentative answer? Perhaps a "straw man"? I am confident this has been hashed over before, but I think the result is not "there" yet. IMO ( Martin | talk • contribs 01:58, 14 July 2010 (UTC))
 * I fully agree. The opening paragraph is just as confusing in its use of the term 'proposal'.  It isn't clear what proposal refers to exactly; certainly not the assertion that machines can think, because Turing carefully avoids the question of what it means for someone or something to think.  (Turing's paper is more intelligent than some people think.) Rp (talk) 16:33, 15 July 2010 (UTC)


 * I agree. Turing's exact words are "I propose to consider a question ... " which is, of course, another thing entirely. Let's fix these two sentences. --- CharlesGillingham (talk) 17:43, 15 July 2010 (UTC)


 * Fixed, I think. CharlesGillingham (talk) 17:56, 15 July 2010 (UTC)

Removed material
I removed this, because it lacks both an attribution and a citation, but mostly because it is a vague oversimplification of a complex subject. Critics of this experiment argue that if there's no way to distinguish between a human and a symbol manipulating machine, then Searle's definition of "thinking" verges on the metaphysical with no quantitative value. There probably should be a sentence that indicates that most people think Searle is wrong. However, there are many different arguments against Searle, and the snippet above doesn't really capture the central issues of the debate, at least in my view. CharlesGillingham (talk) 17:56, 15 July 2010 (UTC)

Suzette
Apparently this chat bot has beaten the Turing Test, the article is officially outdated by saying that none have passed the test. —Preceding unsigned comment added by 142.232.133.132 (talk) 14:57, 29 October 2010 (UTC)

Suzette Klimov (talk) 15:49, 27 October 2010 (UTC)
 * I have commented the sentence out with an explanation. Rp (talk) 16:55, 15 November 2010 (UTC)

Most would agree that a "true" Turing test has not been passed (...)
This remark should be removed as soon as possible - it utterly misses the point of the Turing test. 09:07, 19 November 2010 (UTC)
 * I have removed it. Rp (talk) 21:47, 1 January 2011 (UTC)

Seemingly unnecessary section
http://en.wikipedia.org/wiki/Turing_test#The_Alan_Turing_Year.2C_and_Turing100_in_2012

This section really doesn't have anything to do with the Turing test. If anything, this should be moved to Alan Turing, under the section Recognition and tributes. One More Fool (talk) 16:24, 25 January 2011 (UTC)


 * I agree. In fact, much of the material in the second half of the History section reads like a press release --- it seems to be more about the events and not really about the test itself. Most of it should be farmed out to articles on these events. CharlesGillingham (talk) 01:28, 14 February 2011 (UTC)

This Month's Atlantic magazine cover story
is about the Turing test. I'm sure there are some good quotes here. CharlesGillingham (talk) 01:26, 14 February 2011 (UTC)

Cleanup tag
The cleanup tag at the top of the article refers mostly to the WP:CITEs that are used as citations and links. To quote from WP:Citing sources: Embedded links to external websites should not be used as a form of inline citation, because they are highly susceptible to linkrot. Wikipedia allowed this in its early years—for example by adding a link after a sentence, like this, which looks like this. This is no longer recommended. Raw links are not recommended in lieu of properly written out citations, even if placed between ref tags, like this.

Embedded links should never be used to place external links in the body of an article, like this: "Apple, Inc. announced their latest product..."

So please, if you have a minute, change some of these links into "full citations" like the others. Thanks. CharlesGillingham (talk) 20:06, 25 April 2011 (UTC)


 * ✅. With this fixed, and the other recent changes, I think that we can remove the tag. CharlesGillingham (talk) 18:55, 3 May 2011 (UTC)

Two major changes I'd like to see
I have two problems with the article's overall organization. Please let me know if you disagree. I will make the change in a week or so if no one objects. CharlesGillingham (talk) 20:14, 25 April 2011 (UTC)
 * 1) The following sections come too early in the article: Turing Colloquium, 2005 Colloquium, AISB 2008 Symposium, The Alan Turing Year. These are simply not as important to the general reader as other material, and they are not an essential part of the history of the test. I think they are not in the right section. I would pull these four sections and move them down into their own section towards the bottom of the article. (What should the title be?)
 * 2) I think that Strengths and Weaknesses should precede Versions, because I think that these sections are of more interest to the general reader. Versions is more complicated and is for the advanced reader who really wants to get the details right.

My proposal for the structure of the article: Again, let me know if you don't like this idea. CharlesGillingham (talk) 20:29, 25 April 2011 (UTC)
 * 1) History
 * 2) Philosophical background
 * 3) Alan Turing
 * 4) ELIZA and PARRY
 * 5) The Chinese room
 * 6) Loebner Prize
 * 7) Strengths of the test
 * 8) Tractability and simplicity
 * 9) Breadth of subject matter
 * 10) Weaknesses of the test
 * 11) Human intelligence vs intelligence in general
 * 12) Real intelligence vs simulated intelligence
 * 13) Naivete of interrogators and the anthropomorphic fallacy
 * 14) Impracticality and irrelevance: the Turing test and AI research
 * 15) Versions of the test
 * 16) The Imitation Game
 * 17) The standard interpretation
 * 18) Imitation Game vs. Standard Turing Test
 * 19) Should the interrogator know about the computer?
 * 20) Variations of the test
 * 21) etc
 * 22) Predictions
 * 23) Colloquia, Symposia and Conferences
 * 24) Turing Colloquium
 * 25) 2005 Colloquium on Conversational Systems
 * 26) AISB 2008 Symposium on the Turing Test
 * 27) The Alan Turing Year, and Turing100 in 2012
 * I support this proposal except for one detail, that the Imitation Game (or at least a detailed summary of it) should be placed first, earlier even than the History section. IMO the other sections (history, philosophical background, strengths and weaknesses) don't make sense until the reader actually knows HOW the test is supposed to be performed. All the other changes are fine for me. Diego Moya (talk) 11:48, 26 April 2011 (UTC)


 * Do you mean the game involving a man and a woman and an interrogator (i.e., no machines)?
 * @Diego. I've restored the introduction to an earlier version which doesn't mention the "Imitation Game" before it has been defined. I hope this addressed your concern. CharlesGillingham (talk) 18:56, 3 May 2011 (UTC)

✅. I have carried out this reorganization. CharlesGillingham (talk) 18:56, 3 May 2011 (UTC)


 * I think you misunderstood my concern. It was not that the words "Imitation game" were used before defining them but that the History, Strengths and Weaknesses all make more sense after the reader knows the basic mechanics of the test, so it makes sense placing a quick-to-read but detailed description of the test before them. Diego Moya (talk) 12:54, 4 May 2011 (UTC)
 * P.S. I find the version where "Versions of the Turing test" is placed before "Strength" superior, I've changed their positions. Diego Moya (talk) 13:08, 4 May 2011 (UTC)


 * Well, we disagree here. I think the basic mechanics of the Turing Test are described in the first paragraph of the article. (If not, they certainly should be!)


 * I think that the first few paragraphs of the article must be written for an audience that is unfamiliar with the test and Turing's paper. For this reason, I feel pretty strongly that the second paragraph should discuss the Turing Test, rather than bringing up some other as-yet-undefined thing called the "Imitation Game". The distinction between the "Imitation Game" and (the "standard interpretation" of) the Turing Test is a can of worms that the article doesn't need to open until later. It certainly doesn't make sense to me to just, off hand, throw in an undefined term in an otherwise comprehensible paragraph.


 * I also argue that the "Versions" section does not help you to understand "the basic mechanics of the test"; on the contrary, it muddies the water. The sources don't agree and the terminology varies from source to source. To me, it reads like an academic quibble, which is why I moved it down to page five or so, an appropriate depth for readers who are interested in these subtle issues.  CharlesGillingham (talk) 20:33, 4 May 2011 (UTC)


 * So here are my constructive questions:
 * If you feel that the first paragraph doesn't describe "the basic mechanics of the test", would you help me to rewrite it so that it does?
 * Should we remove the quote from Turing in the second paragraph just make the point directly, so that we don't have to mention "Imitation Game"? I don't like throwing in an undefined term off hand.
 * Thoughts? CharlesGillingham (talk) 20:38, 4 May 2011 (UTC)

Confusion
I find the versions of the Turing test section a tad confusing and overall not well written. The images don't help either as it's not clear what they relate to. I know, but I think someone coming to this article with the intent of learning more will end up being confused. It's not clear that there should be two tests run (male/female and computer/female) and it's not clear that both images relate to the same test. And no, I don't want to sign this. —Preceding unsigned comment added by 174.3.136.241 (talk • contribs) 22:26, 23 May 2011


 * I agree, see my thoughts in the previous section. CharlesGillingham (talk) 23:28, 27 October 2011 (UTC)

More confusion
The article text states: "The Turing test is based on the subjective opinion of the interrogator and what constitutes a humanlike response to their question. This assumes that human beings can judge a machine's intelligence by comparing its behaviour with human behaviour."

This misses the point of the Turing test. It assumes no such thing. Only the assumption that it provides a way to assess intelligence does; that assumption can only be made if we can agree what intelligence is, and the whole point of the test is to sidestep that issue. Rp (talk) 18:09, 6 September 2011 (UTC)


 * Technically, you might be right. This is a criticism of this statement: "Any machine that passes the Turing test is intelligent." This is not a statement that Turing actually made, to my knowledge. Nevertheless many people hold this position. These critics are arguing against them. They say, to the contrary, that the Turing Test is not, in fact, a test of intelligence.


 * I suppose we could clarify this a little. That first sentence you quoted is very clunky and should probably be fixed. The second is pretty readable and clear, however, you could argue that it oversimplifies the issues. CharlesGillingham (talk) 07:10, 7 September 2011 (UTC)


 * I made a little fix to the clunky sentence. CharlesGillingham (talk) 07:20, 7 September 2011 (UTC)

The Weaknesses section
The section currently starts with the line, The Turing test can be used as a measure of a machine's ability to think only if one assumes that an interrogator can determine if a machine is thinking by comparing its behaviour with human behaviour. That's a serious misrepresentation of Turing's argument, since he explicitly disavowed that his test was a measure of thinking -- he regarded the term thinking as too vague to be operationalized. It would be more accurate to say that the test was intended as a surrogate for a machine's ability to think. Once this correction is made, the second part of the sentence no longer follows. (I'm not criticizing the recent edit, by the way -- the previous version was even farther off the mark.) Looie496 (talk) 22:45, 27 October 2011 (UTC)


 * Yes, I see your point, but I'm not sure how to fix it. The Turing test is commonly assumed to be a test of intelligence. I actually agree that this is not an assumption that Turing explicitly makes. But it is the assumption that is being criticized.


 * I've written that first paragraph so that it doesn't mention Turing, or Turing's nuanced position. It states the position being criticized, saying (basically) "If you want to use the Turing test as a test for intelligence (or "sentience" and the like) you're going to run into these problems." These are not criticism of Turing, they're criticisms of the position "The Turing test can be used as a measure of intelligence."


 * If you can think of a way to phrase without making it sound like we're equivocating or splitting hairs, feel free. But this paragraph is just intended to summarize what follows, and I like phrasing in terms of the assumption, because it breaks the problem up nicely. CharlesGillingham (talk) 23:14, 27 October 2011 (UTC)


 * I added a paragraph that points out that Turing didn't hold the position being criticized. Does this work for you? CharlesGillingham (talk) 23:22, 27 October 2011 (UTC)

Reference: Robots Pass Musical Turing Test
This cite might fit in well:  ★NealMcB★ (talk) 15:27, 1 August 2013 (UTC)

Cleverbot
Should there be a reference to Cleverbot since it comes pretty close to passing the Turing test? BenW (talk) 09:00, 3 October 2013 (UTC)
 * I just tried it:
 * Me: How do I use this interface?
 * Cleverbot: What do you want to use?
 * Me: I want to know how to get interesting responses from you.
 * Cleverbot: You need lips.
 * It seems to leave something to be desired. Looie496 (talk) 15:43, 3 October 2013 (UTC)

Predictions error
Under predictions it states the following: "...machines with 10 GB of storage would be able to fool 30% of human judges in a five-minute test..." I have just read the original article, where it states "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent." 10^9 is equal to 1 gigabit, or 125 megabytes. Could someone please confirm? — Preceding unsigned comment added by 77.109.122.127 (talk) 19:36, 12 October 2013 (UTC)

I agree with your reading of the cited paper - the Turing paper talks about the number of "states" and says that the "logarithm to the base two of the number of states is usually called the 'storage capacity' of the machine" (p441, my emphasis). This gives storage capacity in bits - which is much closer to 125MB than 10GB! --Drpixie (talk) 02:09, 9 June 2014 (UTC)
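The bit-versus-byte discrepancy discussed above is easy to check with a little arithmetic; a minimal sketch, assuming (as the quoted passage indicates) that Turing's 10^9 figure counts bits:

```python
# Turing (1950, p. 441) defines "storage capacity" as the logarithm to the
# base two of the number of states -- i.e. a count of bits, not bytes.
capacity_bits = 10**9

capacity_bytes = capacity_bits / 8       # 8 bits per byte
capacity_mb = capacity_bytes / 10**6     # decimal megabytes

print(f"{capacity_mb:.0f} MB")           # 125 MB, nowhere near 10 GB
```

On this reading, the article's "10 GB" would overstate Turing's prediction by a factor of about 80.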

Note to Carrite
Every noun can be verbed. Rick Norwood (talk) 20:18, 2 March 2015 (UTC)
 * But it does make for weird language. Rp (talk) 09:09, 4 March 2015 (UTC)

Standardize capitalization of Turing test
Throughout the article, the word "test", when preceded by "Turing", alternates from capitalized to not capitalized multiple times. I think a standard should be established ("Turing test" vs. "Turing Test"), and all instances of the phrase should be changed to reflect the standard.

Pigi5 (talk) 01:20, 27 March 2015 (UTC)

Lack of attribution of this article in a self-published book?
Sorry to use such a possibly inflammatory heading -- this is my first edit of a Talk page.

I noticed that two paragraphs from the section on PARRY are duplicated in a 2014 book called "The Digital Mind" on p. 132 (according to this Google link: https://books.google.com/books?id=K2erBgAAQBAJ&pg=PA132&lpg=PA132&dq=CyberLover%22,+a+malware+program&source=bl&ots=NQh_tdlDvw&sig=H-GzHqw3OmjOJ-etf1mSIgk2jAk&hl=en&sa=X&ei=PWE4VZyUC5fSoATIroGgAg&ved=0CDAQ6AEwAg#v=onepage&q=CyberLover%22%2C%20a%20malware%20program&f=false).

The book appears to be self-published and may not have had the benefit of an editor.

I wasn't sure if this was an issue that is actually an issue or if it has already been addressed, and please forgive me for my newbie's handling of this.

I'm not the author of the book (and am not a machine pretending to not be the author of the book) and found this simply by Googling "CyberLover, a malware program" (without the quotation marks) and selecting the first non-ad hit.

Thanks for your consideration of this.

(Waiting on username to come in....)

205.178.57.16 (talk) 04:01, 23 April 2015 (UTC)


 * That text has been in the Wikipedia article since 2011, so as you say it is much more likely that the book copied from Wikipedia than vice versa. Unfortunately that sort of thing is more and more common nowadays.  Given the low level of interest that self-published books generally draw, it probably isn't worth taking any action. Looie496 (talk) 13:47, 23 April 2015 (UTC)

Confusing sentence
"The fundamental issue with the standard interpretation is that the interrogator cannot differentiate which responder is human, and which is machine."

I *think* what is meant is "The fundamental QUESTION in the standard interpretation is whether or not the interrogator can tell the human from the computer."

But I really am not sure: in any case, as written, it doesn't make much sense. GeneCallahan (talk) 02:13, 29 March 2016 (UTC)

Missing citation
The end of the introduction paragraph is missing a citation where it says "Turing originally suggested that the machine would convince a human 70% of the time after five minutes of conversation."

Additionally, I found it tedious that the citations link to notes, which then bring you to the reference list. It seems to me it should directly link to the reference. Thoughts? ––Madisynkeri (talk) 17:11, 3 October 2016 (UTC)
 * I've cited it with a cite from the body. Regarding your second point, Wikipedia doesn't have a house citation style and WP:CITESTYLE only mentions being consistent. Just as with English variety, I think it's better to stick with the pre-established style and personally find it works well for this article's mix of academic and web sources, but maybe others would agree with you to overhaul the citations. Opencooper (talk) 19:43, 3 October 2016 (UTC)

Inaccuracy in first paragraph
In the first paragraph of this page, the statement is made "If the evaluator cannot reliably tell the machine from the human (Turing originally suggested that the machine would convince a human 30% of the time after five minutes of conversation[3]), the machine is said to have passed the test."

Looking at the source, Turing never actually says this. The mistake is probably a misinterpretation of the following statement from the source: "I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." The 30% mark is never claimed to be a pass criterion, merely a prediction. — Preceding unsigned comment added by Flounder4338 (talk • contribs) 09:27, 22 February 2017 (UTC)
 * The source: http://loebner.net/Prizef/TuringArticle.html
 * You're right. CharlesGillingham (talk) 02:04, 6 May 2017 (UTC)
 * I trimmed the lead because even if fixed for accuracy, the text is not part of an introduction to the topic, and it unnecessarily complicates the lead. Johnuniq (talk) 02:23, 6 May 2017 (UTC)

Inaccuracy Regarding Cleverbot's Performance
When Cleverbot took part in a competition in India, human participants were asked to grade hidden entities, including Cleverbot, out of 100 as to their humanness in conversation. From the results announced Cleverbot achieved an average of over 50%. This is a very different thing to Turing's statement/prediction of convincing over 30% of interrogators that a machine entity is in fact human. From the Indian test there is no evidence at all that even one of the interrogators considered Cleverbot to be more human than any hidden human entities and certainly not 30% of interrogators. In previous/other practical Turing tests, interrogators have often been asked to mark the humanness of what they considered to be a machine entity. The results obtained in the Indian test are by no means surprising in this respect. TexTucker (talk) 06:42, 5 May 2017 (UTC)

Exposing the Underlying Mechanisms of Thought
A new section has been placed on this page concerning a paper by French. The claim is made that certain questions will "unmask the computer in a Turing Test, unless it experiences the world precisely as we do". This statement is included as a weakness of the test. This is a false statement which appears to be an elementary mistake. It is known that the test involves non-human machines that by their nature experience the world in different ways to a human. The whole idea of the test is for such a machine to convince a sufficient number of interrogators, through conversation, that it is nevertheless human. Far from being a weakness of the test, this is essentially what the test is about. I believe the quoted statement above to be utter rubbish - it is not a question of whether or not a computer experiences the world as humans do but rather whether or not it can convince interrogators that it does. Such suggested questions are in fact unlikely to unmask a good computer, although they may well 'unmask' some hidden humans, who will have no idea what the questions are about, thereby causing them to be classified as machines. I therefore propose to remove this section directly - unless someone argues for it to remain within the next few days TexTucker (talk) 07:02, 10 May 2017 (UTC)


 * This section has been removed. It completely missed the whole point of the Turing test TexTucker (talk) 05:24, 13 May 2017 (UTC)
 * I think the above misses the point of French's criticism. French claims to have identified a whole category of questions ("subcognitive" ones) that test not intelligence, but human experience. Therefore, he says, the Turing test isn't really an accurate way to test whether a machine can exhibit intelligence, as it must do much more to pass the test. The problem here, as usual, is with the assumption that the Turing test was designed to accurately assess whether a machine can exhibit intelligence; Turing was careful not to make that assumption, but, as French rightly says, many philosophers do, and it also keeps popping up in this article, as you can see from its edit history. Rp (talk) 16:47, 17 May 2017 (UTC)

A large body of research in cognitive psychology catalogs the unconscious biases and mechanisms of human thinking. (Consider that this year's Nobel Prize in economics went to Richard Thaler who built on the work of cognitive psychologists Daniel Kahneman and Amos Tversky, which revealed irrational tendencies and fallacies in human decision making.) It is worth making the case that cognitive science can distinguish a human from a machine in the Turing Test. French's paper has been cited some 200 times, which shows general interest in this argument. I'll insert a paragraph to re-introduce this topic. — Preceding unsigned comment added by Allawry (talk • contribs) 19:45, 7 November 2017 (UTC)


 * It is a shame that this section was reinserted without a full discussion. I suggest that other editors make their feelings known on this such that this section either remains or is removed. My feeling, as stressed above, is that the French claims with regard to cognitive science questions completely miss the point of the Turing test. In his 1952 paper Turing stated that interrogators "should not be experts in machines". Cognitive science does not ask the questions in a Turing test, a human interrogator does. There is no assumption (as stated by the editor above) that the Turing test was designed to accurately assess whether a machine can exhibit intelligence - indeed I completely agree with the editor that Turing did not make this assumption. Some people may or may not feel so, but that has nothing to do with this point. So the French assumption is that "average" human interrogators should somehow ask "subcognitive" questions. This appears to be well removed from the Turing test and therefore not applicable on this page. But what do other editors think? TexTucker (talk) 17:22, 27 November 2017 (UTC)

Practical Tests
A section referring to the University of Reading (UoR) tests has been removed without discussion. The removal is on the basis of an article in the LA Times which contains numerous inaccuracies: e.g., it describes a machine having a character as inappropriate, when Turing said that it was a good approach, and it claims that transcripts are not available when in fact they are, and are included in papers cited by the original material. The claim is that the tests were a 'sham', which I completely disagree with. The UoR tests were independently verified as practical Turing tests following as closely as possible Turing's statements. The section is appropriate and important for this page and contains many useful citations. I therefore propose to reinstate the section. GillSanderson (talk) 08:09, 4 March 2018 (UTC)


 * I agree that this section should be reinstated. The page certainly needs to refer to the 2014 tests as the section that was included described. The LA Times article was not a good one and, as pointed out, was simply wrong on a number of points, including 'learning' machines. The article certainly did not indicate that the tests were a sham. So reinstate directly is my view DanversCarew (talk) 14:28, 5 March 2018 (UTC)


 * I am in agreement with the above editors. It was not a good move to remove the section in question. Judging by the statement made, the editor that removed the section appears to have a strong POV. Bradka (talk) 19:47, 11 March 2018 (UTC)


 * I would like to note that I have requested a sockpuppet investigation into the first two accounts to edit this thread, Sockpuppet investigations/DanversCarew. Looie496 (talk) 23:48, 11 March 2018 (UTC)
 * I struck the comments from now-indeffed socks. Johnuniq (talk) 04:59, 12 March 2018 (UTC)
 * I have reverted 's most recent edits as spam/promotional/COI. Per Sockpuppet investigations/Bradka, the users Bradka, GillSanderson, and DanversCarew are CU-confirmed to be socks of each other. (As an SPI clerk, looking at their contrib histories more closely than earlier, I would have made a finding that they were connected even without CU confirmation.) Essentially every edit made by each of those three is either promotional of Kevin Warwick or minor edits on other pages running interference. Best, Kevin (aka L235 · t · c) 05:11, 12 March 2018 (UTC)
 * The article still seems WP:UNDUE wrt Kevin Warwick – just Ctrl-F "Warwick", and find the papers that have been cited in the article. Upon a spot-check, they all have been cited few times in academia. I doubt that 14 mentions for Warwick in this article gives due weight to Warwick's research. Best, Kevin (aka L235 · t · c) 05:33, 12 March 2018 (UTC)
 * Warwick does have a substantial number of publications on this topic that are reliable sources by Wikipedia's criteria, so blanket removal of all references would probably not be appropriate. Even so, I have cleaned up a few things in the article that were not appropriately sourced. Looie496 (talk) 11:41, 12 March 2018 (UTC)

"Pace"?
The article includes the sentence "Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, pace Turing's remark, they are not equivalent." That's a use of "pace" that I'm not familiar with so I'm not sure what it is intended to mean. It's been there since 2005. What could we replace it with? Thanks, SchreiberBike | ⌨ 05:12, 22 May 2018 (UTC)


 * I also think this is unclear and should be updated. Yannn  11  17:12, 16 June 2021 (UTC)


 * It's academese. I've added a link. --Florian Blaschke (talk) 23:20, 9 March 2023 (UTC)

Scary
Deep\High pitched 2600:6C5A:27F:4CC:F11D:30E5:C72C:72F (talk) 22:45, 17 January 2022 (UTC).

Google engineers Blaise Agüera y Arcas's and Blake Lemoine's claims about the Google LaMDA chatbot
This has received a fair amount of news coverage in the past several days, I'm not sure if it is significant and relevant or just WP:RECENTISM. I posted a similar mention at Talk:Artificial intelligence but it hasn't generated any conversation. I believe this Washington Post article and this Economist article were the first mainstream mentions of it but it has spread to a number of news sources, some explicitly making the link to the Turing test. This Fortune article indicates the claims were pretty roundly rejected by experts and goes into some detail about the reasons the thinking is flawed. Fortune isn't a great source but it does cite experts. Is this fit for inclusion or superfluous to the article? I notice page views recently shot up by a large amount and I wonder if that's the reason? Makes me inclined to think some mention would be appropriate. —DIYeditor (talk) 18:27, 13 June 2022 (UTC)

India Education Program course assignment
This article was the subject of an educational assignment at College Of Engineering Pune supported by Wikipedia Ambassadors through the India Education Program during the 2011 Q3 term. Further details are available on the course page.

The above message was substituted from by PrimeBOT (talk) on 19:53, 1 February 2023 (UTC)

GPT-4
Not sure if this is noteworthy. Alignment Research Center (ARC) was able to get GPT-4 to bypass a CAPTCHA by hiring a human in an online marketplace. "During the exercise, when the worker questioned if GPT-4 was a robot, the model 'reasoned' internally that it should not reveal its true identity and made up an excuse about having a vision impairment. The human worker then solved the CAPTCHA for GPT-4." OpenAI checked to see whether GPT-4 could take over the world

DAVilla (talk) 08:15, 16 March 2023 (UTC)