Talk:AI effect

Sources and ideas
I intended to write this article some time ago, but someone beat me to the punch. I have a large number of sources and ideas at User:CharlesGillingham/AI effect. I don't have time to finish this now. Is anyone else interested? CharlesGillingham (talk) 17:03, 11 June 2009 (UTC)

I dumped some of my research into the article. I didn't attempt to write it, really, just supplied a number of quotes. As I said, I have more at User:CharlesGillingham/AI effect. CharlesGillingham (talk) 18:43, 23 October 2009 (UTC)

With these changes, I don't believe the article needs to be tagged with anything other than the "expansion" tags in the sections. CharlesGillingham (talk) 19:37, 23 October 2009 (UTC)

again and again the same and the same
Someone (not me!) needs to clean this up. I counted at least 4 times that the premise of the article was stated. I mean, come on, have you not got something else to say? If not perhaps this article should be merged with AI.

--thomas — Preceding unsigned comment added by 99.113.160.78 (talk) 05:56, 13 July 2012 (UTC)

Article and getting better quality into the problem
Hi all

Firstly, can I just say that I am hoping we can get a much better-quality article written here...

First of all, there is the problem of the definition of the "AI effect".


 * Artificial_intelligence says (showing the term usage in AI article)
 * Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence, sometimes described as the AI effect.


 * The Artificial Intelligence portal Did You Know section says (showing the main Wikiproject definition)
 * that the tendency to deprecate the usefulness of some human abilities after advances in artificial intelligence and robotics allows embodied agents to master these abilities is called the AI effect?


 * The AAAI organisation says (a major professional body)
 * "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software"
 * (Interesting to see the members of the presidential panel for 2008-2009)


 * Nested Universe says (showing general usage of the term)
 * The A.I. Effect describes a human cognitive bias to discount improvements made in the science of Artificial Intelligence.


 * Transwiki says
 * There exist at least two definitions of what "AI effect" means.


 * First, it is the tendency to deprecate the usefulness of some human abilities after advances in artificial intelligence and robotics allows embodied agents to master these abilities.


 * Second, it is the tendency that the great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software, as people after a while take their existence as granted, and do not consider them to be intelligent.

It seems that we need to consider the main definition we are using in the article here...at the moment we have:-


 * "The AI effect occurs when people discount the intelligent behavior of an artificial intelligence program by arguing that it's not real intelligence."

I do not believe this is correct. It seems not to agree with anything the others are saying, and in fact seems to be half made up - the bit about "arguing it's not real".

I would suggest that we amend to:-


 * "The AI effect occurs when applications of artificial intelligence are used in systems and discounted as intelligent by the users of those systems."
 * Examples of this phenomenon are "bot" characters in videogames, Chess computers and AI algorithms used for the cruise control of cars and for planning by the military and by civil governments.

Chaosdruid (talk) 23:16, 25 February 2010 (UTC)


 * Nice work collecting those definitions. The best definition above is the one which classifies it as a cognitive bias. This is stunningly accurate and I would definitely include that.


 * I agree that the current definition isn't quite right. At the very least, it doesn't read very well. But it does capture the essential important point, made by McCorduck and Brooks in the second paragraph: critics of AI have been redefining intelligence over the last fifty years, effectively moving the goalposts. Once upon a time, people thought that doing logic required intelligence. Now we know it doesn't, because 1956's Logic Theorist wasn't intelligent. Or was it? That's the AI effect. Until people see AI programs like the ones in the movies, anything machines can do isn't intelligent, by definition.


 * In your version I don't think "users of those systems" is quite right. "Critics" would be closer. Also, "used in systems" isn't necessary either. CharlesGillingham (talk) 17:21, 2 March 2010 (UTC)


 * Don't hesitate to WP:Be bold and make any changes you like. CharlesGillingham (talk) 00:20, 28 March 2010 (UTC)

Using logic to define AI effect
It seems to me that the AI effect is the combination of one or more logical fallacies that are more general than this specific problem. Dismissing AI advances seems to entail the "No true Scotsman" or "Moving the goalposts" fallacies, as well as simple anthropocentrism. Mathiusdragoon (talk) 00:12, 16 July 2010 (UTC)
 * As far as I am concerned, I think you have realised what I was saying in the post above. There is too much going on for the definitions we had, and it may be time for a new combination that accurately sums up what it is. Chaosdruid (talk) 11:44, 16 July 2010 (UTC)

The article references "Tesler's Theorem" and links to my CV where I offer this reformulation: Intelligence is whatever machines haven't done yet. That's what I believe I originally said (I think in the late sixties). Note that I was defining "intelligence", not "artificial intelligence". I closed by saying, ''Many people define humanity partly by our allegedly unique intelligence. Whatever a machine—or an animal—can do must (those people say) be something other than intelligence.'' I included animals because I have seen people refuse to acknowledge that gorillas, squirrels, whales and many other animals possess intelligence that overlaps with ours. Larry Tesler (talk) 04:25, 13 October 2010 (UTC)

Conflating goal and means
It appears to me that one of the contributing causes to the AI Effect is the distinction made (by AI critics) between systems that are informed of the goal (which would include trained neural networks) and systems that are explicitly programmed (e.g. chess computers) and thus are unaware of their goals. The main AI problem is of course that very few systems, for very few problems, can reason back from goal to means. (That's not specific to AI though, which is why chess is hard for humans too.) — Preceding unsigned comment added by 2001:980:396E:1:D86C:31F5:6801:F457 (talk) 13:30, 28 October 2013 (UTC)

Article title
The term "AI effect" is not cited, that I can see. Does this phenomenon have an established name? If so, can it be cited please? And if not, can a more informative title be found for the article? — Cheers, Steelpillow (Talk)

this is not a science article yet
You don't give precise examples of how algorithms, when they are not understood, are thought of as "maybe able to simulate anything, maybe even intelligence (*)", but once they are studied enough become just some recursive convex-optimization statistical-gradient approach to some simple mathematical cost function (able to solve only a small subset of problems).

(*) recurrent neural network: "how to properly train and learn algorithms to properly train and learn algorithms to ... (recursive/unsolvable learning algorithm) ... train neural networks" — Preceding unsigned comment added by 78.227.78.135 (talk) 01:34, 19 December 2015 (UTC)
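The "gradient approach to a simple cost function" described above can be made concrete with a minimal sketch (this is an illustration I am supplying, not something from the comment: the cost function and learning rate are arbitrary choices):

```python
# Minimal sketch: once "demystified", many learning algorithms reduce
# to gradient descent on a simple convex cost function.

def cost(w):
    # A simple convex cost, minimized at w = 3.0.
    return (w - 3.0) ** 2

def gradient(w):
    # Analytic derivative of the cost above.
    return 2.0 * (w - 3.0)

def gradient_descent(w=0.0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to shrink the cost.
    for _ in range(steps):
        w -= lr * gradient(w)
    return w

print(round(gradient_descent(), 3))  # converges toward 3.0
```

The point being illustrated: viewed this way, "learning" is just iterative numerical optimization, which is exactly the deflationary reading the comment describes.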

External links modified
Hello fellow Wikipedians,

I have just modified one external link on AI effect. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20091213020900/http://www.aaai.org:80/aitopics/pmwiki/pmwiki.php/AITopics/AIEffect to http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/AIEffect

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.— InternetArchiveBot  (Report bug) 04:25, 1 October 2016 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on AI effect. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added tag to http://www.cs.uiuc.edu/~eyal/papers/ai-complete-commonsense07.pdf
 * Added archive https://web.archive.org/web/20090628081048/http://www.kurzweilai.net/articles/art0100.html?printable=1 to http://www.kurzweilai.net/articles/art0100.html?printable=1
 * Corrected formatting/usage for http://www.aaai.org/aitopics/pmwiki/pmwiki.php/AITopics/AIEffect

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 00:44, 24 June 2017 (UTC)


 * Added tag to https://engineerhacks.com/ai-bot-might-conduct-your-next-job-interview/  — Preceding unsigned comment added by Engineer hacks (talk • contribs) 17:56, 29 January 2023 (UTC)

Is this still accurate in 2018?
AI is everywhere and I think people are starting to recognize that. I don't have any citation-worthy sources to cast doubt onto this, but AI seems to be a lot more heralded and recognized and cheered on as such in the market today. --Daviddwd (talk) 05:16, 17 September 2018 (UTC)


 * But much of what is called AI is more AS (artificial stupidity). 伟思礼 (talk) 15:31, 27 June 2023 (UTC)

Obvious bias
First, sorry if I make some mistakes; I am not a native English speaker. Reading the article and the comments, I found that almost all the contributions are biased, maybe because the AI effect itself is a biased concept. Even if some people do behave as the article says, one of the main claims of the critics of current forms of AI is that the way it works is not intelligence in the same terms in which we talk about intelligence in human beings. The basic idea is that, while AI is based on statistics, formal logic, the explicit specification of all elements, and short-term problem solving, human intelligence (and the human mind in general) has a big part of intuition, associations and other aspects (including a lot of things that we do without even thinking about them) that AI doesn't have. So the point is not that AI is doing amazing things, but that this is not how the human mind works. The point is not that "AI is anything that has not been done yet" but that what we call AI is not very related to what we call intelligence. I think that some (not so) recent paradigms of psychology and theories of human consciousness, which were developed in order to understand computers but then got subverted, are making us confuse terms. I think the article could be enriched by some problematization of the concept; if I could find extra hours in the day I would do it, but for now I hope this comment leads someone with more time to a search that expands the article with other elements (and maybe also the main AI article, which also has a little bias). — Preceding unsigned comment added by 2800:A4:C3A:D700:85E:BC2:45E4:1CCF (talk) 13:22, 18 September 2018 (UTC)

LinuxDude (talk) 21:14, 7 December 2018 (UTC) I agree this article is biased. The argument 'The machine isn't thinking!' is a valid counterargument. If someone asserts that an AI is thinking, it's on the person making that assertion to prove it true. Luminaries such as Sir Roger Penrose have written books on why machines can't think. So to assert that the denial of machine thinking is some type of "effect" is circular logic, presupposing one side correct and the other incorrect.


 * Agreed. This article is definitely not "neutral point of view." 伟思礼 (talk) 15:33, 27 June 2023 (UTC)

4 May 2023 This article presents as an objective phenomenon what is actually an opinion. The view that what falls under the broad umbrella of "AI" deserves to be considered a kind of "real" intelligence, maybe even with some reverence, is a subjective position on a philosophical question. Common within the AI community, to be sure, but not without controversy, and an objective, unbiased article should address counterpoints raised by peers of the individuals currently cited. It should also be noted that the "Legacy of the AI winter" section is now dated in only presenting the view that the term "AI" is bad for funding—over the past two years or so that seems to have completely flipped, and "AI" has become a major marketing buzzword. In fact, the economic structures and incentives involved in current AI development are a factor in skepticism toward the field, which I think this article should also address.

Unsigned by User:2601:8c:981:4a0:14c0:8cae:678e:c53b at 15:21, 4 May 2023 & 15:22, 4 May 2023

Other countries needed
We cover only one country's perspective here. That may be appropriate because this may be solely a localised phenomenon. Certainly I have only found sources which discuss the very same country that other editors have. In that case we should tell the reader that. Invasive Spices (talk) 16 January 2023 (UTC)


 * What do you mean? This topic isn't related to countries... — Omegatron (talk) 15:28, 7 March 2023 (UTC)
 * Hello Yes. My comment above and the template Special:Diff/1143412987 explained this. Invasive Spices (talk) 21:39, 18 March 2023 (UTC)
 * I can't speak for non-western countries, but as a European, I think the AI effect applies here as well (or even more). The article looks ok to me overall, but some claims in the section "The future" need to be softened a bit. Alenoach (talk) 01:25, 15 December 2023 (UTC)

Hatnote
Hello. I did anticipate that this hatnote would be controversial. It has citations, and I have never seen a hatnote with citations. Would this be acceptable if I removed them and created a section which explains this and has the citations? Invasive Spices (talk) 18 January 2023 (UTC)
 * Hello if this material is suitable for the body of the article then what you suggest sounds good to me. Regards, Shhhnotsoloud (talk) 20:54, 19 January 2023 (UTC)
 * Perhaps this is better. Invasive Spices (talk) 19 January 2023 (UTC)

I don't think this is true anymore
It seems to me like the technology has now reached the point where this "AI effect" no longer holds true. I realize that the article shouldn't be changed to reflect that unless some sources can be cited though.  flarn 2006  [u t c] time: 04:00, 5 February 2023 (UTC)


 * Still true: https://twitter.com/smdiehl/status/1627476629441376257?lang=en — Omegatron (talk) 15:27, 7 March 2023 (UTC)


 * Thanks!  flarn 2006  [u t c] time: 17:29, 19 March 2023 (UTC)
 * It still holds true. For example, people discount GPTs as not true intelligence because they are "just predicting the next word." DontLikeRedesign (talk) 14:24, 26 April 2023 (UTC)
 * It has been my observation that it is far more true now than it has ever been. I routinely see articles in which people explain why LLMs are not "real" AI (whatever that means) and we shouldn't use that term when describing them. However I will say that the tweet above is not necessarily an example - the tweet does not mention the term AI/artificial intelligence at all. It is possible to claim that LLMs are AI while also saying they are not sentient etc. --Cyllel (talk) 13:24, 1 May 2023 (UTC)
 * The Turing test was proposed as a way to “judge” AI, but ELIZA (1966) allegedly passed. I encountered ELIZA in the 1980s, and it was obvious to me that it was merely rewording my input.  I recently tried out ChatGPT.  In its three-paragraph response to my one-paragraph input, the first paragraph was sufficient to see that it was off on a topic completely unrelated to my input.  (And the others continued on that topic).  Most humans would not have done that, though a few might.  But the Turing failure would be clinched by the third paragraph being literally verbatim (letter by letter) identical to the first.  But whether ChatGPT and its competitors are AI is not as important as the fact that they successfully emulate two unfortunate human traits: how easily they say things that aren’t true, and their tendency to tell you what you want to hear. 伟思礼 (talk) 15:51, 27 June 2023 (UTC)

Wiki Education assignment: Research Process and Methodology - SU23 - Sect 200 - Thu
— Assignment last updated by NoemieCY (talk) 22:51, 30 June 2023 (UTC)