Talk:Weak artificial intelligence

Narrow != Weak
Narrow AI may well be what is defined in the article, but the weak/strong AI distinction is VERY different. See https://en.wikipedia.org/wiki/Chinese_room#Strong_AI

Also, Ray Kurzweil and the whole singularity thing is interesting and entertaining, but it is NOT reality, only speculation.

With respect, I call for a return to the version before 149.254.58.203's edits.

DISCUSS.

Samfreed (talk) 14:51, 23 October 2014 (UTC)

Although speculative now, the future is a real thing and it's coming soon. Where the first sentence says "real-world" ("All real-world systems labeled "artificial intelligence" of any sort are weak AI at most"), I suggest it could instead say "present-day world". 173.225.148.173 (talk) 17:30, 29 December 2015 (UTC)


 * I was about to say something similar. Siri is NOT weak AI; it is at best an extremely narrow and brittle attempt at AI. Jeblad (talk) 11:51, 24 November 2019 (UTC)

I've just made lots of edits to improve the article.
Here are some links I didn't use:


 * http://ieet.org/index.php/IEET/more/goertzel20131006
 * http://www.scientificamerican.com/article/ai-captcha-computer-vision/
 * http://www.kurzweilai.net/vicarious-ai-breaks-captcha-turing-test
 * http://blogs.computerworld.com/the_kurzweil_interview_continued_portable_computing_virtual_reality_immortality_and_strong_vs_narrow_ai
 * http://blog.ted.com/2013/11/18/the-automation-age-by-daniel-suarez/
 * http://nextbigfuture.com/2013/07/ben-goertzel-is-on-mission-to-build-ai.html
 * http://fora.tv/2008/08/08/Daniel_Suarez_Daemon_Bot-Mediated_Reality/How_Bots_Control_Our_Lives

— Preceding unsigned comment added by 149.254.181.50 (talk) 21:42, 17 February 2014 (UTC)

— Preceding unsigned comment added by 149.254.58.203 (talk) 17:19, 16 February 2014 (UTC)

Possible confusion with weak methods in AI
The article should mention that this has nothing to do with the concept of using 'weak methods' in AI. This was made popular by Newell in the late 1960s (Newell, A. Heuristic programming: Ill-structured problems. In Aronofsky, J. (Ed.), Progress in Operations Research, III. New York: Wiley, 1969). See also:

weak methods

1. problem-solving techniques based on general principles rather than specific, domain-relevant knowledge. Such methods can be applied to a wide variety of problems but may be inefficient in many cases.

2. in artificial intelligence, programs based on general principles that do not take into account knowledge specific to any particular application or domain. Compare strong methods. — Preceding unsigned comment added by Finin (talk • contribs) 14:26, 7 February 2019 (UTC)
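To make definition 1 concrete (this is my own toy sketch, not from Newell's text): a generic hill climber is a textbook weak method. It applies a domain-general search principle and knows nothing about the problem beyond two caller-supplied callbacks, a neighbor generator and a scoring function.

```python
def hill_climb(initial, neighbors, score, max_steps=1000):
    """Generic hill climbing, a classic 'weak method': it uses no
    domain knowledge beyond the two callbacks it is handed."""
    current = initial
    for _ in range(max_steps):
        candidates = neighbors(current)
        best = max(candidates, key=score, default=current)
        if score(best) <= score(current):
            break  # local optimum reached; no neighbor improves the score
        current = best
    return current

# Toy domain: maximize f(x) = -(x - 7)**2 over the integers.
result = hill_climb(
    initial=0,
    neighbors=lambda x: [x - 1, x + 1],
    score=lambda x: -(x - 7) ** 2,
)  # climbs step by step to x = 7
```

The same `hill_climb` function works unchanged on any problem you can phrase with those two callbacks, which is exactly why such methods are broadly applicable but often inefficient compared to strong, knowledge-rich methods.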

Seeking to improve
I would like to improve this article, and here is what I plan to do: I will add more reliable sources, including ones from the UC Berkeley library. I also plan on explaining more about the differences between narrow and weak AI. Furthermore, I will talk about how weak AI exists in the real world today, and how it can be an issue for our current society. This article does not have a picture with a caption on the right side, so if I can figure out how to do that, I would also like to add one.

Here and at the bottom of the page are the sources I plan on using:

SkyradBear (talk) 04:08, 20 September 2022 (UTC)

Peer Review
I think this is a really good start, especially considering that you are drafting a brand new article. I really like the specific, well-divided sections along with the table of contents. Furthermore, your sources look reliable (maybe add a few more if you can?). Although weak artificial intelligence may be a part of civic tech itself, there is still somewhat of a lack of connection between weak and/or strong AI and civic tech. Also, you could really expand the "impact" section, as that is really the heart of the article. Other than that, which you probably already had in mind to add, it looks good. Lavarball13 (talk) 03:29, 18 October 2022 (UTC)

Wiki Education assignment: Civic Technology
— Assignment last updated by SkyradBear (talk) 21:56, 24 October 2022 (UTC)

Confusing terminology
Here are a few technical aspects of the article that I think could be improved:

1) I find the definition of weak AI ("artificial intelligence that implements a limited part of mind") a bit unclear and controversial. The defining criterion in Searle's definition of weak AI seems to be that it is not conscious (Searle ambiguously uses the word "understanding" but only to refer to the feeling of understanding, cf. Chinese Room#Consciousness). Searle's definition of weak AI is independent of AI capabilities (it is epiphenomenal), whereas the definition of narrow AI is all about capabilities.

2) Since not everyone follows Searle's terminology, the term weak AI is ambiguous and is sometimes used interchangeably with narrow AI. Most of the time, this article ignores Searle's terminology and uses "weak AI" as a synonym of "narrow AI" and "strong AI" as a synonym of "artificial general intelligence". I propose using the more precise terminology instead: "narrow AI" and "artificial general intelligence". (I think the concept of weak AI is ill-defined and the article should have focused more on narrow AI which has a clear and useful meaning. Maybe we could consider also changing the title?)

3) This sentence looks inaccurate to me: "Weak artificial intelligence focuses on mimicking how humans perform basic actions such as remembering things, perceiving things, and solving simple problems." Narrow or weak AIs don't necessarily mimic human data (e.g. game-playing AIs are usually trained only with reinforcement learning, without human data). This sentence looks a bit inaccurate too: "Weak AI is not able to have a mind of its own, and can only imitate physical behaviors that it can observe."

4) As far as I know, assistants like Siri are not so narrow anymore. I'm also not sure that "robots" or "diagnostic doctors" are a good example anymore, because they are now usually getting some advanced language processing abilities. I think a better example of a narrow AI would be game-playing AIs, like AlphaGo, which can only play Go. AI classifiers or anomaly detection algorithms are also good examples.

5) This part is a bit weird: "The reason all of these are weak AI systems, self-driving cars can cause deadly accidents similarly to how humans normally can." I propose replacing it with something like: "Narrow AI systems are sometimes dangerous if unreliable."

What do you think? Thanks for the feedback. Alenoach (talk) 19:55, 29 October 2023 (UTC)
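To illustrate point 4) (my own toy sketch, not something from the article or its sources): an anomaly detector is about as narrow as AI gets. The one below flags values more than a chosen number of standard deviations from the mean; it does exactly this one task and nothing else, which is the defining trait of narrow AI.

```python
def zscore_anomalies(values, threshold=3.0):
    """Toy anomaly detector: flags points more than `threshold`
    standard deviations from the mean of the sample."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical; nothing can be anomalous
    return [v for v in values if abs(v - mean) / std > threshold]

# Five ordinary sensor readings and one obvious outlier.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 55.0]
flagged = zscore_anomalies(readings, threshold=2.0)  # flags only 55.0
```

Unlike a hypothetical general intelligence, this detector cannot be asked to do anything outside its single statistical task, no matter how its input is phrased.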


 * I have implemented 5) and most of 2). For the rest, I will probably wait for feedback. Alenoach (talk) 00:30, 17 November 2023 (UTC)

New image
I doubt the recently added image is encyclopedic enough. It appears to be generic art. WeyerStudentOfAgrippa (talk) 01:16, 25 May 2024 (UTC)