Talk:Stochastic parrot

POV
I think that a bit of re-examination may be warranted for this article. I think a major issue is that -- well -- is it true? The article just repeats a bunch of conjectural claims as though they're fact, without any real evidence for or against. To wit: LLMs "ultimately do not truly understand the meaning of the language they are processing" -- well, okay, what does it mean to "understand"? What does it mean to "truly understand"? Do we have any evidence that somebody has empirically tested this claim by coming up with an operationalized definition of "understanding" and then measuring whether or not these models do it? If not, then it is not a real thing; it's a hypothesis.

Compare: "A bloofgorf is a Wikipedia editor who has rough green skin and lives under a bridge and likes to argue on talk pages and is stupid". Wouldn't it be pretty relevant whether or not anyone had ever seen one of these, or taken a picture of it, etc? We would want to see some evidence that these things really existed, or that someone at least had a reason for thinking that they existed besides publishing a paper saying "Wouldn't it be crazy if these were real???". jp×g🗯️ 06:12, 17 January 2024 (UTC)


 * Your tags are both inappropriate. All the AI producers acknowledge that LLMs do not actually understand language. Most of the sources are not the original article, which is cited only twice while there are 13 other sources! Skyerise (talk) 11:26, 17 January 2024 (UTC)
 * Hi, this is an editor from the Chinese Wikipedia. This article is being translated into the Chinese Wikipedia. However, I personally have some difficulty following the Debate section. The opinions look scattered, and the paragraphs are not well connected to one another.
 * For example, two paragraphs discuss ''linear representation''. Take Othello-GPT as an example: I understand the technical side of Othello-GPT, and I think the main point of putting it here is to support the claim that an LLM can have some knowledge of space (the physical world, the 8x8 board). But the sentences in this article lost me when they brought in ''linear representation'' and mechanistic interpretability.
 * Then some paragraphs discuss ''shortcut learning'', which to me diverges a little from the main topic (whether the output is randomly generated or the model really understands things).
 * I really appreciate your efforts in editing this topic. May I ask whether the Debate section could be further divided into subsections with strong and clear opinions? Peach Blossom 20:21, 24 May 2024 (UTC)
 * I added subheadings and clarified the paragraph with Othello-GPT, thanks for reporting these issues. Alenoach (talk) 02:30, 25 May 2024 (UTC)
 * My "understanding" is that the consensus in the field is that an LLM cannot do problem solving, and is therefore not said to "understand". But that gets into the Turing test and the definition of "understanding", and no, I don't have sources saying what the consensus in the field is (though maybe some could be found; otherwise "opinion" in the lead works). --- Avatar317 (talk) 01:08, 18 January 2024 (UTC)
 * I don't think it is correct to say that LLMs cannot do problem solving, or that there is a consensus to that effect. GPT-4 can add two integers of many digits. While this is not particularly an argument for understanding (any more than one claims a calculator understands), it is certainly a solution to a problem. It also would seem to undermine the parroting argument: it is unlikely to have memorized all integer combinations.
 * The article seems to be structured as stochastic parroting vs. understanding, but this seems like a false dichotomy; neither may be true. A calculator has neither, for example, and yet solves problems. 161.29.24.79 (talk) 21:54, 22 May 2024 (UTC)
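
The ''linear representation'' point raised above can be made concrete. The Othello-GPT result is, roughly, that a linear probe (a simple linear classifier trained on a model's hidden activations) can read off the board state, suggesting the model represents it internally. The sketch below is not Othello-GPT itself; it is a toy illustration of what a linear probe is, with synthetic vectors standing in for real model activations (all data and names here are illustrative assumptions).

```python
import numpy as np

# Toy illustration of a linear probe: if some property is linearly
# decodable from a model's hidden activations, a linear classifier
# trained on those activations will predict it accurately.
rng = np.random.default_rng(0)

n, d = 400, 16
activations = rng.standard_normal((n, d))   # stand-in for hidden states
w_true = rng.standard_normal(d)             # property linearly encoded
labels = (activations @ w_true > 0).astype(float)  # e.g. "square occupied?"

# Fit the probe with plain gradient descent on logistic loss.
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(activations @ w)))
    w -= 0.5 * (activations.T @ (p - labels)) / n

accuracy = np.mean((activations @ w > 0) == (labels == 1))
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy is evidence that the property is represented linearly in the activations; near-chance accuracy would suggest it is not.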
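
On the memorization point above, a back-of-the-envelope count shows why pure lookup is implausible. The figures below (digit count, training-corpus scale) are order-of-magnitude assumptions for illustration, not numbers from the article.

```python
# Ordered pairs of 12-digit integers vs. an assumed training-corpus
# scale of ~10^13 tokens (an order-of-magnitude guess).
digits = 12
n_ints = 9 * 10 ** (digits - 1)   # count of 12-digit integers
n_pairs = n_ints ** 2             # ordered addition problems
training_tokens = 10 ** 13        # assumed corpus scale

print(n_pairs)                    # distinct addition problems
print(n_pairs // training_tokens) # problems per training token
```

Even if every training token were a distinct addition example, the space of problems exceeds the corpus by many orders of magnitude, so correct answers on random pairs cannot come from memorization alone.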

Wiki Education assignment: WRIT 340 for Engineers - Spring 2024
— Assignment last updated by 1namesake1 (talk) 23:24, 18 March 2024 (UTC)


 * Education editors are advised not to change the referencing style. Once works are listed in a "Works cited" section and linked using templates (this is called CS1 style referencing), they should never be moved back into ref tags inline in the article. Since the first student editor did this, I have had to revert all the subsequent additions. I may try to fix them later, but it is the responsibility of new editors to educate themselves about referencing styles and rules before editing articles. Skyerise (talk) 12:18, 6 April 2024 (UTC)
 * Thank you for helping us with our contributions; we will do our best to apply what we've learned and not let this happen going forward. 1namesake1 (talk) 16:48, 11 April 2024 (UTC)
 * Great. Thanks! Skyerise (talk) 19:03, 11 April 2024 (UTC)