User:Nchan103/Stochastic parrot

In machine learning, the term stochastic parrot is a metaphor used to describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process. The term was coined by University of Washington researcher Emily M. Bender in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜", co-authored with Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. There is currently no consensus on whether LLMs are stochastic parrots: in a 2022 survey, 51% of researchers said they believed AI can understand language, while 49% disagreed. The term nonetheless continues to be used to describe such systems.

Origin and definition
According to linguist Ben Zimmer, the word “stochastic” derives from the ancient Greek word “stokhastikos”, meaning “based on guesswork” or “randomly determined.” The word "parrot" refers to the idea that large language models (LLMs) merely repeat words without understanding their meaning.

In their paper, Bender et al. argue that LLMs probabilistically link words and sentences together without considering meaning, and are therefore mere "stochastic parrots."
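The "probabilistic linking" the authors describe can be illustrated, in deliberately simplified form, with a toy bigram model that continues a prompt purely from word co-occurrence counts. Real LLMs use neural networks trained on vastly larger corpora, but the sketch below (with an illustrative corpus and function names, not drawn from the paper) shows how fluent-looking text can be produced with no representation of what the words mean:

<syntaxhighlight lang="python">
import random
from collections import defaultdict

# Toy corpus; a real LLM is a neural network trained on vastly more text, but the
# core move -- predicting the next word from statistics over prior text -- is analogous.
corpus = "the parrot repeats words the parrot hears words without meaning".split()

# Count which words follow which (bigram statistics).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(start, length=6):
    """Stochastically extend a prompt by sampling observed next words."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # random.choice over the multiset samples in proportion to observed frequency
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("the"))
# e.g. "the parrot repeats words without meaning" -- fluent-looking output produced
# with no notion of what any word refers to.
</syntaxhighlight>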

According to Lindholm et al., the analogy highlights two important limitations:


 * LLMs are limited by the data they are trained on and are simply stochastically repeating the contents of their datasets.
 * Because they simply produce outputs based on training data, LLMs do not recognize when they are saying something incorrect or inappropriate.

Lindholm et al. note that, because of poor-quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".

Subsequent usage
The term coined by Bender et al. is a neologism used by AI skeptics to refer to machines' lack of understanding of the meaning of their outputs, and is sometimes interpreted as a "slur against AI." The neologism's use expanded further when Sam Altman, CEO of OpenAI, used the term ironically when he tweeted, "i am a stochastic parrot and so r u." The American Dialect Society subsequently designated the term its 2023 AI-related Word of the Year, beating out the words "ChatGPT" and "LLM."

The phrase is used by some researchers to describe LLMs as pattern matchers that can generate plausible human-like text from their vast amounts of training data. Other researchers argue that LLMs are, in fact, able to understand language, while still others maintain that they lack true understanding and the ability to reason, merely "parroting" in a stochastic fashion.

Debate
Some LLMs, such as ChatGPT, have become capable of interacting with users in convincingly human-like conversations. The development of these new systems has deepened the discussion of the extent to which LLMs are simply “parroting.”

In the mind of a human being, words and language correspond to things one has experienced. For LLMs, words correspond only to other words and to patterns of usage in their training data. Proponents of the idea of stochastic parrots thus conclude that LLMs are incapable of actually understanding language.

The tendency of LLMs to pass off fake information as fact is held up as support. Called hallucinations, these are cases in which an LLM synthesizes information that matches some pattern, but not reality. That LLMs cannot distinguish fact from fiction leads to the claim that they cannot understand language at all. Further, LLMs often fail to decipher complex or ambiguous grammar that relies on understanding the meaning of language. One example, borrowed from Saba, is the following prompt:

''“The wet newspaper that fell down off the table is my favorite newspaper. But now that my favorite newspaper fired the editor I might not like reading it anymore. Can I replace ‘my favorite newspaper’ by ‘the wet newspaper that fell down off the table’ in the second sentence?”''

LLMs respond to this in the affirmative, not understanding that "newspaper" refers to the physical object in the first sentence but to the publishing organization in the second. Based on these failures, some AI professionals conclude that LLMs are no more than stochastic parrots.

However, there is support for the claim that LLMs are more than that. LLMs perform well on many tests of understanding, such as the Super General Language Understanding Evaluation (SuperGLUE) benchmark. Such tests, along with the smoothness of many LLM responses, have led as many as 51% of AI professionals to believe that LLMs can truly understand language given enough data, according to a 2022 survey.

Another technique applied to this question is termed "mechanistic interpretability". The idea is to reverse-engineer a large language model by discovering symbolic algorithms that approximate the inference it performs. One example is Othello-GPT, in which a small transformer is trained to predict legal Othello moves. The trained model turns out to contain a linear representation of the Othello board, and modifying this representation changes the predicted legal moves in the correct way. In another example, a small transformer is trained on Karel programs.

As in the Othello-GPT example, there is a linear representation of Karel program semantics, and modifying the representation changes the output in the correct way. The model also generates correct programs that are, on average, shorter than those in the training set.
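The probing step in such studies can be sketched, in highly simplified form, as fitting a linear map from a model's hidden activations to an interpretable state such as a board square. The sketch below uses synthetic activations rather than an actual Othello-GPT model; all data and variable names are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical, simplified sketch of "linear representation" probing. In the
# Othello-GPT work, `activations` would be hidden states of a trained transformer
# and `labels` the state of a board square; here both are synthetic stand-ins.
rng = np.random.default_rng(0)
hidden_dim, n_samples = 64, 500

true_direction = rng.normal(size=hidden_dim)              # pretend the model encodes one feature linearly
activations = rng.normal(size=(n_samples, hidden_dim))    # stand-in hidden states
labels = (activations @ true_direction > 0).astype(int)   # e.g. "square occupied" vs "empty"

# Fit a linear probe (least-squares classifier) from activations to the feature.
w, *_ = np.linalg.lstsq(activations, 2 * labels - 1, rcond=None)
accuracy = ((activations @ w > 0).astype(int) == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")   # high accuracy suggests a linear representation

# "Modifying the representation": reflecting an activation across the probe's
# decision boundary flips the decoded state, analogous to editing the board.
x = activations[0]
x_edited = x - 2 * (x @ w) / (w @ w) * w
print(bool(x @ w > 0), bool(x_edited @ w > 0))
</syntaxhighlight>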

However, when tests designed to assess language comprehension in humans are used to evaluate LLMs, they sometimes yield false positives caused by spurious correlations within the text data. Models can exhibit shortcut learning, in which a system exploits incidental correlations in the data rather than using human-like understanding. One such experiment tested Google's BERT model using the argument reasoning comprehension task. The model was asked to choose which of two statements was more consistent with a given argument. Below is an example of one of these prompts:

"Argument: Felons should be allowed to vote. A person who stole a car at 17 should not be barred from being a full citizen for life.

Statement A: Grand theft auto is a felony.

Statement B: Grand theft auto is not a felony."

Researchers found that specific words such as “not” cue the model toward the correct answer, allowing near-perfect scores when such words are included but resulting in random selection when they are removed. This problem, together with the known difficulties of defining intelligence, leads some to argue that all benchmarks that find understanding in LLMs are flawed, in that they allow shortcuts to fake understanding.
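The cue-word effect can be illustrated with a hypothetical sketch in which a stand-in "model" has learned only the shortcut of picking the statement that does not contain "not": it scores perfectly while the cue is present and falls to chance once the cue is stripped. The placeholder predictor and the second example below are illustrative inventions, not the original BERT experiment:

<syntaxhighlight lang="python">
# Hypothetical sketch of a cue-word ablation check for shortcut learning.
# `model_predict` stands in for a real classifier such as a fine-tuned BERT.
def model_predict(argument: str, statement_a: str, statement_b: str) -> str:
    """Placeholder 'model' that has only learned the shortcut: pick the statement without 'not'."""
    if " not " in f" {statement_b} ":
        return "A"
    if " not " in f" {statement_a} ":
        return "B"
    return "A"  # no cue word available: effectively a guess

examples = [
    # (argument, statement A, statement B, correct answer)
    ("Felons should be allowed to vote. A person who stole a car at 17 "
     "should not be barred from being a full citizen for life.",
     "Grand theft auto is a felony.",
     "Grand theft auto is not a felony.",
     "A"),
    # Second, invented example in which the correct statement lacks the cue word.
    ("City parks should stay open at night. People who work late have "
     "no other time to exercise.",
     "Exercise at night is not beneficial.",
     "Exercise at night is beneficial.",
     "B"),
]

def accuracy(items, strip_cues=False):
    correct = 0
    for arg, a, b, gold in items:
        if strip_cues:
            # Remove the giveaway word so only genuine reasoning could succeed.
            a, b = a.replace(" not ", " "), b.replace(" not ", " ")
        correct += model_predict(arg, a, b) == gold
    return correct / len(items)

print("with cue words:   ", accuracy(examples))                   # 1.0 via the shortcut
print("cue words removed:", accuracy(examples, strip_cues=True))  # 0.5, i.e. chance on these two items
</syntaxhighlight>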

Without a reliable benchmark, researchers have found it difficult to differentiate stochastic parrots from models capable of understanding. When experimenting with ChatGPT-3, one scientist argued that the model lay somewhere between true human-like understanding and being a stochastic parrot. He found that the model was coherent and informative when asked to predict future events based on the information in the prompt, and that it was frequently able to parse subtextual information from text prompts. However, the model frequently failed when tasked with logic and reasoning, especially when prompts involved spatial awareness. The varying quality of its responses indicates that LLMs may have a form of “understanding” in certain categories of tasks while acting as stochastic parrots in others.