Dead Internet theory

The dead Internet theory is an online conspiracy theory asserting that the Internet now consists mainly of bot activity and automatically generated content, curated by algorithms to manipulate the population and crowd out organic human activity. Proponents believe these bots were created deliberately to manipulate algorithms and boost search results in order to influence consumers. Some proponents accuse government agencies of using bots to shape public perception. The date given for this "death" is generally around 2016 or 2017.

The dead Internet theory has gained traction because many of the observed phenomena, such as increased bot traffic, are quantifiable, but the literature does not support the full theory. Caroline Busta, founder of the media platform New Models, was quoted in an article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", while acknowledging legitimate criticisms involving bot traffic and the integrity of the Internet; she said she agrees with its "overarching idea". In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta. The term is also sometimes used, without reference to the full theory, for the observable increase in content generated by large language models (LLMs) such as ChatGPT appearing in popular Internet spaces.

Origins and development
The dead Internet theory's exact origin is difficult to pinpoint. In 2021, a thread titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the forum Agora Road's Macintosh Cafe by a user named "IlluminatiPirate", claiming to build on posts from 4chan's paranormal board and Wizardchan, and marking the term's spread beyond those initial imageboards. The theory gained traction in discussions among technology enthusiasts, researchers, and futurists exploring the risks of society's reliance on the Internet. It entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels. It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago", which has been widely cited by other articles on the topic.

In recent years, the term has been used to describe the phenomenon of bot-generated content displacing human-generated content, without discussion of the full theory.

Claims
The dead Internet theory has two main components: that bots have displaced human activity on the Internet, and that actors are employing these bots to manipulate the human population. The first part, that bots create much of the content on the Internet and perhaps contribute more than organic human content, has been a concern for some time; the original post by "IlluminatiPirate" cites the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine. The second half builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally employing these bots to manipulate the human population for a variety of reasons. In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory focusing on the United States government being responsible, stating: "The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population."

Large language models
Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content. The first of these to be well known was developed by OpenAI. These models have created significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies has said, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable". He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030. These predictions have been used as evidence for the dead Internet theory. In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people." In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content and that it could displace more valuable human-made alternatives.

ChatGPT
ChatGPT is an AI chatbot whose 2022 release to the general public led journalists to call the dead Internet theory potentially more realistic than before. Before ChatGPT's release, the theory mostly emphasized government organizations, corporations, and tech-literate individuals; ChatGPT gives the average Internet user access to large language models. This raised concern that the Internet would become filled with AI-created content that would drown out organic human content.

2016 Imperva bot traffic report
In 2016, the security firm Imperva released a report on bot traffic and found that bots were responsible for 52% of web traffic. This report has been used as evidence in reports on the dead Internet theory.

Facebook
In 2024, AI-generated images on Facebook began going viral. Subjects of these images included various iterations of Jesus "meshed in various forms" with shrimp, flight attendants, and black children next to artwork they supposedly created. This has been cited as an example of why the Internet feels "dead."

Facebook includes an option to provide AI-generated responses to group posts. Such responses appear if a user explicitly tags @MetaAI in a post, or if the post includes a question and no other users have responded to it within an hour.
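The triggering rule described above can be sketched as a simple predicate. This is an illustrative simplification only; the function name, parameters, and the question-mark heuristic are assumptions, not Meta's actual implementation:

```python
from datetime import datetime, timedelta

def should_generate_ai_reply(post_text: str, posted_at: datetime,
                             reply_count: int, now: datetime) -> bool:
    """Hypothetical sketch of when an AI-generated reply is triggered:
    either the post explicitly tags @MetaAI, or it appears to be a
    question that no other user has answered within an hour."""
    explicitly_tagged = "@MetaAI" in post_text
    unanswered_question = (
        "?" in post_text              # crude stand-in for question detection
        and reply_count == 0
        and now - posted_at >= timedelta(hours=1)
    )
    return explicitly_tagged or unanswered_question
```

In practice the platform's question detection is presumably far more sophisticated than checking for a question mark; the sketch only captures the two triggers described above.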

Reddit
In the past, the social media site Reddit allowed free access to its API and data, which allowed users to employ third-party moderation apps and to train AI on human interaction. Controversially, Reddit moved to charge for access to its user dataset, though companies training AI models will likely continue to use this data. As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts. Professor Toby Walsh of the University of New South Wales told Business Insider that training the next generation of AI on content created by previous generations could cause the resulting content to degrade in quality. University of South Florida professor John Licato compared this flood of AI-generated web content on Reddit to the dead Internet theory.

"I hate texting" tweets
Several Twitter accounts began posting tweets that open with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand" or "i hate texting just come live with me". These posts received tens of thousands of likes, and many suspected the accounts were bots. Proponents of the dead Internet theory have cited these accounts as an example.

Elon Musk's acquisition of Twitter
The proportion of Twitter accounts run by bots became a major issue during Elon Musk's acquisition of the company. Musk disputed Twitter's claim that fewer than 5% of their monetizable daily active users (mDAU) were bots. Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots, with one study estimating 13.7% and another estimating 11%. CounterAction, another firm commissioned by Musk, estimated 5.3% of accounts were bots. Some bot accounts provide services, such as one noted bot that can provide stock prices when asked, while others troll, spread misinformation, or try to scam users. Believers in the dead Internet theory have pointed to this incident as evidence.

TikTok
In 2024, TikTok began discussing offering the use of virtual influencers to advertising agencies. In a 2024 article in Fast Company, Michael Grothaus linked this and other AI-generated content on social media to the dead Internet theory, referring to the content as "AI-slime."

YouTube "The Inversion"
There is a market online for fake YouTube views to boost a video's credibility and reach broader audiences. At one point, fake views were so prevalent that some engineers worried YouTube's detection algorithm would begin to treat the fake views as the default and start misclassifying real ones; YouTube engineers coined the term "the Inversion" for this scenario. YouTube bots and the fear of "the Inversion" were cited as support for the dead Internet theory in a thread on the internet forum Agora Road's Macintosh Cafe.
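The engineers' fear can be illustrated with a toy anomaly detector (purely hypothetical, not YouTube's actual system): a detector that learns "normal" viewing behavior from the bulk of traffic will, once fake views become the majority, learn the fakes as its baseline and begin flagging real viewers instead.

```python
from statistics import median

def flag_anomalies(watch_times, threshold=0.5):
    """Flag sessions whose watch time deviates from the learned
    baseline (the median of all observed traffic) by more than
    `threshold` as a fraction of that baseline. Toy model only."""
    baseline = median(watch_times)
    return [abs(t - baseline) / baseline > threshold for t in watch_times]

# Real viewers watch roughly 80-120 seconds; fake views all last 5 seconds.
real_views = [80, 95, 110, 120]
mostly_real = real_views + [5, 5]        # fakes in the minority
mostly_fake = real_views + [5] * 10      # fakes dominate ("the Inversion")
```

With real viewers in the majority, the median baseline tracks real behavior and the 5-second fake sessions are flagged; once the fakes dominate, the baseline collapses to their watch time and the real viewers are the ones flagged instead.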

Discussion on Twitter
The dead Internet theory has been discussed among users of the social media platform Twitter, who have noted that bot activity has affected their experience.

Coverage on YouTube
Numerous YouTube channels and online communities, including the Linus Tech Tips forums, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse.