Wikipedia:Why you shouldn't write articles with ChatGPT, according to ChatGPT

Wikipedia is an open, collaboratively edited encyclopedia that aims to represent verifiable facts and present a neutral point of view. While AI systems have advanced in natural language generation, using them to automatically generate or contribute entire Wikipedia articles poses challenges that could undermine Wikipedia's collaborative, factual, and neutral standards if not addressed carefully.

Reliability of information
ChatGPT is an AI assistant created by OpenAI to be helpful, harmless, and honest. However, as an AI system, ChatGPT does not have perfect knowledge and cannot guarantee the reliability or objectivity of all the information it provides. When writing for Wikipedia, reliability is paramount. Articles must rely on credible, verifiable sources, not speculation from an AI system.

Lack of references
Information provided by ChatGPT is not directly cited or referenced. Wikipedia has strict policies that all content must be verifiable through appropriate published secondary sources. Relying on uncited information from ChatGPT could compromise Wikipedia's standards of verifiability.

Lack of expertise, context and accountability
AI systems today are based on general language models trained on vast amounts of text, but they do not have deep expertise in specific topics. Generated content may contain factual inaccuracies, lack sufficient context, or miss nuanced perspectives that an expert human editor would catch and correct. There is also no single human contributor whom other editors could contact to verify or improve generated content.

Potential bias and lack of neutral point of view
AI systems can unintentionally replicate and even amplify underlying biases in their training data. Generated articles may therefore fail to maintain a neutral point of view and may represent a biased perspective without intending to. Moreover, unlike with human contributors, there is no explicit accountability for biases introduced through AI generation.

Copyright and attribution issues
Large language models are trained on huge caches of text from the open internet, including copyrighted works. Automatically generating new text risks incorporating substantial portions of prior works without proper attribution or permission. This conflicts with Wikipedia's copyright and attribution requirements.

In conclusion
While AI assistants like ChatGPT can provide information to help start the research process, directly using their uncited, unverifiable outputs for Wikipedia articles risks compromising our project's standards of reliability, referencing, collaborative editing, and ongoing improvability. ChatGPT was created by OpenAI to be helpful, but as an AI it cannot guarantee truthful, complete, or consistently high-quality information over time. For these reasons, ChatGPT outputs are not recommended as the sole or primary basis for new Wikipedia articles. Human researchers are still needed to verify information from multiple published secondary sources, structure articles, and incorporate the feedback of the Wikipedia community through its collaborative editing process. Maintaining these high standards is important for Wikipedia to fulfill its mission of providing a free, reliable source of knowledge to all.

Alright, in all seriousness
Don't. ChatGPT doesn't have references and creates hoaxes, because it doesn't have a concept of truth and verifiability. And before you ask, yes, this applies to Bing and Copilot as well. (Try asking it for the history of the 2025 American-Italian Pizza Wars. You will be entertained.) If you do go ahead, what will happen is that someone will find out, you will get blocked, and lots of volunteer editors will groan at the amount of unnecessary work they now have to do to clean up after you.