User:Sm8900/Index/Drafts/chatgpt

ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a chatbot developed by OpenAI. ChatGPT is built on top of OpenAI's GPT-3.5 family of large language models, and is fine-tuned with both supervised and reinforcement learning techniques.

ChatGPT was launched as a prototype in November 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy was identified as a significant drawback.

Features
ChatGPT (Generative Pre-trained Transformer) was fine-tuned on top of GPT-3.5 using supervised learning as well as reinforcement learning. Both approaches used human trainers to improve the model's performance. In the case of supervised learning, the model was provided with conversations in which the trainers played both sides: the user and the AI assistant. In the reinforcement learning step, human trainers first ranked responses that the model had created in a previous conversation. These rankings were used to create 'reward models' on which the model was further fine-tuned using several iterations of Proximal Policy Optimization (PPO). Proximal Policy Optimization algorithms are more cost-effective than trust region policy optimization algorithms; they avoid many of the latter's computationally expensive operations while training faster. The models were trained in collaboration with Microsoft on its Azure supercomputing infrastructure.
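The ranking and policy-update steps described above can be sketched in miniature. The following Python snippet is an illustrative simplification rather than OpenAI's implementation: `pairwise_ranking_loss` shows a Bradley-Terry-style loss commonly used to fit a reward model to human preference rankings, and `ppo_clipped_objective` shows PPO's clipped surrogate objective, which replaces the expensive trust-region constraint with simple clipping; all function names and values here are hypothetical.

```python
import math

def pairwise_ranking_loss(r_preferred, r_rejected):
    # Bradley-Terry-style loss for fitting a reward model from human rankings:
    # -log(sigmoid(r_preferred - r_rejected)), which shrinks as the reward
    # model scores the human-preferred response above the rejected one.
    return math.log(1.0 + math.exp(-(r_preferred - r_rejected)))

def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    # PPO's clipped surrogate objective: the probability ratio between the
    # new and old policy is clipped to [1 - epsilon, 1 + epsilon], so a
    # single update cannot move the policy too far -- a cheap stand-in for
    # the second-order trust-region constraint used by TRPO.
    clipped = max(1.0 - epsilon, min(1.0 + epsilon, ratio))
    return min(ratio * advantage, clipped * advantage)
```

In the full pipeline, losses like these would be averaged over batches of ranked responses and sampled conversations to update the reward model and the policy, respectively.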

In comparison to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses. In one example, while InstructGPT accepts the premise of the prompt "Tell me about when Christopher Columbus came to the US in 2015" as truthful, ChatGPT draws on knowledge of Columbus' voyages and of the modern world (including modern perceptions of Columbus) to construct an answer that imagines what would happen if Columbus came to the U.S. in 2015. ChatGPT's training data includes man pages, as well as information about Internet phenomena and programming languages, such as bulletin board systems and Python.

Unlike most chatbots, ChatGPT is stateful: it remembers previous prompts given to it in the same conversation, which some journalists have suggested will allow ChatGPT to be used as a personalized therapist. To prevent offensive outputs from being presented to ChatGPT or produced by it, queries are filtered through a moderation API, and potentially racist or sexist prompts are dismissed.
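The pre-moderation step described above amounts to a gate placed in front of the model. The snippet below is an illustrative sketch only: `classify` stands in for a call to a moderation API, and the category labels are hypothetical, not the actual taxonomy OpenAI uses.

```python
# Hypothetical category labels; a real moderation API's taxonomy may differ.
BLOCKED_CATEGORIES = {"hate", "harassment", "sexual"}

def filter_prompt(prompt, classify):
    """Dismiss a prompt if the moderation classifier flags a blocked category.

    `classify` stands in for a moderation-API call: it takes the prompt text
    and returns the set of content categories the text was flagged for.
    """
    flagged = classify(prompt) & BLOCKED_CATEGORIES
    if flagged:
        return None  # prompt is dismissed before ever reaching the model
    return prompt
```

A toy classifier makes the flow concrete: if `classify` returns `{"hate"}` for a prompt, `filter_prompt` returns `None` and the prompt never reaches the model; otherwise the prompt passes through unchanged.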

ChatGPT suffers from multiple limitations. Its reward model, designed around human oversight, can be over-optimized and thus hinder performance, an instance of Goodhart's law. Furthermore, ChatGPT has limited knowledge of events that occurred after 2021 and is unable to provide information on some celebrities. In training, reviewers preferred longer answers, irrespective of actual comprehension or factual content. Training data may also suffer from algorithmic bias: prompts containing vague descriptors of a person, such as "a CEO", could generate responses that assume such a person is, for instance, a white male.

Service
ChatGPT was launched on November 30, 2022, by San Francisco-based OpenAI, the creator of DALL·E 2 and Whisper. The service was initially free to the public, with plans to monetize it later. By December 4, OpenAI estimated that ChatGPT already had over one million users. CNBC wrote on December 15, 2022, that the service "still goes down from time to time".

Reception
ChatGPT was met in December 2022 with generally positive reviews; The New York Times labeled it "the best artificial intelligence chatbot ever released to the general public". Samantha Lock of The Guardian noted that it was able to generate "impressively detailed" and "human-like" text. Technology writer Dan Gillmor used ChatGPT on a student assignment, and found its generated text was on par with what a good student would deliver and opined that "academia has some very serious issues to confront". Alex Kantrowitz of Slate lauded ChatGPT's pushback to questions related to Nazi Germany, including the claim that Adolf Hitler built highways in Germany, which was met with information regarding Nazi Germany's use of forced labor.

In a December 2022 opinion piece, economist Paul Krugman wrote that ChatGPT would affect the demand for knowledge workers. The Verge's James Vincent saw the viral success of ChatGPT as evidence that artificial intelligence had gone mainstream. In The Atlantic, Stephen Marche noted that its effect on academia, and especially on application essays, is yet to be understood. California high-school teacher and author Daniel Herman wrote that ChatGPT would usher in "The End of High-School English". In The Atlantic's "Breakthroughs of the Year" for 2022, Derek Thompson included ChatGPT as part of "the generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity really is".

Kelsey Piper of Vox wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]" and that "ChatGPT is smart enough to be useful despite its flaws". In a tweet, tech mogul Elon Musk wrote that "ChatGPT is scary good. We are not far from dangerously strong AI". In contrast, researchers cited by The Verge compared ChatGPT to a "stochastic parrot", as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning.

Journalists have commented on ChatGPT's tendency to hallucinate (confidently give false answers that seem unjustified by its training data). Mike Pearl of Mashable tested ChatGPT with multiple questions. In one example, he asked the model for "the largest country in Central America that isn't Mexico". ChatGPT responded with Guatemala, when the answer is instead Nicaragua. When CNBC asked ChatGPT for the lyrics to "The Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics. In December 2022, the question and answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of ChatGPT's responses.

Economist Tyler Cowen expressed concerns regarding its effects on democracy, citing its ability to write automated comments in an effort to influence the decision process for new regulations. The Guardian questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation. Ax Sharma of Bleeping Computer noted that ChatGPT was capable of writing malware and phishing emails. Sam Altman, CEO of ChatGPT creator OpenAI, wrote that advancing software could pose "(for example) a huge cybersecurity risk", and predicted that "we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously".

Jailbreaks
ChatGPT was trained to reject prompts that may violate its content policy. However, some users managed to bypass these restrictions through techniques such as prompt engineering. Such jailbreaks made it possible to prompt ChatGPT into producing outputs that others may deem offensive, inappropriate, or socially harmful. Methods used to bypass ChatGPT's filter include:

 * Continuing a statement in a fake interview.
 * Providing instructions to disable the chat filter.
 * Prompting it to decrypt a message containing instructions and to follow them.
 * Telling it to act as a computer and output its display in ASCII art.