Talk:Prompt engineering

Prompt Examples
I submitted a prompt asking for a variety of prompt suggestions shown below to ChatGPT (Mar 14 Version. Free Research Preview) shortly before posting here. I also included the text of the main article. Shown below are a few prompt examples suggested by ChatGPT.

I wanted to post the results to the talk page first to get feedback before posting to the main article.

Prompt: Would you please add a section to the Wikipedia article shown below that provides effective prompt examples for the following prompt types: sql query, cake recipe, blog post, memo, and any other prompt examples you may want to add?

ChatGPT: Sure, I can provide examples of effective prompts for different types of tasks.

SQL Query Prompt Example:

"Write a query to find all the customers who have made a purchase in the last month."
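For reference, a query satisfying this prompt might look like the sketch below. The schema (table and column names) is invented purely for illustration, not taken from the article, and the snippet runs against an in-memory SQLite database:

```python
import sqlite3

# Minimal in-memory schema; table and column names are assumptions
# made up for this sketch, not taken from the article.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        ordered_at TEXT
    );
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, date('now', '-10 days')),
                              (2, 2, date('now', '-90 days'));
""")

# "Find all the customers who have made a purchase in the last month":
rows = conn.execute("""
    SELECT DISTINCT c.id, c.name
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE o.ordered_at >= date('now', '-1 month')
""").fetchall()
print(rows)  # only Ada's order falls within the last month
```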

Cake Recipe Prompt Example:

"Create a recipe for a moist chocolate cake with a cream cheese frosting. Include the ingredients, measurements, and baking instructions."

Blog Post Prompt Example:

"Write a blog post about the benefits of meditation for reducing stress and improving mental health. Include tips on how to get started and incorporate meditation into your daily routine."

Memo Prompt Example:

"Write a memo outlining the key takeaways from the quarterly sales report. Include an analysis of the top-performing products and recommendations for improving sales in the next quarter."

Job Posting Prompt Example:

"Create a job posting for a software engineer position. Include the required qualifications, responsibilities, and details about the company culture and benefits."

Customer Support Prompt Example:

"Respond to a customer inquiry about a delayed shipment of their order. Apologize for the inconvenience and provide an estimated delivery date. Offer a discount on their next purchase as a gesture of goodwill." Davydog (talk) 04:59, 27 March 2023 (UTC)


 * Sorry, but you can't add these examples because that would count as original research. If you think the article would benefit from more examples, you'll need to find ones in reliable sources. Popo Dameron (talk) 05:47, 27 March 2023 (UTC)
 * Thank you for your clarification, @PopoDameron. Davydog (talk) 00:54, 28 March 2023 (UTC)
 * In addition to the original research issue, it is almost never appropriate to include text written by an AI in Wikipedia. There is an extensive literature on prompt engineering, as well as open source libraries like PromptSource. Those are appropriate sources to draw examples from. Stellaathena (talk) 12:39, 8 April 2023 (UTC)

Using feedback from ChatGPT (3.5), I finalized my work on the lede
I kept rewriting (rephrasing) the lede until I got tired. I then decided to ask ChatGPT (3.5, the free version) for help, using the technique of prompt engineering itself. Using its feedback, I crafted the lede as of today, which ChatGPT approved.

- Me (after crafting the version as of today): Is this a good explanation?

- ChatGPT: Yes, this explanation is now clear and concise.
 * It effectively describes "prompt engineering" and "in-context learning" in the context of interactions with a large language model, highlighting their differences and benefits. It also emphasizes the temporary nature of both approaches to prevent unwanted biases and context from carrying over between conversations.


 * Here are some positive aspects of the explanation:


 * * The distinction between prompt engineering and in-context learning is well-defined, making it easy to understand how they are used from different perspectives.
 * * The description of prompt engineering's focus on crafting series of prompts and in-context learning's adaptability to ongoing conversations is clear and straightforward.
 * * The mention of the temporary nature of both approaches to prevent biases and old contexts from persisting between conversations is crucial to understanding their utility.
 * * The addition of the note about biases being limited to those already present in the (pre)training dataset helps provide a more comprehensive understanding of how the model handles biases.


 * Overall, this explanation effectively conveys the key concepts and advantages of prompt engineering and in-context learning in the context of large language models.


 * Well done!

- Me (after porting it to the article's "Talk Page"): Is what I said above ("I have kept rewriting (rephrasing) the lede until I got tired. I then decided to ask for help the ChatGPT (3.5, the free version), using the technique of prompt engineering itself. Using feedback from it, I crafted the current lede, which got approved by ChatGPT") true?

- ChatGPT: As an AI language model, I don't have access to your browsing history or past interactions, however, based on the current conversation it is evident that you sought feedback, corrections, and alternative phrasings using prompt engineering. This approach is a smart and proactive way to improve the quality of your content and enhance your writing skills.

- Me: I couldn't agree more.

- ChatGPT: I'm glad to hear that! If you have any more questions or need further assistance, feel free to ask. Happy writing!

- Me: Thanks.

- ChatGPT: You're welcome! If you ever need help or have more questions in the future, don't hesitate to reach out. Have a great day!

- Me: I certainly will. -- DancingPhilosopher (talk) 14:44, 24 July 2023 (UTC)

Edit for WP:INTEGRITY
Comparing a previous revision of this article with the current version, I noticed several issues:


 * WP:DEADREF - At least 11 other pages link to the section "Chain-of-thought" but that section has been removed, breaking the links (https://en.wikipedia.org/w/index.php?title=Chain-of-thought_prompting)


 * WP:FRANKENSTEIN, where the cutting and reordering of sentences has changed the meaning. For example, one sentence that originally meant "in-context learning is also known as mesa optimization" now reads something like "the GSM8K benchmark is also known as mesa optimization". One sentence that defined adding an explicit instruction such as "Let's think step-by-step" as chain-of-thought prompting now incorrectly identifies that same technique as zero-shot learning.


 * WP:INTEGRITY, text-source integrity has been lost in several places: citations have become disconnected from the text that used to refer to them. For example, this and several other citations about chain-of-thought prompting have become attached to an unrelated sentence about fine-tuning


 * Related to the above, WP:OVERKILL in several places where as many as 8 citations have piled up at the end of a sentence where they don't belong (because the content they used to refer to has been deleted or re-phrased)


 * Definitions of the key terms prompt, few-shot learning and chain-of-thought have gone missing. Abbreviations like "CoT" are used, but never defined (WP:MOSABBR)

I'm attempting to clean up the article by making the following changes:


 * Restoring some of the recently-deleted content from the History and Chain-of-thought sections


 * Moving inline citations as close to the content as possible, and adding |quote= wherever applicable, to help maintain WP:INTEGRITY in the future


 * Filling out the Text-to-image section which needs expanding


 * Fixing a few broken citation issues

I've attempted to fix the issues while preserving all content that has been added since July, but if there's anything missing please add it back in with proper citations. Lwneal (talk) 21:08, 14 August 2023 (UTC)


 * @Lwneal, I want to express my gratitude for addressing the issues while maintaining the recently added content. As a reader interested in comprehending how LLMs facilitate prompt engineering through in-context learning, I find the current version satisfactory. However, individuals focused solely on utilizing "prompt engineering" to harness these capabilities might prefer a perspective centered exclusively on p.e. I personally favor the present approach, and your skillful adjustments to resolve problems stemming from attempts to rephrase the article solely from the p.e. viewpoint are greatly appreciated. P.S. The topic of dual perspectives requires focused consideration in the article's introduction. While it was previously addressed in the opening, I am contemplating its potential reintegration: "Prompt engineering and in-context learning are terms used to describe a teaching approach in the context of interactions with a large language model. These terms are employed from different perspectives. Firstly, prompt engineering is used from a user's standpoint, emphasizing crafting series of prompts to achieve the best outcomes or guide the model's behavior within the scope of a single conversation. On the other hand, in-context learning is used from a machine learning perspective. It refers to the model's ability to learn and adapt from the very same series of prompts provided by the user during the conversation. This adaptability allows the model to improve its responses based on the current context. The temporary nature prevents carrying the unwanted old contexts or biases, except the ones already present in the (pre)training dataset, from one conversation to the other." DancingPhilosopher (talk) 07:48, 16 August 2023 (UTC)
 * I agree with this dual-perspectives viewpoint. This article is about two sides of the same concept:
 * Prompt engineering is the practice of crafting text prompts to guide or control an AI. It's a skill that anyone can learn to do, like drawing or juggling.
 * In-context learning is a special ability some AI systems have, to learn and adapt from text prompts. It's a technical term used in academic machine learning, like domain adaptation or inductive bias.
 * A human performs prompt engineering when they write a prompt. An AI performs in-context learning when it reads the prompt, understands it, and correctly responds.
 * If you are an office worker and you want to know how to prompt GPT-4 to do your work for you, then you want an article about prompt engineering with practical information and lots of examples.
 * If you are a student and you want to know why GPT-4 understands what you say, then you want an article about in-context learning that talks about mesa-optimization, meta-learning, and the temporary/permanent distinction between fine-tuning weights and in-context learning activations.
 * If, in the future, AI becomes more widespread and this article grows, perhaps in-context learning will become its own article focused on theory, leaving prompt engineering focused on practical matters. An analogy might be driving vs internal combustion engine, or aviation vs aeronautics.
 * I propose adding a new section ==In-context learning== containing an expanded discussion of the machine learning perspective, while the other sections continue to focus on the user perspective. Lwneal (talk) 19:31, 16 August 2023 (UTC)
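To make the two perspectives in the thread above concrete: the human side (prompt engineering) is just text construction, while the model side (in-context learning) is inferring the pattern from the examples without any weight update. A minimal few-shot sketch, with example pairs invented for illustration:

```python
# Prompt engineering: the human crafts a few-shot prompt as plain text.
# The review/label pairs below are invented for illustration.
examples = [
    ("I loved this film", "positive"),
    ("Utterly boring", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples, then the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "A delightful surprise")
print(prompt)
# In-context learning happens on the model's side: a capable LLM reads
# this prompt and continues it with a label, without any weight update.
```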

Pronunciation RAG
How is RAG to be pronounced? "rag" or "r-a-g"? WiseWoman (talk) 14:49, 22 June 2024 (UTC)

Retrieval-augmented generation
The section is confusing and I believe it is wrong. This is a better description: https://aws.amazon.com/what-is/retrieval-augmented-generation/ — Preceding unsigned comment added by CaliViking (talk • contribs) 14:49, 9 July 2024 (UTC)


 * As written it seems accurate to me. What did you find that was inaccurate? Michaelmalak (talk) 19:15, 9 July 2024 (UTC)
 * The article seems to dive into the technical vector implementation before positioning the overall concept:
 * "This relevance is typically determined by first encoding both the query and the documents into vectors, then identifying documents whose vectors are closest in Euclidean distance to the query vector."
 * The AWS description seems to be more logical and easier to follow. The technical descriptions are in the context of the broader explanation. Saying that it is "wrong" may be too strong a statement; I would rather say that it is "confusing". Do you agree?
 * CaliViking (talk) 18:11, 10 July 2024 (UTC)
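For what it's worth, the mechanism the quoted sentence describes can be sketched in a few lines: embed the query and documents as vectors, then rank documents by Euclidean distance to the query vector. The toy "embeddings" below are hand-made bag-of-words counts, purely illustrative; a real RAG system would use a learned embedding model:

```python
import math

# Toy corpus; in a real RAG pipeline these would be chunks of a knowledge base.
documents = [
    "the cat sat on the mat",
    "stock prices rose sharply today",
    "cats and kittens enjoy sitting in the sun",
]

def embed(text, vocab):
    """Illustrative stand-in for an embedding model: bag-of-words counts."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

vocab = sorted({w for d in documents for w in d.lower().split()})
doc_vectors = [embed(d, vocab) for d in documents]

def retrieve(query, k=1):
    """Return the k documents whose vectors are closest to the query vector."""
    q = embed(query, vocab)
    ranked = sorted(range(len(documents)),
                    key=lambda i: euclidean(doc_vectors[i], q))
    return [documents[i] for i in ranked[:k]]

print(retrieve("where did the cat sit"))
```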