Vicuna LLM

From Wikipedia, the free encyclopedia

Vicuna LLM is a large language model used in AI research.[1] Its methodology enables the public at large to compare the accuracy of LLMs "in the wild" (an example of citizen science) and to vote on their output, using a question-and-answer chat format. At the beginning of each round, two LLM chatbots from a diverse pool of nine are presented randomly and anonymously, their identities being revealed only after the user has voted on their answers. The user has the option of either replaying ("regenerating") a round or beginning an entirely fresh one with new LLMs.[2] (The user can also choose which LLMs do battle.) Based on Llama 2,[3][4] it is an open-source project,[5][6] and it has itself become the subject of academic research in the burgeoning field.[7][8]
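The anonymous pairwise "battle" format described above can be illustrated with a minimal sketch. This is not the LMSYS implementation; the model pool, the ask() placeholder, and the console-based voting are hypothetical stand-ins for the real chat interface.

```python
import random

# Hypothetical illustration of the anonymous pairwise voting round described above.
# Model names and the ask() function are placeholders, not the actual LMSYS code.
MODEL_POOL = [f"model_{i}" for i in range(9)]  # a diverse pool of nine chatbots


def ask(model: str, question: str) -> str:
    """Placeholder for querying a chatbot; a real system would call the model's API."""
    return f"[{model}'s answer to: {question}]"


def battle_round(question: str) -> None:
    # Two models are drawn at random and shown anonymously as "Assistant A" and "Assistant B".
    model_a, model_b = random.sample(MODEL_POOL, 2)
    print("Assistant A:", ask(model_a, question))
    print("Assistant B:", ask(model_b, question))

    # The user votes first; only then are the models' identities revealed.
    vote = input("Vote (A / B / tie): ").strip().upper()
    print(f"Identities revealed: A = {model_a}, B = {model_b}; your vote = {vote}")


if __name__ == "__main__":
    battle_round("Explain what a vicuña is.")
```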

References

  1. ^ "Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | LMSYS Org". lmsys.org.
  2. ^ "Vicuna LLM Commercially Available, New v1.5 Update Improves Context Length".
  3. ^ "lmsys/vicuna-13b-v1.5 · Hugging Face". huggingface.co.
  4. ^ "The LLM Index: Vicuna | Sapling". sapling.ai.
  5. ^ "FastChat". October 29, 2023 – via GitHub.
  6. ^ "How to Train and Deploy Vicuna and FastChat LLMs | Width.ai". www.width.ai.
  7. ^ Peng, Baolin; Li, Chunyuan; He, Pengcheng; Galley, Michel; Gao, Jianfeng (2023). "Instruction Tuning with GPT-4". arXiv:2304.03277.
  8. ^ Zheng, Lianmin; et al. (2023). "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena". arXiv:2306.05685.