Draft:Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is a method for enhancing language models with information unseen during training, so that they can perform tasks such as answering questions using that added information. In its original 2020 form, the weights of the neural network in a RAG system do not change. This contrasts with other language model enhancement methods, such as fine-tuning, in which the weights are updated. In RAG, a body of new information is vectorized, and selected portions are retrieved when the model needs to generate a response.
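The vectorize-retrieve-generate flow can be sketched as follows. This is a minimal illustration, not a production system: the bag-of-words `embed` function and the toy corpus are stand-ins for a learned dense encoder and a real document store.

```python
import numpy as np

# Toy "embedding": bag-of-words counts over a fixed vocabulary.
# A real RAG system would use a learned dense encoder instead.
VOCAB = ["paris", "capital", "france", "cheese", "brie", "river"]

def embed(text):
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

# 1. Vectorize the body of new information once, up front.
corpus = [
    "paris is the capital of france",
    "brie is a soft french cheese",
]
index = np.stack([embed(doc) for doc in corpus])

# 2. At query time, retrieve the most similar chunk by dot product.
query = "what is the capital of france"
scores = index @ embed(query)
best = corpus[int(np.argmax(scores))]

# 3. The retrieved text is prepended to the prompt for the generator;
#    the generator's weights themselves are never updated.
prompt = f"Context: {best}\nQuestion: {query}"
print(best)
```

The key property of the original scheme is visible in step 3: new knowledge enters only through the prompt, never through weight updates.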

RAG addresses two problems: information staleness and factual accuracy (sometimes discussed in terms of grounding, or hallucinations).

Techniques
Improvements to the response can be applied at different stages of the RAG flow.

Encoder
These methods center on encoding text as either dense or sparse vectors. Sparse vectors, which encode the identity of a word, are typically dictionary-length and contain mostly zeros. Dense vectors, which aim to encode meaning, are much smaller and contain far fewer zeros. Several enhancements can be made to the way similarities are calculated in the vector stores (databases). Performance can be improved with faster dot products, approximate nearest neighbors, or centroid searches. Accuracy can be improved with late interactions. Hybrid vectors combine dense vector representations with sparse one-hot vectors, so that the faster sparse dot products can be used rather than dense ones. Other methods combine sparse techniques (BM25, SPLADE) with dense ones such as DRAGON.
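The contrast between the two encodings, and a hybrid score blending them, can be sketched as below. The "dense" encoder here is only a fixed random projection used for illustration; real systems use a trained neural encoder, and the `alpha` weighting is an assumed tuning knob.

```python
import numpy as np

# Sparse vector: one slot per vocabulary word, almost all zeros.
vocab = ["the", "cat", "sat", "mat", "dog", "ran", "park", "tree"]

def sparse(text):
    words = set(text.lower().split())
    return np.array([1.0 if w in words else 0.0 for w in vocab])

# "Dense" vector: a small embedding. Here it is faked with a fixed
# random projection of the sparse vector, purely for illustration.
rng = np.random.default_rng(0)
proj = rng.normal(size=(len(vocab), 3))

def dense(text):
    v = sparse(text) @ proj
    return v / np.linalg.norm(v)

def hybrid_score(query, doc, alpha=0.5):
    # Blend the cheap sparse dot product with the dense similarity.
    return alpha * (sparse(query) @ sparse(doc)) + (1 - alpha) * (dense(query) @ dense(doc))

on_topic  = hybrid_score("the cat sat", "the cat sat on the mat")
off_topic = hybrid_score("the cat sat", "dog ran park")
print(on_topic, off_topic)
```

The sparse term rewards exact word overlap, while the dense term can credit documents that share meaning without sharing vocabulary.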

Retriever-centric methods
Retrievers can be improved in several ways:

* Pre-train the retriever using the Inverse Cloze Task.
* Use progressive data augmentation. The DRAGON method samples difficult negatives to train a dense-vector retriever.
* Under supervision, train the retriever for a given generator. Given a prompt and the desired answer, retrieve the top-k vectors and feed those vectors into the generator to obtain a perplexity score for the correct answer. Then minimize the KL-divergence between the retrieved vectors' probabilities and the LM likelihoods to adjust the retriever.
* Use reranking to train the retriever.
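The supervised KL-divergence signal described above can be sketched numerically. The score and log-likelihood values below are made-up placeholders; in practice they come from the retriever and the frozen generator.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical retriever scores for the top-k retrieved chunks, and the
# generator's log-likelihood of the correct answer when conditioned on
# each chunk (higher means the chunk helped the generator more).
retriever_scores = np.array([2.0, 1.0, 0.5])
answer_loglik    = np.array([0.2, 1.5, 0.1])

p_retriever = softmax(retriever_scores)  # what the retriever prefers
p_generator = softmax(answer_loglik)     # what actually helps the LM

# Training signal: KL(p_generator || p_retriever). Minimizing it pushes
# the retriever toward chunks that lower the generator's perplexity.
kl = np.sum(p_generator * np.log(p_generator / p_retriever))
print(kl)
```

Here the retriever ranks the first chunk highest, but the generator benefits most from the second, so the KL term is positive and gradients on the retriever scores would shift probability mass toward chunk two.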

Language model
By redesigning the language model with the retriever in mind, a network 25 times smaller can achieve perplexity comparable to that of its much larger counterparts. Because it is trained from scratch, this method (Retro) incurs the heavy cost of training runs that the original RAG scheme avoided. The hypothesis is that, by being given domain knowledge during training, Retro needs to devote less capacity to domain facts and can spend its smaller weight budget on language semantics. The redesigned language model is shown here.

It has been reported that Retro is not reproducible, so modifications were made to make it so. The more reproducible version is called Retro++ and includes in-context RAG.

Chunking
Converting domain data into vectors should be done thoughtfully. It is naive to convert an entire document into a single vector and expect the retriever to find details in that document in response to a query. There are various strategies for breaking up the data; this is called chunking.

* Fixed length with overlap. This is fast and easy, and overlapping consecutive chunks helps maintain semantic context across chunks.
* Syntax-based chunking can break a document up by sentences. Libraries such as spaCy or NLTK can help with this.
* File-format-based chunking. Certain file types have natural chunks built in, and it is best to respect them. For example, code files are best chunked and vectorized as whole functions or classes. HTML files should keep elements such as tables or base64-encoded images intact. Similar considerations apply to PDF files. Libraries such as Unstructured or LangChain can assist with this method.
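The first strategy, fixed-length chunks with overlap, can be sketched in a few lines. The chunk size and overlap values are arbitrary choices for illustration; real pipelines tune them to the encoder's context window.

```python
def chunk_fixed(text, size=200, overlap=50):
    """Split text into fixed-length chunks; consecutive chunks share
    `overlap` characters so context is not lost at the boundaries."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
chunks = chunk_fixed(doc, size=200, overlap=50)
print([len(c) for c in chunks])
```

Each chunk would then be embedded and stored in the vector database; the overlap ensures a sentence straddling a boundary appears whole in at least one chunk.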