BLOG

Jan 23, 2023

AI21 Labs Proposes A New Method Called ‘In-Context RALM’ That Can Add Ready-Made External Knowledge Sources To An Existing Language Model

Posted in category: robotics/AI

With recent developments in language modeling (LM) research, machine-generated text has spread to a number of previously untapped domains. A significant issue remains, however: LM-generated text frequently contains factual errors or inconsistencies. The problem can arise in any LM generation setting, but it is particularly acute when generation targets uncommon domains or requires up-to-date information that the LM was not trained on.

Retrieval-Augmented Language Modeling (RALM) methods, which show the LM relevant documents from a grounding corpus during generation, offer a possible solution to this problem. Current RALM strategies concentrate on changing the LM architecture to incorporate the external data, which often makes deployment significantly more complex. Working on this problem, AI21 Labs, a company that develops artificial intelligence systems, introduced an alternative strategy called In-Context Retrieval-Augmented Language Modeling (In-Context RALM), which supplements an existing language model with ready-made external information sources. The retrieved documents are simply prepended to the language model’s input, leaving the underlying LM architecture unchanged. The team published its findings in a research paper titled “In-Context Retrieval-Augmented Language Models.”
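
Because only the input changes, any off-the-shelf text-generation model can be used as-is. The sketch below illustrates that idea; the toy corpus, the overlap-based retriever, and the choice of GPT-2 via the Hugging Face transformers library are illustrative assumptions for this post, not details of AI21 Labs’ implementation.

```python
# Minimal sketch of the In-Context RALM idea: retrieved documents are
# prepended to the prompt of an unmodified, off-the-shelf language model.
# The corpus, retriever, and "gpt2" model are illustrative assumptions only.
from transformers import AutoModelForCausalLM, AutoTokenizer

corpus = [
    "AI21 Labs is a company that develops artificial intelligence systems.",
    "Retrieval-augmented generation grounds model output in external documents.",
    "The Eiffel Tower is located in Paris, France.",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")


def retrieve(query: str, k: int = 1) -> list[str]:
    """Toy retriever: rank corpus documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]


def ralm_generate(query: str) -> str:
    """Prepend retrieved documents to the query and let the unmodified LM continue."""
    context = "\n".join(retrieve(query))
    prompt = f"{context}\n\nQuestion: {query}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs, max_new_tokens=40, pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(ralm_generate("What does AI21 Labs develop?"))
```

The point of the design is that the retrieval step and the language model stay fully decoupled: the retriever or the underlying model can be swapped without retraining or re-architecting anything.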

Alongside the paper, AI21 Labs also unveiled Wordtune Spices, an addition to its Wordtune text editor. Wordtune Spices is an AI-powered writing assistant that helps authors generate text and create content quickly, speeding up the composition of academic papers, theses, and creative documents. Spices is built on the In-Context RALM technique. Users have access to 12 prompt options, including explanations, definitions, and even jokes; they can select the prompt that best fits their use case and receive supplementary sentences that support their argument and add detail.
