AI21 Labs Proposes A New Method Called ‘In-Context RALM’ That Can Add Ready-Made External Knowledge Sources To An Existing Language Model

With recent developments in language modeling (LM) research, machine-generated text applications have spread to a number of previously untapped domains. However, a significant issue remains: LM-generated text frequently contains factual errors or inconsistencies. This problem can arise in any LM generation scenario, but it is particularly acute when generation targets uncommon domains or requires up-to-date information that the LM was not trained on.

Retrieval-Augmented Language Modeling (RALM) methods, which show the LM relevant documents retrieved from a grounding corpus during generation, offer a possible solution to this problem. Current RALM strategies concentrate on changing the LM architecture to incorporate external data, an approach that often makes deployment significantly more complex. To address this, AI21 Labs, an organization that develops artificial intelligence systems, introduced an alternative strategy called In-Context Retrieval-Augmented Language Modeling (In-Context RALM), which supplements an existing language model with ready-made external information sources. The retrieved documents are simply prepended to the language model’s input, so the underlying LM architecture is left unchanged. The team published their findings in a research paper titled “In-Context Retrieval-Augmented Language Models.”
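
To make the mechanism concrete, here is a minimal, hypothetical sketch of in-context RALM. It assumes a Hugging Face-style causal LM (gpt2 is used purely as a placeholder) and a toy word-overlap retriever standing in for BM25 or a dense retriever; the essential point it illustrates is that retrieved text is concatenated with the prompt while the model itself stays frozen and unmodified. This is not AI21’s implementation, only an illustration of the idea.

```python
# Minimal sketch of in-context RALM (illustrative only).
# Assumptions: a Hugging Face-style causal LM (gpt2 as a placeholder) and a
# toy word-overlap retriever standing in for BM25 or a dense retriever.
from transformers import AutoModelForCausalLM, AutoTokenizer

corpus = [
    "AI21 Labs is a company that builds large language models.",
    "Retrieval-augmented generation grounds model output in external documents.",
    "The Eiffel Tower is located in Paris, France.",
]

def retrieve_top_k(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

query = "Where is the Eiffel Tower located?"
retrieved = retrieve_top_k(query, corpus, k=1)

# In-context RALM: prepend the retrieved document(s) to the original prompt,
# then generate with the unmodified, frozen language model.
prompt = "\n".join(retrieved) + "\n" + query
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs, max_new_tokens=30, do_sample=False, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the grounding happens entirely in the input, the same pattern works with any text-in, text-out model, including ones available only through an API.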

Alongside the paper, AI21 Labs also unveiled Wordtune Spices, an addition to their Wordtune text editor. Wordtune Spices is an AI writing assistant that helps authors generate text and create content quickly, accelerating the composition of academic papers, theses, and creative documents. Spices’ core mechanism is built on the In-Context RALM technique. Users of Spices have access to 12 prompt alternatives, including explanations, definitions, and even jokes. Users can select the prompt that best supports their use case and receive a string of supplementary sentences that bolster their argument and provide further detail.

The researchers explained that Wordtune Spices differs from other text-generation systems in that it serves as a co-author rather than a replacement for the author’s original work: it offers suggestions that help complete sentences and improve the text’s overall quality. Spices offers text suggestions in three main ways. The first is to provide explanations and justifications that support the text’s central facts and arguments. The second is to supply statistical data and comparisons that enrich the content. The third is a more creative use of Spices, suggesting jokes and well-known quotations that make the article more engaging.

Wordtune Spices can be used by anyone who wants to improve their writing, including academics, bloggers, book authors, and even professionals in the legal and medical fields. One of Spices’ key differentiators is its ability to provide source attribution, which lets users view the source of a suggestion by clicking on a link. This functionality addresses a shortcoming of existing language models, which typically cannot attribute their output to a source.

The research also demonstrates that In-Context RALM delivers strong results across five diverse corpora. The researchers further note that the document retrieval and ranking steps can be tailored to the in-context RALM setting, improving performance even more; this adaptability also underpins the reliability and traceability of Wordtune Spices’ suggestions. In-Context RALM holds considerable promise for spreading LM grounding more widely, especially in settings where a pretrained LM must be used as-is or accessed only through an API. The team hopes that the materials they have released will encourage further study of RALMs and promote their adoption.
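
As an illustration of how ranking might be tailored to the in-context setting, the hypothetical sketch below reranks candidate documents by how well each one helps the frozen LM predict the user’s prompt, i.e. by the LM’s loss on the prompt tokens when a candidate is prepended. This is only one plausible instantiation, not necessarily the exact procedure from the paper; gpt2 and the tiny candidate list are placeholders mirroring the earlier sketch.

```python
# Hypothetical LM-based reranking sketch: score each candidate document by the
# frozen LM's loss on the prompt when that document is prepended, and keep the
# document that makes the prompt most predictable. gpt2 and the candidates are
# placeholders, not AI21's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

candidates = [
    "The Eiffel Tower is located in Paris, France.",
    "Retrieval-augmented generation grounds model output in external documents.",
]
prompt = "Where is the Eiffel Tower located?"

def lm_loss_with_doc(doc: str, prompt: str) -> float:
    """Cross-entropy of the prompt tokens given a prepended document."""
    doc_ids = tokenizer(doc + "\n", return_tensors="pt").input_ids
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    input_ids = torch.cat([doc_ids, prompt_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : doc_ids.shape[1]] = -100  # ignore document tokens in the loss
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

# Lower loss means the document makes the prompt easier to predict.
best_doc = min(candidates, key=lambda d: lm_loss_with_doc(d, prompt))
print(best_doc)
```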


Check out the Paper and Reference Article. All credit for this research goes to the researchers on this project.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT) Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.

