This Artificial Intelligence Research Proposes a New Method That Directly Generates Contextual Docs for a Question Instead of Retrieving External Docs

Large language models have revolutionized the way humans interact with machines. These AI-powered systems, trained on massive amounts of text data, are becoming more popular by the day. With their ability to write, translate, summarize, and even answer questions like humans, these models are taking the world by storm. One of the most prominent LLMs, developed by OpenAI, is GPT-3. It generates high-quality text that is almost indistinguishable from text written by a human, and it represents a major step forward in the development of AI, with the potential to transform the way machines work with language.

A traditional pipeline for knowledge-intensive tasks follows a retrieve-then-read pattern: document retrieval followed by a reading phase. The first step is retrieval, where the system finds the documents most relevant to a query. For example, when a user asks about a particular topic, a retriever searches a massive document collection and returns the most relevant passages. In the reading phase, the model then comprehends the retrieved text, extracts the key information, and uses it to answer the user's question. This established way of formulating answers is challenged by a new approach called GENREAD, which follows a generate-then-read process instead.
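To make the pipeline concrete, here is a minimal Python sketch of a generic retrieve-then-read setup. The `search_index.search` and `llm_complete` calls are hypothetical placeholders for whatever retriever and reader model a practitioner plugs in; they are not part of any specific library or of the GENREAD codebase.

```python
# Minimal retrieve-then-read sketch (illustrative only).
# `search_index` and `llm_complete` are hypothetical placeholders.

def retrieve_then_read(question: str, search_index, llm_complete, top_k: int = 5) -> str:
    # 1) Retrieval: find the documents most relevant to the question.
    documents = search_index.search(question, top_k=top_k)

    # 2) Reading: ask a language model to answer using the retrieved text.
    context = "\n\n".join(documents)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm_complete(prompt)
```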

This new approach solves knowledge-intensive tasks by replacing the document retriever with a large language model generator. GENREAD first prompts an LLM to generate contextual documents based on the question and then reads the generated documents to produce the final answer. Its ability to perform well without any external knowledge source shows that the method can produce detailed answers without retrieving a single document, making it a remarkably efficient and flexible solution.
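The core idea can be sketched in a few lines of Python. Everything below is illustrative: `llm_complete` stands in for any function that sends a prompt to a large language model and returns its completion, and the prompt wording is a plausible simplification rather than the exact prompts used in the paper.

```python
# Generate-then-read sketch in the spirit of GENREAD (illustrative only).
# `llm_complete` is a hypothetical prompt-in, text-out LLM call.

def generate_then_read(question: str, llm_complete, num_docs: int = 3) -> str:
    # 1) Generation: prompt the LLM to write contextual documents about the
    #    question instead of retrieving them from an external corpus.
    generated_docs = [
        llm_complete(f"Generate a background document to answer the question: {question}")
        for _ in range(num_docs)
    ]

    # 2) Reading: feed the generated documents back as context and ask the
    #    model for the final answer.
    context = "\n\n".join(generated_docs)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm_complete(prompt)
```

Generating several documents and pooling them, rather than relying on a single generation, is one simple way to recover some of the diversity that retrieval would otherwise provide.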



Various experiments show GENREAD's ability to outperform the retrieve-then-read pipeline. The method was tested on several knowledge-intensive natural language processing tasks, including open-domain question answering (TriviaQA and WebQ), fact checking (FM2), and open-domain dialogue. Evaluated with the exact match (EM) score, GENREAD achieved 71.6 and 54.4 EM on TriviaQA and WebQ, respectively, beating the state-of-the-art retrieve-then-read pipeline by a substantial margin.
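For readers unfamiliar with the metric, exact match gives a prediction credit only if, after light normalization, it is identical to one of the gold answers. A standard implementation looks roughly like this (a common convention for open-domain QA evaluation, not code from the GENREAD repository):

```python
# Sketch of the exact match (EM) metric used in open-domain QA evaluation.
import re
import string

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse extra whitespace

def exact_match(prediction: str, gold_answers: list[str]) -> int:
    # 1 if the normalized prediction equals any normalized gold answer, else 0.
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))
```

The dataset-level EM score is simply the average of this 0/1 score over all questions, reported as a percentage.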

The team behind GENREAD also highlights another advantage of the approach: performance can be further improved by combining retrieval and generation. This blend pairs the accuracy and efficiency of retrieval with the flexibility and diversity of generation, making the solution even more noteworthy. The implementation of GENREAD can be accessed here.
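One simple way to picture the hybrid setup is to pool retrieved and generated documents into a single context before the reading step. The sketch below reuses the placeholder functions from the earlier examples and is, again, an assumption about one reasonable implementation rather than the authors' exact method.

```python
# Hybrid sketch: pool retrieved and LLM-generated documents before reading.
# `search_index` and `llm_complete` are the same hypothetical placeholders as above.

def hybrid_read(question: str, search_index, llm_complete,
                top_k: int = 5, num_generated: int = 3) -> str:
    retrieved = search_index.search(question, top_k=top_k)
    generated = [
        llm_complete(f"Generate a background document to answer the question: {question}")
        for _ in range(num_generated)
    ]
    context = "\n\n".join(retrieved + generated)
    return llm_complete(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```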

In conclusion, this new study offers a novel approach to solving knowledge-intensive tasks by using large language model generators in place of document retrievers. The results show that this approach can substantially improve on current solutions, and its advantages make GENREAD a promising direction for the future.


Check out the Paper and Github. All credit for this research goes to the researchers on this project. Also, don't forget to join our 13k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.


Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical thinking skills, along with a keen interest in acquiring new skills, leading groups, and managing work in an organized manner.


