Can We Optimize AI for Information Retrieval with Less Compute? This AI Paper Introduces InRanker: a Groundbreaking Approach to Distilling Large Neural Rankers

Deploying multi-billion parameter neural rankers in real-world systems poses a significant challenge in information retrieval (IR). These models are highly effective, but their substantial computational requirements at inference time make them impractical for production use. The core problem is balancing the quality gains of large rankers against their operational cost.

Significant research efforts have been made in this area. These include using synthetic text from PaLM 540B and GPT-3 175B to transfer knowledge to smaller models such as T5; multi-step reasoning with FlanT5 and code-davinci-002; and distilling cross-attention scores for click-through-rate prediction with contextual features. Several researchers have worked on distilling the self-attention module of transformers. MarginMSE loss has also been applied for two distinct purposes: distilling knowledge across different architectures and refining sparse neural models. Pseudo-labels from cross-encoder models such as BERT are another way to generate synthetic data for domain adaptation of dense passage retrievers.

Researchers at UNICAMP, NeuralMind, and Zeta Alpha have proposed a method called InRanker for distilling large neural rankers into smaller versions with improved out-of-domain effectiveness. The approach involves two distillation phases: (1) training on the teacher's soft labels for existing supervised data, and (2) training on the teacher's soft labels for synthetic queries generated by a large language model.
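Both phases boil down to matching the student's relevance distribution to the teacher's. As a rough illustration (not the paper's code; the function names and the two-way true/false relevance framing are assumptions based on monoT5-style rankers), the sketch below computes a KL-divergence loss between teacher and student soft labels:

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_label_distillation_loss(teacher_logits, student_logits):
    """KL(teacher || student): penalizes the student for diverging
    from the teacher's soft relevance distribution (a simplified
    stand-in for the distillation objective described above)."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In practice this loss would be averaged over query-document pairs and minimized with a standard optimizer; the point of soft labels is that the student learns the teacher's graded confidence rather than hard binary relevance.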

The first phase uses real-world data from the MS MARCO dataset to familiarize the student model with the ranking task. The second phase trains on synthetic queries that an LLM generates from randomly sampled documents in the corpus, with the aim of improving zero-shot generalization. Through this distillation process, smaller models such as monoT5-60M and monoT5-220M absorb the teacher's knowledge and improve their effectiveness despite being significantly smaller.
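The second phase's data pipeline can be sketched as follows. This is a minimal illustration under stated assumptions: `generate_query` is a hypothetical stub standing in for an LLM call, and `teacher_score` stands in for the large ranker producing soft labels; neither reflects the paper's actual interfaces.

```python
import random

def generate_query(document: str) -> str:
    # Hypothetical stub for an LLM call that writes a plausible
    # search query for the given document.
    return " ".join(document.split()[:4]) + "?"

def build_synthetic_training_set(corpus, teacher_score, n_queries, seed=0):
    """Sample documents, generate one query per document, and record
    the teacher's soft relevance score as the distillation target."""
    rng = random.Random(seed)
    examples = []
    for doc in rng.sample(corpus, n_queries):
        query = generate_query(doc)
        examples.append({
            "query": query,
            "doc": doc,
            "teacher_score": teacher_score(query, doc),
        })
    return examples
```

The student is then trained on these (query, document, teacher score) triples exactly as in the first phase, which is what lets the synthetic data transfer the teacher's behavior to unseen domains.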

The research successfully demonstrated that smaller models like monoT5-60M and monoT5-220M, distilled using the InRanker methodology, significantly improved their effectiveness in out-of-domain scenarios. Despite being substantially smaller, these models were able to match and sometimes surpass the performance of their larger counterparts in various test environments. This advancement is particularly beneficial in real-world applications with limited computational resources, providing a more practical and scalable solution for IR tasks.

In conclusion, this research marks a significant advancement in IR, presenting a practical solution to the challenge of using large neural rankers in production environments. The InRanker method effectively distills the knowledge of large models into smaller, more efficient versions without compromising out-of-domain effectiveness. This approach addresses the computational constraints of deploying large models and opens new avenues for scalable and efficient IR. The findings have substantial implications for future research and practical applications in the field of IR.


Check out the Paper and Github. All credit for this research goes to the researchers of this project.

Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.
