AI2 Open-Sources ‘LM-Debugger’: An Interactive Tool For Inspection And Intervention In Transformer-Based Language Models


In natural language processing, a language model is a probabilistic model that estimates the likelihood of a sequence of words, predicting each word from the ones before it. Language models are therefore common in predictive text input systems, speech recognition, machine translation, and spelling correction, among other applications: they convert qualitative text into quantitative data that machines can interpret.
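
The "predict each word from the preceding words" idea can be illustrated with a toy bigram model (a sketch for intuition only; the corpus and counts below are made up and have nothing to do with the paper):

```python
from collections import Counter, defaultdict

# Toy bigram language model: estimate P(word | previous word) from raw counts.
corpus = "the cat sat on the mat the cat ran".split()

bigram_counts = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    bigram_counts[prev][curr] += 1

def next_word_prob(prev, curr):
    """Maximum-likelihood estimate of P(curr | prev)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][curr] / total if total else 0.0

# "the" is followed by "cat" twice and "mat" once in the toy corpus.
print(next_word_prob("the", "cat"))  # 2/3
```

Modern transformer LMs replace these counts with learned neural representations, but the prediction task is the same.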

Modern NLP models rely on transformer-based language models (LMs). However, the process by which these models build up a prediction, layer by layer, remains poorly understood. This opaque prediction behavior is an obstacle both for end-users, who cannot tell why a model produces a given prediction, and for developers who want to diagnose or fix model behavior.

A new paper published by a group of researchers from Allen Institute for AI, Tel Aviv University, Bar-Ilan University, and the Hebrew University of Jerusalem introduces LM-Debugger, an interactive open-source tool for fine-grained interpretation and intervention in LM predictions. This work will increase the transparency of LMs. 

The concept of LM-Debugger was inspired by recent findings of Geva et al. (2022). Building on those findings, the tool offers three basic capabilities for single-prediction debugging and model analysis.

LM-Debugger uses the feed-forward network (FFN) layers to trace how the model's prediction evolves across the network and which updates shape it for a particular input. To accomplish this, it projects token representations to the output vocabulary before and after each FFN update, and surfaces the dominant FFN updates at any layer.
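
The projection idea can be sketched in a few lines. This is a hypothetical toy illustration of the mechanism, not the tool's actual code: the dimensions, random matrices, and five-token vocabulary are all made-up assumptions.

```python
import numpy as np

# Toy sketch: interpret a token representation x at any layer by projecting it
# through the output embedding matrix E and reading the top-scoring tokens.
rng = np.random.default_rng(0)
d_model, vocab = 8, 5
vocab_tokens = ["guitar", "piano", "teacher", "lesson", "song"]  # toy vocabulary

E = rng.normal(size=(vocab, d_model))   # output (unembedding) matrix
x_before = rng.normal(size=d_model)     # representation before the FFN update
ffn_update = rng.normal(size=d_model)   # vector the FFN layer adds
x_after = x_before + ffn_update         # residual-stream update

def top_tokens(x, k=3):
    """Project a representation to the vocabulary; return the top-k tokens."""
    logits = E @ x
    return [vocab_tokens[i] for i in np.argsort(logits)[::-1][:k]]

print("before FFN:", top_tokens(x_before))
print("after FFN: ", top_tokens(x_after))
```

Comparing the two rankings shows what a single FFN update contributed to the prediction, which is exactly the kind of fine-grained trace LM-Debugger visualizes.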

Source: https://arxiv.org/pdf/2204.12130.pdf

It also lets users intervene in the prediction process by altering the weights of particular FFN updates; for example, increasing the weight of an update that promotes music-related concepts, or decreasing one that promotes teaching-related concepts, steers the model toward a different output.
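
A minimal sketch of such an intervention, under the simplified view that an FFN layer's output is a weighted sum of fixed "value vectors": re-scaling one coefficient suppresses or amplifies the concept that vector promotes. All dimensions, vectors, and the vocabulary here are toy assumptions, not the real model's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, n_values, vocab = 8, 4, 5
vocab_tokens = ["guitar", "piano", "teacher", "lesson", "song"]  # toy vocabulary

E = rng.normal(size=(vocab, d_model))     # output embedding matrix
V = rng.normal(size=(n_values, d_model))  # FFN value vectors
coeffs = rng.normal(size=n_values)        # coefficients for a given input

def ffn_output(coeffs):
    # FFN output as a weighted sum of its value vectors.
    return coeffs @ V

def intervene(coeffs, idx, scale):
    """Return a copy of the coefficients with one update re-weighted."""
    c = coeffs.copy()
    c[idx] *= scale
    return c

base_logits = E @ ffn_output(coeffs)
boosted_logits = E @ ffn_output(intervene(coeffs, idx=2, scale=5.0))
# The re-weighted update shifts the output vocabulary distribution.
print(vocab_tokens[int(np.argmax(base_logits))],
      vocab_tokens[int(np.argmax(boosted_logits))])
```

In LM-Debugger the same kind of re-weighting is configured through the UI, and the change propagates through the model to alter the generated text.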

Beyond debugging single predictions, LM-Debugger reads all FFN parameter vectors across the network and builds a search index over the tokens they promote. This lets users examine the input-independent concepts encoded in the model's FFN layers and configure broad, effective interventions.
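
The index construction can be sketched as follows: project every FFN value vector to the vocabulary once, record its top tokens, and invert that mapping so users can ask "which FFN updates promote token X?". Sizes, matrices, and the vocabulary are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, n_values, vocab, k = 8, 6, 5, 2
vocab_tokens = ["guitar", "piano", "teacher", "lesson", "song"]  # toy vocabulary

E = rng.normal(size=(vocab, d_model))     # output embedding matrix
V = rng.normal(size=(n_values, d_model))  # all FFN value vectors in the model

index = {}  # token -> ids of the value vectors that promote it
for vid, v in enumerate(V):
    logits = E @ v                        # input-independent projection
    for tok_id in np.argsort(logits)[::-1][:k]:
        index.setdefault(vocab_tokens[tok_id], []).append(vid)

print(index)
```

Because the projection depends only on the model's parameters, the index is built once and then queried interactively.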

The researchers note that the current implementation of LM-Debugger supports any GPT2 model from HuggingFace, and that other auto-regressive models can be plugged in with only a few local modifications (e.g., translating the relevant layer names).

In their paper "LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models," the team demonstrates the tool's usefulness in two scenarios. They use its fine-grained tracing capabilities to analyze the model's internal disambiguation process and to find bottlenecks in the prediction process. They also show how the tool can be used to configure a few robust interventions that control various aspects of text generation.

Paper: https://arxiv.org/abs/2204.12130

Github: https://github.com/mega002/lm-debugger

Source: https://blog.allenai.org/introducing-lm-debugger-d34e94444dc2

GPT2 Medium: https://lm-debugger.apps.allenai.org/

GPT2 Large: https://lm-debugger-l.apps.allenai.org/
