Natural language processing (NLP) paradigms are evolving rapidly, progressing from fully supervised learning to pre-training followed by fine-tuning and, most recently, to pre-training combined with prompt-based prediction. Because NLP systems keep improving and are increasingly deployed in the real world, many AI researchers are following new developments in the field closely.
In the new paper reStructured Pre-training, a Carnegie Mellon University research group proposes reStructured Pre-training (RST), a new NLP paradigm that pre-trains models on restructured data. The paper opens with the authors’ central contention: pre-training on restructured data is more effective than pre-training on raw data alone.
Let’s have a look at the primary contributions of this study, as summarized by the group:
- The study seeks to establish a “Hypothesis of NLP Technique Evolution” from a global viewpoint by investigating the internal connections among the developments of modern NLP technologies.
- The group proposes reStructured Pre-training as a new paradigm for NLP modeling. In this paradigm, model pre-training and fine-tuning are viewed as a process of storing and accessing data, and a good storage mechanism should make the data a downstream task expects easy to access (see the sketch after this list).
- The researchers developed QIN, the first deep learning-based AI system for the Gaokao English test.
- The researchers released the Gaokao Benchmark to track progress toward human-level intelligence and built an interactive leaderboard for it using ExplainaBoard.
- The strong performance of AI on the Gaokao English assessment offers a fresh perspective: AI technology can empower education and help address a range of educational and instructional challenges. Strong results on more than 50 datasets across diverse NLP tasks demonstrate the value of data-centric pre-training and should encourage further research.
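To make the data storage analogy in the list above concrete, here is a minimal sketch of how heterogeneous supervision signals might be restructured into a single prompted text-to-text format so that the information a downstream task expects is easy to retrieve. The signal types, field names, and prompt templates below are illustrative assumptions, not the paper’s actual schema.

```python
# Illustrative sketch: restructuring heterogeneous signals into one
# prompted (prompt, target) text format. Signal types and templates
# are assumptions for illustration only.

def restructure(signal_type, record):
    """Map a raw supervision signal to a (prompt, target) text pair."""
    if signal_type == "entity":
        # e.g. record = {"text": ..., "entity": "Pittsburgh", "type": "city"}
        prompt = f"{record['text']} What type of entity is {record['entity']}?"
        target = record["type"]
    elif signal_type == "summary":
        # e.g. record = {"document": ..., "summary": ...}
        prompt = f"Summarize: {record['document']}"
        target = record["summary"]
    elif signal_type == "sentiment":
        # e.g. record = {"review": ..., "label": "positive"}
        prompt = f"{record['review']} Is this review positive or negative?"
        target = record["label"]
    else:
        raise ValueError(f"unknown signal type: {signal_type}")
    return prompt, target

# Every signal, whatever its source, ends up in the same format,
# so one sequence-to-sequence model can "store" them all during pre-training.
pairs = [
    restructure("entity", {"text": "CMU is in Pittsburgh.",
                           "entity": "Pittsburgh", "type": "city"}),
    restructure("summary", {"document": "Long article text ...",
                            "summary": "Short summary."}),
]
print(pairs)
```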
In contrast to existing NLP paradigms, which focus on model architecture, the proposed RST aims to maximize the value of the available data by covering as many types of signals as possible and providing convenient access to those signals according to the needs of downstream tasks.
The RST approach comprises three phases: restructure, pre-train, and fine-tune. Existing data signals of various forms are first restructured into a unified format suitable for model pre-training; the model is then pre-trained on this restructured data; finally, it is fine-tuned on restructured labeled data for better downstream performance.
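As a rough illustration of how the three phases fit together, the sketch below pre-trains and then fine-tunes a generic sequence-to-sequence model on restructured (prompt, target) pairs. The model choice (t5-small via Hugging Face Transformers), the training loop, and the example data are assumptions made for the sketch; the paper’s actual model and training setup are not described in this summary.

```python
# Sketch of the restructure -> pre-train -> fine-tune pipeline.
# Model choice and training details are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(prompt, target):
    """One gradient step on a single restructured (prompt, target) pair."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Phase 1 output: restructured signals from many sources (toy examples).
pretrain_pairs = [
    ("CMU is in Pittsburgh. What type of entity is Pittsburgh?", "city"),
    ("Summarize: Long article text ...", "Short summary."),
]

# Phase 2: pre-train on the restructured signals.
for prompt, target in pretrain_pairs:
    train_step(prompt, target)

# Phase 3: fine-tune on restructured labeled data for a downstream task,
# expressed in the same prompted format (toy example).
finetune_pairs = [("Question: ... Options: A ... B ...", "A")]
for prompt, target in finetune_pairs:
    train_step(prompt, target)
```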
The study also introduces what the researchers believe is the first deep learning-based AI system explicitly designed for China’s Gaokao English college entrance examination.
The researchers evaluated the proposed RST on a range of NLP tasks, where it beat baseline models such as GPT-3 and T0pp on 52 of the 55 datasets tested. On the Gaokao English test, the QIN AI system also outperformed the typical student, scoring 40 points higher than the student average and 15 points higher than GPT-3 while using only 1/16 of GPT-3’s parameters.
The paper contends that blindly adhering to labels such as supervised or unsupervised, pre-training or fine-tuning, few-shot or zero-shot makes little sense in NLP. What ultimately matters is how well one exploits the knowledge contained in the data available from the outside world.
This article is a summary written by Marktechpost staff based on the paper 'reStructured Pre-training'. All credit for this research goes to the researchers on this project. Check out the paper, GitHub repository, and reference article.