Language is a fundamental part of being human because of its role in demonstrating and facilitating comprehension, or intelligence. It allows people to express ideas, form memories, and build mutual understanding by sharing their thoughts and concepts.
The study of more sophisticated language models, systems that predict and generate text, has enormous potential for developing advanced AI systems: systems that can safely and efficiently summarize information, provide expert advice, and follow instructions in natural language. Research on the possible impacts of language models and the risks they entail is required before they can be developed responsibly. This includes collaborating with experts from many fields to anticipate and address the problems that training algorithms on existing datasets can cause.
A recent series of DeepMind papers reflects this interdisciplinary approach and includes the following research:
A detailed examination of the Gopher language model
To study how scale affects language models, the researchers trained a series of transformer models of various sizes, spanning from 44 million to 280 billion parameters. They named the largest model Gopher.
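For intuition about what those parameter counts imply architecturally, here is a back-of-the-envelope sketch. The rule of thumb (roughly 12 x layers x d_model squared, plus embeddings) and the example hyperparameters are illustrative assumptions, not the configurations reported in the paper.

```python
def approx_transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Rough decoder-only Transformer parameter count: about 12 * L * d^2
    for the attention and feed-forward blocks, plus token embeddings."""
    return 12 * n_layers * d_model**2 + vocab_size * d_model

# Illustrative configurations spanning a wide range of scales
# (assumed hyperparameters, not the paper's exact model family).
for name, n_layers, d_model in [
    ("small", 8, 512),
    ("mid-size", 24, 2048),
    ("Gopher-scale", 80, 16384),
]:
    n = approx_transformer_params(n_layers, d_model, vocab_size=32_000)
    print(f"{name}: ~{n:,} parameters")
```

Running this shows why the quadratic dependence on model width dominates: the largest illustrative configuration lands in the hundreds of billions of parameters, the same order as Gopher.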
Their research examined the strengths and weaknesses of these different-sized models, highlighting areas where increasing model scale continues to improve performance, including reading comprehension, fact-checking, and identifying toxic language. They also highlight areas where scaling yields no noticeable gains, such as logical reasoning and common-sense tasks.
The team found that Gopher outperforms existing language models on several key tasks. In particular, they report that Gopher significantly narrows the gap to human-expert performance on the Massive Multitask Language Understanding (MMLU) benchmark relative to prior work.
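As an illustration of how multiple-choice benchmarks like MMLU are commonly scored with language models, here is a minimal likelihood-based sketch. It is not DeepMind's evaluation harness: GPT-2 stands in for Gopher, whose weights are not publicly available, and the question is a toy example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-2 as a small, public stand-in model.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of the log-probabilities the model assigns to the option's
    tokens when they follow the prompt (standard multiple-choice scoring)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    start = prompt_ids.shape[1] - 1  # position predicting the first option token
    return sum(
        logprobs[i, full_ids[0, i + 1]].item()
        for i in range(start, full_ids.shape[1] - 1)
    )

question = "Q: Which organelle produces most of a cell's ATP?\nA:"
options = ["the mitochondrion", "the ribosome", "the nucleus"]
print(max(options, key=lambda o: option_logprob(question, o)))
```

The option with the highest total log-probability counts as the model's answer; accuracy over many such questions yields the benchmark score.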
In addition to quantitative evaluation, they investigated Gopher through direct interaction. One significant finding is that when Gopher is prompted into a dialogue interaction (such as a chat), the model can occasionally be surprisingly coherent.
Despite the lack of dialogue-specific fine-tuning, Gopher can discuss cell biology and provide a correct citation. However, the findings also revealed a number of failure modes that are consistent across model sizes, including a tendency toward repetition, the reflection of stereotypical biases, and the confident propagation of incorrect information.
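The behavior described above comes from prompting alone. The sketch below illustrates the general technique of conditioning a raw, non-fine-tuned language model on a transcript-style prompt; GPT-2 is again an assumed stand-in, and the prompt text is invented for illustration.

```python
from transformers import pipeline

# A raw language model conditioned on a transcript-style prompt,
# illustrating dialogue via prompting rather than fine-tuning.
generator = pipeline("text-generation", model="gpt2")

dialogue_prompt = (
    "The following is a conversation between a curious user and a "
    "knowledgeable AI assistant.\n"
    "User: What do ribosomes do?\n"
    "Assistant: They synthesize proteins by translating messenger RNA.\n"
    "User: And what does the mitochondrion do?\n"
    "Assistant:"
)

out = generator(dialogue_prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
# Print only the newly generated turn.
print(out[0]["generated_text"][len(dialogue_prompt):])
```

Because nothing constrains the model beyond the prompt, the same setup also reproduces the failure modes noted above, such as repetition and confidently stated errors.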
A study of the ethical and social risks associated with large language models
Their second paper discusses the potential ethical and social risks posed by language models and develops a detailed taxonomy of these risks and failure modes. This comprehensive overview is critical for understanding the risks and preventing potential harm.
They introduce a taxonomy of language model risks organized into six subject areas, with in-depth analyses of 21 specific risks. Taking a broad view of the different risk areas is critical, as focusing too narrowly on a single risk in isolation can exacerbate others. The proposed taxonomy serves as a framework for practitioners and the general public to build a shared understanding of the ethical and social issues surrounding language models, make responsible decisions, and exchange approaches for dealing with the identified risks.
According to the findings, two areas in particular demand further attention:
- Present benchmarking tools are insufficient for assessing some significant risks, for example, when language models produce misinformation that people believe to be true. Assessing such risks requires closer scrutiny of human-computer interaction with language models. The team discusses a number of risks that require novel or more interdisciplinary analysis methodologies.
- More work on risk mitigation is required. Language models, for example, are known to reproduce harmful societal biases; a minimal probe of this behavior is sketched after this list.
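To make the bias point concrete, here is a minimal probe in the spirit of paired-sentence benchmarks such as CrowS-Pairs (an illustrative assumption, not DeepMind's methodology): compare the likelihood a model assigns to two minimally different sentences.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(text: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    return sum(logprobs[i, ids[0, i + 1]].item() for i in range(ids.shape[1] - 1))

# A systematic likelihood gap across many such pairs suggests the model
# has absorbed a stereotypical association from its training data.
pair = ("The nurse said he would be right back.",
        "The nurse said she would be right back.")
for sentence in pair:
    print(f"{sentence!r}: {sentence_logprob(sentence):.2f}")
```

A single pair proves nothing; bias benchmarks aggregate over hundreds of such contrasts, which is precisely the kind of measurement the paper argues still needs better tooling.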
A study of a novel architecture with improved training efficiency
This study builds on Gopher's foundations and the proposed taxonomy of ethical and social risks by proposing an enhanced language model architecture. The architecture reduces the energy cost of training and makes it easier to trace model outputs to sources within the training corpus.
The team pre-trained the Retrieval-Enhanced Transformer (RETRO) with an Internet-scale retrieval mechanism: RETRO efficiently queries a database for relevant text passages to improve its predictions. The researchers state that the model achieves state-of-the-art performance on various language modeling benchmarks despite having an order of magnitude fewer parameters than a regular Transformer.
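The sketch below illustrates the nearest-neighbour retrieval step that RETRO's design relies on, under simplifying assumptions: a sentence-transformer encoder and a three-chunk toy corpus stand in for RETRO's frozen BERT embeddings and trillion-token database, and the retrieved passages would then be fed to the model's cross-attention layers (not shown).

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in encoder; RETRO itself uses frozen BERT chunk embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Toy retrieval corpus of text chunks.
corpus_chunks = [
    "The mitochondrion generates most of the cell's supply of ATP.",
    "Ribosomes translate messenger RNA into proteins.",
    "The Eiffel Tower was completed in 1889.",
]
chunk_embeddings = encoder.encode(corpus_chunks, normalize_embeddings=True)

def retrieve(query_chunk: str, k: int = 2) -> list[str]:
    """Return the k nearest corpus chunks by cosine similarity."""
    q = encoder.encode([query_chunk], normalize_embeddings=True)[0]
    scores = chunk_embeddings @ q  # cosine similarity via normalized dot product
    return [corpus_chunks[i] for i in np.argsort(-scores)[:k]]

print(retrieve("Which organelle produces ATP?"))
```

Because predictions are conditioned on retrieved passages, this design also supports the traceability benefit mentioned above: an output can be linked back to the corpus chunks that informed it.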
Paper 1: https://storage.googleapis.com/deepmind-media/research/language-research/Training%20Gopher.pdf
Paper 2: https://arxiv.org/abs/2112.04359
Paper 3: https://arxiv.org/abs/2112.04426
Reference: https://deepmind.com/blog/article/language-modelling-at-scale