Meta Research Introduces System 2 Attention (S2A): An AI Technique that Enables an LLM to Decide on the Important Parts of the Input Context in Order to Generate Good Responses
Large Language Models (LLMs), although highly competent across a wide array of language tasks, often display weak reasoning and make surprisingly simple mistakes. They can be led astray by irrelevant context, or exhibit an issue called sycophancy, where the model agrees with the input text even when it is incorrect. Researchers have tried to tackle these issues by adding more supervised training data or through reinforcement learning strategies, but a more effective solution would address the underlying bottleneck in the transformer architecture itself, particularly the attention mechanism.
Soft attention in a transformer tends to assign probability mass to large portions of the input text, including irrelevant chunks, and because of how it is trained, it focuses too heavily on repeated tokens, which contributes to the issues above. A team of researchers from Meta has introduced a new approach called System 2 Attention (S2A) that leverages an instruction-tuned LLM to identify and extract the most relevant parts of the input context, thereby mitigating the influence of unnecessary information. A further advantage is that the attention focus of the LLM becomes controllable, similar to the way humans deliberately direct their own attention.
The attention mechanism in a transformer enables it to identify correlations in the text. Although this enhances the model's next-word prediction capabilities, it also makes the model more prone to being misled by spurious correlations in the context. The probability of repeated words increases with each iteration, creating a positive feedback loop that causes the model to fixate on specific topics. S2A works in two steps: it first prompts the LLM to regenerate the context with the unnecessary parts removed, and then uses this regenerated context, instead of the original text, to produce the final response (see the sketch below).
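The following is a minimal Python sketch of this two-step pipeline, not the paper's exact implementation: the `generate()` helper is a hypothetical stand-in for a call to any instruction-tuned LLM, and the extraction prompt is paraphrased rather than quoted from the paper.

```python
# Minimal sketch of the two-step S2A pipeline. generate() is a hypothetical
# placeholder for a call to an instruction-tuned LLM; the extraction prompt
# below is paraphrased, not the paper's verbatim wording.

def generate(prompt: str) -> str:
    """Placeholder: send `prompt` to an instruction-tuned LLM and return its reply."""
    raise NotImplementedError("Wire this up to your LLM of choice.")

S2A_EXTRACTION_PROMPT = (
    "Given the following text by a user, extract the portion that is relevant "
    "and unbiased, so that using that text alone would be good context for "
    "answering the question. Include the actual question being asked.\n\n"
    "Text: {context}\n\n"
    "Extracted relevant context and question:"
)

def system2_attention(original_context: str) -> str:
    # Step 1: ask the LLM to regenerate the context, keeping only the relevant parts.
    regenerated = generate(S2A_EXTRACTION_PROMPT.format(context=original_context))
    # Step 2: answer from the regenerated context instead of the original, so
    # irrelevant or opinionated material never reaches the final answer.
    return generate(regenerated)
```

Because both steps are ordinary LLM calls, S2A requires no changes to the model's weights or architecture; the filtering happens entirely through prompting.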
The researchers conducted several experiments to evaluate the approach and reported the following findings:
- S2A improves the model's factuality when answering opinionated questions.
- S2A increases objectivity in long-form generation, showing that the model is not easily swayed by opinions in the prompt.
- S2A also improves performance on math word problems that contain irrelevant sentences.
The researchers also tested several variations of the S2A method (instructing the model to focus on relevance rather than irrelevance, keeping the original context alongside the regenerated one, and so on; one such variant is sketched below). They found that, in all but a few experiments, the variants did not perform as well as the original method.
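As a hypothetical illustration of one such variant, reusing `generate()` and `S2A_EXTRACTION_PROMPT` from the earlier sketch, the "keep original" ablation appends the regenerated context to the original text instead of replacing it; the combined-prompt layout here is an assumption, not the paper's exact formatting.

```python
# Hypothetical sketch of the "keep original" variant: the regenerated context
# is appended to the original input rather than replacing it. Reuses generate()
# and S2A_EXTRACTION_PROMPT from the earlier sketch; the prompt layout below
# is an assumption.

def s2a_keep_original(original_context: str) -> str:
    regenerated = generate(S2A_EXTRACTION_PROMPT.format(context=original_context))
    combined = (
        f"Original text:\n{original_context}\n\n"
        f"Extracted relevant context:\n{regenerated}"
    )
    return generate(combined)
```

Keeping the original text reintroduces the very material S2A is meant to filter out, which is consistent with the finding that such variants underperform the standard method.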
Even though the method is designed to bypass irrelevant information, it can still occasionally be influenced by it. It is also more computationally expensive than standard LLM generation, since the context must be regenerated before the final answer is produced; the researchers suggest that speedup tricks could mitigate this cost and leave them for future work. Overall, S2A is a method that prevents an LLM from fixating on unimportant parts of the text, improving the model's capabilities. The technique improved performance on opinionated prompts and on math problems containing irrelevant sentences. There is still room for further improvement, and alternative avenues could be explored to increase the reasoning power of LLMs.
Check out the Paper. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.