Researchers at Allen Institute for AI Built a System Called DREAM-FLUTE to Explore Machine Learning ‘Mental Models’ for Figurative Language

Humans have a deep-seated desire to comprehend the complex world around them and to communicate that understanding to others. This is why people often use figurative language to express themselves. Figurative language uses idioms, personification, hyperbole, and metaphors to simplify complex ideas; such expressions are not meant to be taken literally.

Cognitive research has shown that people often visualize a situation based on its textual description. Moreover, humans tend to unconsciously supplement a text with extra, unstated information, which helps in tasks like recognizing figurative language and answering questions. For machines, however, figurative language remains quite challenging to comprehend, since it is difficult to determine the implicit meaning from the surface form alone. In the past few years, researchers have therefore shown keen interest in applying artificial intelligence models to figurative language.

To contribute to this field, the Aristo, Mosaic, and AllenNLP teams at AI2 collaborated on DREAM-FLUTE, a figurative language interpretation system. To help AI understand figurative language, the system first tries to build a “mental model” of the scenario stated in the premise and then uses this model as context to generate an explanation. DREAM-FLUTE was built during a three-day hackathon at AI2 in response to the Understanding Figurative Language shared task, where it tied for first place. Its foundation is an earlier study by the same three authors, DREAM, a model that enriches each situation mentioned in the input description with relevant information along conceptual dimensions drawn from cognitive science and story understanding.

For each input sentence pair, the model completes two tasks. The first is determining whether the two sentences entail or contradict each other, and the second is generating a textual justification explaining why. The researchers also highlighted how well their single-model approach performs on this task. Furthermore, the system’s adaptability allows it to be customized for various downstream applications and leaves room for future advancements.
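Conceptually, this elaborate-then-explain pipeline can be sketched as below. The prompt formats and the stub `generate` function are illustrative assumptions standing in for the fine-tuned sequence-to-sequence language models the actual system uses, not the paper’s exact interface:

```python
def generate(prompt: str) -> str:
    """Stand-in for a fine-tuned seq2seq language model call (hypothetical)."""
    if prompt.startswith("[SITUATION]"):
        # Step 1: elaborate a "mental model" of the premise -- here, a
        # likely-consequence style elaboration of the described scene.
        return "The speaker is left waiting and grows frustrated."
    # Step 2: predict the label and an explanation, conditioned on the
    # premise, the hypothesis, and the elaborated context.
    return "Contradiction | The premise implies a long wait, not a short one."

def interpret(premise: str, hypothesis: str) -> tuple[str, str]:
    # First build extra context by elaborating the scene in the premise.
    consequence = generate(f"[SITUATION] {premise} [CONSEQUENCE]")
    # Then decide entailment vs. contradiction and justify the decision,
    # using the elaboration as additional input.
    output = generate(
        f"[PREMISE] {premise} [CONTEXT] {consequence} [HYPOTHESIS] {hypothesis}"
    )
    label, explanation = output.split(" | ", 1)
    return label, explanation

label, explanation = interpret(
    "I waited an eternity for the bus.",
    "The bus arrived right away.",
)
print(label)  # Contradiction
print(explanation)
```

In the real system, each `generate` call would be served by a trained model rather than a stub; the sketch only shows how the elaboration feeds into the second step as context.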

Incorporating the DREAM consequence scene elaboration produced particularly strong explanations. Thanks to the quality of these generated explanations, DREAM-FLUTE (consequence) achieved the top spot on the official leaderboard metric. The researchers also presented DREAM-FLUTE (ensemble), an ensemble system that utilizes context to achieve further improved results.
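One simple way an ensemble of such model variants could be combined is to majority-vote the entailment labels and keep the explanation from the most trusted variant that agrees with the winning label. The selection rule below is an illustrative assumption, not the paper’s documented strategy:

```python
from collections import Counter

def ensemble(predictions: list[tuple[str, str]]) -> tuple[str, str]:
    """Combine (label, explanation) pairs from several model variants,
    ordered by how much each variant is trusted (hypothetical rule)."""
    labels = [label for label, _ in predictions]
    # Majority vote over the entailment/contradiction labels.
    majority_label, _ = Counter(labels).most_common(1)[0]
    # Keep the explanation from the first (most trusted) variant
    # that agrees with the majority label.
    for label, explanation in predictions:
        if label == majority_label:
            return label, explanation
    raise ValueError("no prediction matched the majority label")

label, explanation = ensemble([
    ("Contradiction", "The premise implies a long, tiresome wait."),
    ("Entailment", "Both sentences describe the bus arriving."),
    ("Contradiction", "An 'eternity' is the opposite of 'right away'."),
])
print(label)  # Contradiction
```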

Cognitive science has long highlighted the importance of well-defined representations of situations for comprehension and question answering. Using background knowledge and common sense, humans can swiftly fill in such implicit information, but even today’s top AI systems cannot. The DREAM series aims to bridge this gap. Loosely based on this idea, the team set out to determine whether language models can perform a variety of language understanding tasks more effectively when given additional information about the situations described in the input text.

The researchers hope that the DREAM series will serve as a stepping stone toward human-level reasoning capabilities in AI. The team also emphasizes that even though DREAM is a significant first step, there is still room for improvement; a promising direction for future work is creating more accurate, reliable, and practical “mental models.” AI2 invites other researchers to build on their work and enhance the quality of such “mental models” so that AI systems can function more effectively.


Check out the Paper and Reference Article. All Credit For This Research Goes To Researchers on This Project.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.

