This AI Research from UCLA Indicates Large Language Models (Such as GPT-3) Have Acquired an Emergent Ability to Find Zero-Shot Solutions to a Broad Range of Analogy Problems

Analogical reasoning serves as a cornerstone of human intelligence and ingenuity. When faced with an unfamiliar challenge, people frequently devise viable solutions by methodically comparing it to a more familiar scenario. This approach plays a key role in how humans think across a wide range of activities, from solving everyday problems to generating creative ideas and pushing the boundaries of scientific discovery.

With the advancement of deep learning and large language models (LLMs), these models are now being extensively tested and studied for analogical reasoning. A central question is whether advanced language models possess the capacity for independent reasoning and abstract pattern recognition, abilities long regarded as foundational to human intelligence.

A study conducted by a UCLA research team has shed light on the true capabilities of LLMs. The research has gained notable recognition for its findings, which were published in a recent edition of Nature Human Behaviour in an article titled “Emergent Analogical Reasoning in Large Language Models.” The study suggests that large language models (LLMs) can reason by analogy in a human-like way rather than merely imitating human thinking through statistical pattern matching.

The study involved a head-to-head assessment between human reasoners and a strong language model (text-davinci-003, a version of GPT-3) across a range of analogical reasoning tasks.

The researchers examined GPT-3 on various analogy tasks without prior training and directly compared its responses with human responses. These tasks included a novel text-based matrix reasoning challenge that drew inspiration from the rule structure of Raven’s Standard Progressive Matrices (SPM). They also carried out a visual analogy task.
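To give a sense of what a text-based matrix reasoning item can look like, the snippet below is a minimal Python sketch that renders a 3x3 matrix governed by a simple progression rule as plain text with a missing final cell. The function name and the specific rule are illustrative assumptions for this article, not the paper's actual matrix stimuli.

```python
# Hypothetical illustration of a text-based matrix reasoning prompt
# (not the stimuli used in the paper): each row follows a +1 progression
# rule, and the final cell is blanked out for the model to complete.

def make_progression_matrix_prompt(start: int = 1, step: int = 1) -> str:
    rows = []
    for r in range(3):
        rows.append([start + (r * 3 + c) * step for c in range(3)])
    rows[2][2] = None  # blank out the final cell, as in matrix reasoning tasks
    lines = [
        "[" + " ".join("?" if v is None else str(v) for v in row) + "]"
        for row in rows
    ]
    return "\n".join(lines) + "\nWhat number should replace the ?"

if __name__ == "__main__":
    print(make_progression_matrix_prompt())
    # [1 2 3]
    # [4 5 6]
    # [7 8 ?]
    # What number should replace the ?
```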

The starting point for the model was a base version trained on a massive web-based collection of real-world language data totaling over 400 billion tokens. This training process was guided by a next-token prediction goal, where the model learned to predict the most probable next token in a given sequence of text.
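As a rough illustration of that objective, the following sketch computes the standard next-token cross-entropy loss with a toy PyTorch model. The tiny vocabulary, dimensions, and single linear layer are stand-in assumptions for brevity; GPT-3's actual transformer architecture, tokenizer, and 400-billion-token corpus are far larger and are not reproduced here.

```python
# Minimal sketch of the next-token prediction objective, assuming a toy
# vocabulary and a stand-in model rather than GPT-3's real architecture.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),  # stand-in for the transformer stack
)

tokens = torch.randint(0, vocab_size, (1, 16))   # one toy sequence of 16 token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

logits = model(inputs)                           # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # gradients push the model toward better next-token guesses
```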

This assessment encompassed four distinct task categories, each strategically crafted to explore various facets of analogical reasoning:

  1. Text-based matrix reasoning challenges
  2. Letter-string analogies
  3. Four-term verbal analogies
  4. Story analogies

Across these domains, the researchers directly compared the model’s performance with human performance, examining both overall accuracy and error patterns across a range of task conditions designed to mirror how humans approach analogical reasoning.
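To make the zero-shot setup concrete, here is a hedged sketch of how two of these task formats (letter-string analogies and four-term verbal analogies) could be posed to text-davinci-003 through the legacy openai-python (<1.0) completions interface. The prompt wording is illustrative rather than the paper's exact stimuli, the API key is a placeholder, and the model has since been deprecated by OpenAI, so the call is shown only to convey the idea of querying with no examples or fine-tuning.

```python
# Illustrative zero-shot prompts, not the paper's actual stimuli.
import openai  # requires the legacy client: pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # placeholder

prompts = {
    "letter_string": (
        "Let's solve a puzzle.\n"
        "If a b c d changes to a b c e, what does i j k l change to?"
    ),
    "verbal_analogy": "Complete the analogy.\nlove : hate :: rich : ",
}

for name, prompt in prompts.items():
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,      # no examples or fine-tuning: purely zero-shot
        max_tokens=16,
        temperature=0,      # greedy decoding for reproducible answers
    )
    print(name, "->", response["choices"][0]["text"].strip())
```

The same pattern extends to the matrix reasoning and story analogy formats by swapping in the corresponding prompt text.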

GPT-3 displayed a striking ability to grasp abstract patterns, matching or exceeding human performance in many conditions. Preliminary tests of GPT-4 suggest even stronger results. Taken together, the evidence indicates that large language models such as GPT-3 can spontaneously solve a broad range of analogy problems in a zero-shot fashion.

Moreover, the researchers found that text-davinci-003 excelled at analogy tasks. Interestingly, earlier model versions also performed reasonably well on certain tasks, suggesting that a combination of factors contributed to text-davinci-003’s stronger analogical reasoning.

GPT-3 performed impressively on letter-string analogies, four-term verbal analogies, and identifying analogies between stories, all without task-specific training. These findings contribute to the growing body of evidence about what advanced language models can do, suggesting that the most capable models have already developed an emergent ability to reason by analogy.


Check out the Paper. All credit for this research goes to the researchers on this project.


Rachit Ranjan is a consulting intern at MarktechPost. He is currently pursuing his B.Tech at the Indian Institute of Technology (IIT) Patna. He is actively shaping his career in the field of Artificial Intelligence and Data Science and is passionate about and dedicated to exploring these fields.

