A New Artificial Intelligence Method Called Synthetic Prompting Leverages Large Language Models' (LLMs) Own Knowledge and Generative Power to Improve Their Reasoning
Large Language Models (LLMs) can complete various tasks without fine-tuning when given few-shot demonstrations, i.e., sample inputs and outputs for a task. Chain-of-thought prompting, which supplies intermediate reasoning steps for the task, can help LLMs perform even better. However, the quality of the demonstrations significantly affects the LLMs' few-shot performance, particularly on reasoning tasks that call for sophisticated and varied reasoning patterns. Manually creating a large and diverse set of examples for demonstration selection is expensive and time-consuming, while relying on only a handful of demonstrations can prevent the LLMs from generalizing and adapting to varied test inputs.
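For context, a chain-of-thought demonstration pairs a question not just with its answer but with the intermediate steps leading to it. A minimal illustrative example (the question and wording below are invented for illustration, not drawn from the paper) might look like:

```python
# A minimal chain-of-thought (CoT) demonstration: the prompt pairs a
# question with explicit intermediate reasoning steps, not just the
# final answer. The content below is illustrative, not from the paper.
cot_demo = (
    "Q: A shop sells pens at $2 each. Tom buys 4 pens and pays with a "
    "$10 bill. How much change does he get?\n"
    "A: 4 pens cost 4 * 2 = 8 dollars. Tom pays 10 dollars, so his "
    "change is 10 - 8 = 2 dollars. The answer is 2."
)

# At inference time, demonstrations are prepended to the test question,
# and the model continues with its own reasoning chain and answer.
prompt = cot_demo + "\n\nQ: <test question goes here>\nA:"
```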
This study proposes a novel approach called SYNTHETIC PROMPTING, which employs the LLMs' own knowledge and generative power to supplement a small set of demonstrations with self-generated examples, and then uses those examples to elicit better reasoning from the model. Specifically, given a few seed examples, each consisting of a question and a chain of reasoning steps, the researchers direct an LLM to produce more examples by alternating between two processes: (1) the backward process, in which the LLM synthesizes a question from a self-generated reasoning chain, which ensures that the question is understandable and well-defined; and (2) the forward process, in which the LLM produces a reasoning chain for the synthesized question, refining the chain so that it is more precise and consistent with the question.
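A minimal sketch of this alternating synthesis loop is shown below. The `llm` wrapper, the prompt templates, and the stopping condition are all illustrative assumptions, not the paper's actual prompts or implementation:

```python
# Sketch of the backward-forward synthesis loop of Synthetic Prompting.
# Assumptions (not from the paper): the `llm` wrapper, the prompt
# templates, and representing examples as (question, chain) tuples.

def llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion API."""
    raise NotImplementedError("plug in your model client here")

def synthesize_examples(seed_examples, target_count):
    """Grow a pool of (question, reasoning_chain) pairs from a few seeds."""
    pool = list(seed_examples)
    while len(pool) < target_count:
        demos = "\n\n".join(f"Q: {q}\nA: {c}" for q, c in pool)

        # Backward process: first self-generate a reasoning chain, then
        # synthesize a question that the chain answers, so the question
        # is well-defined and answerable.
        chain = llm(demos + "\n\nWrite a new step-by-step reasoning chain:")
        question = llm(
            demos + "\n\nReasoning chain:\n" + chain +
            "\n\nWrite a clear, self-contained question answered by this chain:"
        )

        # Forward process: re-derive the reasoning chain for the
        # synthesized question, refining it to be precise and consistent.
        refined_chain = llm(demos + f"\n\nQ: {question}\nA: Let's think step by step.")

        pool.append((question, refined_chain))
    return pool
```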
They repeat this process until they have enough synthetic examples. They then propose a new selection scheme based on in-cluster complexity: by clustering the demonstrations and picking the most complex one (the one with the longest reasoning chain) from each cluster, they aim to maximize the diversity and informativeness of the selected demonstrations. Finally, prompting the LLM with the chosen demonstrations, they ask it to generate a reasoning chain for a test question and use that chain to derive the answer. They evaluate the approach on a variety of reasoning tasks, covering numerical, symbolic, and algorithmic reasoning.
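A rough sketch of the in-cluster complexity selection follows, assuming TF-IDF features and k-means clustering (stand-ins for whatever representation the paper actually uses) and taking the number of lines in a reasoning chain as the complexity measure:

```python
# Sketch of in-cluster complexity-based demonstration selection.
# Assumptions (not from the paper): TF-IDF question features and
# k-means clustering; chain length in lines as the complexity proxy.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def select_demonstrations(pool, num_demos):
    """Cluster synthesized examples and keep the most complex one
    (longest reasoning chain) from each cluster."""
    questions = [q for q, _ in pool]
    features = TfidfVectorizer().fit_transform(questions)
    labels = KMeans(n_clusters=num_demos, n_init=10).fit_predict(features)

    selected = []
    for cluster in range(num_demos):
        members = [ex for ex, lab in zip(pool, labels) if lab == cluster]
        # Complexity proxy: number of reasoning steps (lines) in the chain.
        selected.append(max(members, key=lambda ex: len(ex[1].splitlines())))
    return selected
```

The selected demonstrations are then concatenated in front of the test question, exactly as in ordinary few-shot prompting.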
They show that, under the same few-shot setup as prior work, their approach can greatly improve the LLMs' performance, outperforming state-of-the-art techniques by up to 15.6% in absolute terms. Their contributions are as follows:
• They propose SYNTHETIC PROMPTING, a new technique that uses prompting to supplement a small set of demonstrations with examples the LLM synthesizes itself, thereby improving the LLM's reasoning.
• They introduce an in-cluster complexity-based scheme for choosing diverse and informative demonstrations from the augmented set for inference.
• They demonstrate the effectiveness of their approach on three reasoning tasks, where it significantly outperforms earlier methods.
No publicly available code implementation exists as of now.
Check out the Paper. All credit for this research goes to the researchers on this project.
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest lies in image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.