This Paper from MBZUAI Introduces 26 Guiding Principles Designed to Streamline the Process of Querying and Prompting Large Language Models

Large Language Models (LLMs) have been in the news throughout the year, and for good reason. Their abilities in processing multimodal information have paved the way for breakthroughs in diverse fields, making them a potent tool for solving a wide range of problems. Getting the most out of these models, however, depends on asking the right questions, that is, providing them with well-crafted prompts. This need has given rise to an entirely new field, prompt engineering, which focuses on crafting optimized, task-specific instructions to elicit better responses.

A team of researchers from Mohamed bin Zayed University of AI (MBZUAI) has introduced 26 guiding principles to improve the quality of prompts for LLMs. In their study, they investigated behaviors such as integrating the intended audience into the prompt, along with other prompt characteristics that shape LLM outputs, in an effort to streamline the process of prompting. Their study notes that LLMs, having been trained on diverse data, respond differently depending on how a prompt is phrased, which highlights the importance of prompt engineering.

The researchers have formulated various principles to elicit high-quality responses from LLMs. Some of them are as follows:

  • Prompts should be concise and clear. Users should refrain from providing overly verbose prompts, as excessive verbosity can confuse the model and lead to irrelevant responses.
  • Contextual relevance should also be kept in mind while writing the prompt. The LLM should be provided with the relevant background and domain of the task by adding keywords and domain-specific terminology.
  • The prompt must align closely with the specific task by using clear language that indicates the nature of the task. Users could phrase the prompt as a question, command, or fill-in-the-blank statement to get the appropriate output format.
  • For sequential tasks, prompts should be structured to guide the model through the process. This can be done by breaking down the task into a series of steps that build upon each other.
  • Lastly, advanced prompts can use programming-like logic, such as conditional statements and logical operators, to guide the model’s reasoning; a minimal sketch of these ideas in practice follows this list.
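
As a rough illustration (not taken from the paper), the short Python sketch below assembles a prompt that applies several of these principles: naming the intended audience, delimiting the instruction, laying out sequential steps, and using simple conditional logic. The function and its arguments are invented for this example.

```python
# Hypothetical illustration: assembling a "principled" prompt. The helper and
# its parameters are invented for this sketch, not taken from the paper.

def build_prompt(task: str, audience: str, steps: list[str], few_shot: str | None = None) -> str:
    """Compose a prompt that states the audience, delimits the task,
    and lays out the sequential steps the model should follow."""
    lines = [
        "###Instruction###",
        f"The intended audience is {audience}.",  # Principle: integrate the audience
        f"Task: {task}",
        "Work through the following steps in order:",  # Principle: sequential structure
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    if few_shot:  # Programming-like logic: include an example only when one is supplied
        lines += ["###Example###", few_shot]
    lines.append("###Answer###")
    return "\n".join(lines)


prompt = build_prompt(
    task="Explain how gradient descent updates model weights.",
    audience="an undergraduate with basic calculus",
    steps=[
        "Define the loss function in one sentence.",
        "State the update rule for a single weight.",
        "Give a one-line numeric example.",
    ],
)
print(prompt)
```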

The researchers evaluated the principles on a manually crafted benchmark called ATLAS, which contains 20 human-selected questions per principle, each posed with and without the principled prompt. They compared models including LLaMA-1, LLaMA-2, GPT-3.5, and GPT-4. The results show that all of the principles improved LLM performance, with some having more impact than others; on average, they observed roughly a 50% improvement across the different LLMs. They also found that larger models benefit more from these principles, with gains growing as model size increases.
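
To make the evaluation setup concrete, here is a minimal sketch of how such a with/without comparison could be organized. The `query_model` helper and the prompt pair below are hypothetical stand-ins for the benchmark's human-selected questions; this is not the authors' code.

```python
# Sketch of an ATLAS-style comparison. Assumptions: `query_model` is a
# hypothetical helper that returns a model's answer, and the prompt pairs
# below are invented examples, not the benchmark's actual questions.

from typing import Callable

# Each entry pairs a baseline prompt with its "principled" rewrite for one principle.
prompt_pairs = [
    {
        "principle": "Integrate the intended audience",
        "baseline": "Explain quantum entanglement.",
        "principled": "Explain quantum entanglement to a high-school student.",
    },
    # ... one such pair per question (20 questions per principle in the paper)
]

def compare(query_model: Callable[[str], str]) -> list[dict]:
    """Collect baseline and principled responses side by side so they can be
    rated for response quality and correctness."""
    results = []
    for pair in prompt_pairs:
        results.append({
            "principle": pair["principle"],
            "baseline_response": query_model(pair["baseline"]),
            "principled_response": query_model(pair["principled"]),
        })
    return results

# Example usage with a stub model (replace the lambda with a real model call):
print(compare(lambda prompt: f"[stubbed answer to: {prompt}]"))
```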

In conclusion, the authors of this research paper have crafted 26 guiding principles for writing better prompts that elicit better responses from LLMs. They focus on areas such as conciseness, contextual relevance, and task alignment to create a comprehensive guide to better prompting. Although the work has limitations and may not cover very complex questions, it shows promising results and can help researchers working on prompt engineering.


Check out the Paper. All credit for this research goes to the researchers of this project.



Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.




