How does GPT-4’s steerable nature set it apart from previous Large Language Models (LLMs)?

The release of OpenAI’s new GPT-4 is already receiving a lot of attention. The model is the latest milestone in OpenAI’s effort to scale up deep learning, and it arrives with new capabilities thanks to its multimodal nature. Unlike the previous version, GPT-3.5, which only lets ChatGPT take textual inputs, GPT-4 accepts both text and images as input. Built on the transformer architecture, GPT-4 displays human-level performance on various benchmarks and is more reliable and creative than its predecessors.

OpenAI’s GPT-4 has been described as more steerable than previous versions. Recently, in a Twitter thread, AI researcher Cameron R. Wolfe discussed the concept of steerability in Large Language Models (LLMs), specifically in the case of the latest GPT-4. Steerability refers to the ability to control or modify a language model’s behavior: making the LLM adopt different roles, follow particular instructions from the user, or speak with a certain tone.

Steerability lets a user change the behavior of an LLM on demand. In his thread, Wolfe also noted that the older GPT-3.5 model used by the well-known ChatGPT was not very steerable and had limitations for chat applications: it mostly ignored system messages, and its dialogue was largely confined to a fixed persona and tone. GPT-4, on the contrary, is more reliable and capable of following detailed instructions.


In GPT-4, OpenAI has provided additional controls within the GPT architecture. System messages now let users customize the AI’s style and task. A user can prescribe the AI’s tone, word choice, and style in order to receive a more specific and personalized response. Wolfe explains that GPT-4 is trained through self-supervised pre-training followed by RLHF-based fine-tuning. Reinforcement Learning from Human Feedback (RLHF) trains the language model using feedback from human evaluators, which serves as a reward signal for judging the quality of the generated text.
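As a rough sketch of what this looks like in practice (assuming the openai Python package’s 0.x-style chat completion interface; the persona text and prompt here are illustrative, not taken from OpenAI’s documentation), a system message that prescribes tone and style can be passed alongside the user’s request:

```python
import openai  # openai Python package, 0.x-style API

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# The system message prescribes tone, word choice, and style;
# the user message carries the actual request.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer in a formal, concise tone. Avoid jargon and "
                    "keep every reply under three sentences."},
        {"role": "user",
         "content": "What is reinforcement learning from human feedback?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Changing only the system message (for example, asking for a playful tone or a different persona) changes how the same user question is answered, which is the behavior the thread describes as steerability.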

To make GPT-4 more steerable, safer, and less likely to produce false or deceptive information, OpenAI has hired experts in multiple fields to evaluate the model’s behavior and provide better data for RLHF-based fine-tuning. These experts can help identify and correct errors or biases in the model’s responses, ensuring more accurate and reliable output.

Steerability can be used in many ways, such as setting GPT-4’s system message in API calls. A user can command it to write in a different style, tone, or voice by stating prompts like “You are a data expert” and have it explain a data science concept. When set up as a “Socratic tutor” and asked how to solve a linear equation, GPT-4 responded by saying, “Let’s start by analyzing the equations.” In conclusion, GPT-4’s steerability provides greater control over an LLM’s behavior, enabling more diverse and effective applications. The model can still hallucinate facts and make reasoning errors, but it remains a very significant development in the AI industry.
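For instance, the “Socratic tutor” behavior could be set up roughly as below. This is a sketch only: the tutor instructions and the student’s equations are made up for illustration, and the code again assumes the 0.x-style openai Python package.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The system message fixes the persona for the whole conversation;
# every later user turn is answered in that persona.
messages = [
    {"role": "system",
     "content": "You are a Socratic tutor. Never give the answer directly; "
                "guide the student with questions instead."},
    {"role": "user",
     "content": "How do I solve 3x + 2y = 7 and x - y = 1?"},
]

reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
assistant_turn = reply["choices"][0]["message"]
print(assistant_turn["content"])

# Append the assistant's turn so the persona and context persist
# across the rest of the dialogue.
messages.append({"role": "assistant", "content": assistant_turn["content"]})
```

Keeping the system message at the top of the conversation is what lets the tutor persona hold across multiple turns instead of drifting back to a default assistant voice.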


Check out the source. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 18k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.




Tanya Malhotra is a final year undergrad from the University of Petroleum & Energy Studies, Dehradun, pursuing BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.




