OpenAI Releases A New Feature That Allows Developers To Customize GPT-3, Its Powerful Natural Language Processing (NLP) Model

Source: https://openai.com/blog/customized-gpt3/

GPT-3 is an advanced natural language processing model developed by OpenAI. It returns a natural-language text completion in response to any text prompt, such as a phrase or a sentence. Developers use GPT-3 in their applications through a pay-as-you-go application programming interface (API) to perform tasks such as text translation and software code generation.

OpenAI has recently released new functionality that allows developers to create their own fine-tuned versions of GPT-3. The new customization option is now available through the API.

GPT-3 can perform a wide range of natural language tasks given just a few examples, an approach known as few-shot learning or prompt design. Customization can produce much better results still, because fine-tuning lets users train the model on far more examples than can fit inside a prompt.
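For context, here is a minimal sketch of prompt design with the openai Python package, in which the prompt itself carries a handful of labeled examples and the model infers the task from them. The sentiment-classification task, model choice, and prompt wording are illustrative assumptions, not part of OpenAI's announcement; the call assumes an OPENAI_API_KEY set in the environment.

    # Few-shot "prompt design": the prompt carries the labeled examples.
    # Assumes the `openai` package (pre-1.0 API) and OPENAI_API_KEY set.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    few_shot_prompt = (
        "Classify the sentiment of each review as Positive or Negative.\n\n"
        "Review: The battery lasts all day.\nSentiment: Positive\n\n"
        "Review: The screen cracked within a week.\nSentiment: Negative\n\n"
        "Review: Setup was quick and painless.\nSentiment:"
    )

    response = openai.Completion.create(
        engine="davinci",       # base GPT-3 model
        prompt=few_shot_prompt,
        max_tokens=1,           # only the label is needed
        temperature=0,          # deterministic output for classification
    )
    print(response.choices[0].text.strip())  # e.g. "Positive"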

The adaptability of GPT-3 is due to its architecture. The AI is based on the Transformer design, a popular approach for building natural-language processing models. GPT-3, however, is far larger than the usual Transformer-based neural network: it has 175 billion parameters, more than most other models in its category. Parameters are the learned weights that determine how the model interprets its input.
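To make "parameters" concrete, the toy calculation below counts the weights in a single dense matrix at GPT-3's publicly reported hidden size of 12,288 (the GPT-3 paper also reports 96 layers); the standalone matrix is a stand-in for illustration, not OpenAI's code.

    # Parameters are learned weight values. One 12288x12288 weight matrix
    # already holds ~151M of them; GPT-3 stacks 96 transformer layers, each
    # with several such matrices plus embeddings, reaching 175B in total.
    import numpy as np

    d_model = 12288                        # GPT-3's reported hidden size
    W = np.random.randn(d_model, d_model)  # one matrix = d_model^2 parameters
    print(f"{W.size:,} parameters in a single {d_model}x{d_model} matrix")
    # -> 150,994,944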

GPT-3’s capacity to handle a wide range of tasks is also due to how it was trained. OpenAI trained GPT-3 on hundreds of billions of words of text gathered from a variety of sources. As a result, GPT-3 can often learn to perform a new task from only a few examples.

Developers don’t need to train GPT-3 on a huge number of examples before using it in their applications. However, increasing the amount of task-specific training data can boost accuracy dramatically, and OpenAI’s new customization capability is designed for exactly that.

Developers can use the new feature to fine-tune GPT-3 on a particular dataset, optimizing it for a specific use case. It also allows them to retrain GPT-3 as new information becomes available. For example, a company that uses GPT-3 to analyze customer feedback might retrain the model every week on the most recent user suggestions.
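A hedged sketch of that workflow follows, using the prompt/completion JSONL format from OpenAI's fine-tuning guide; the feedback-classification records, file name, and weekly framing are assumptions for illustration.

    # Sketch of the fine-tuning workflow (pre-1.0 `openai` package, which
    # reads OPENAI_API_KEY from the environment).
    import json
    import openai

    # 1. Write this week's labeled feedback as JSONL training records.
    records = [
        {"prompt": "Feedback: App crashes on upload ->", "completion": " bug report"},
        {"prompt": "Feedback: Please add dark mode ->", "completion": " feature request"},
    ]
    with open("feedback_week_48.jsonl", "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

    # 2. Upload the dataset and start a fine-tune job on a base model.
    upload = openai.File.create(file=open("feedback_week_48.jsonl", "rb"),
                                purpose="fine-tune")
    job = openai.FineTune.create(training_file=upload.id, model="curie")
    print(job.id)  # poll this job; the result is a new model ID such as
                   # "curie:ft-yourorg-2021-12-14" (hypothetical), usable
                   # in subsequent Completion calls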

The team showed that fine-tuning with fewer than 100 examples can improve GPT-3’s performance on some tasks, and that performance keeps improving with more data: each doubling of the number of examples tends to improve quality roughly linearly.
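That "linear per doubling" claim describes log-linear scaling, which the small sketch below makes concrete; the starting accuracy and per-doubling gain are made-up numbers purely to illustrate the shape of the curve, not figures from OpenAI.

    # If quality gains b points per doubling of the dataset, then
    # quality(n) ~ q0 + b * log2(n / n0).
    import math

    q0, n0, b = 70.0, 100, 3.0   # assumed: 70% accuracy at 100 examples, +3 pts per doubling
    for n in [100, 200, 400, 800, 1600]:
        quality = q0 + b * math.log2(n / n0)
        print(f"{n:>5} examples -> ~{quality:.1f}% accuracy")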

Viable Inc., a venture-backed firm, is using GPT-3 to power a service that helps businesses analyze customer feedback and identify the most frequently recurring suggestions. According to OpenAI, Viable increased the accuracy of its feedback summaries by 24% with the new customization feature.

Training GPT-3 on custom datasets has the added benefit of potentially lowering costs. To have GPT-3 perform a task, an application must send it a text prompt stating what to do, and OpenAI charges for usage based on the length of that text. Customizing GPT-3 with a specialized training dataset for a given use case can let the model do the same task with shorter prompts, lowering API costs.
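The sketch below illustrates that cost point under the same assumptions as the earlier examples: with a base model, the task description and examples ride along in every request, while a fine-tuned model (the model ID here is hypothetical) needs only the new input.

    # Base model: instructions plus in-context examples in every request,
    # so every call pays for the full prompt.
    import openai

    long_prompt = (
        "Classify the sentiment of each review as Positive or Negative.\n\n"
        "Review: The battery lasts all day.\nSentiment: Positive\n\n"
        "Review: The screen cracked within a week.\nSentiment: Negative\n\n"
        "Review: Setup was quick.\nSentiment:"
    )
    openai.Completion.create(engine="davinci", prompt=long_prompt, max_tokens=1)

    # Fine-tuned model: the examples were baked in at training time, so the
    # per-request prompt shrinks to just the new input. Fine-tuned models are
    # addressed via the `model` parameter rather than `engine`.
    short_prompt = "Review: Setup was quick.\nSentiment:"
    openai.Completion.create(model="curie:ft-yourorg-2021-12-14",
                             prompt=short_prompt, max_tokens=1)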

OpenAI’s latest release exemplifies the industry’s rising interest in AutoML technology: customizing neural networks with task-specific training datasets can bring significant reliability gains.

GPT-3 is available in multiple variants that differ in performance, accuracy, and cost. OpenAI has also collaborated with Microsoft on Codex, a specialized version of GPT-3 that offers code-completion features for developers.

Get Access: https://beta.openai.com/docs/guides/fine-tuning/preparing-your-dataset
