Cohere, OpenAI, and AI21 Labs Introduce a Set of Key Principles to Help Providers of Large Language Models (LLMs) Mitigate the Risks of AI

This article is written as a summary by Marktechpost Staff based on the paper 'Joint Recommendation for Language Model Deployment'. All credit for this research goes to the researchers of this project. Check out the paper and blog post.

In recent years, large language models (LLMs) such as OpenAI’s GPT-3 have been widely used for tasks like generating human-like text and code, drafting emails and articles, fixing software bugs, and more. Building these models typically requires enormous computational resources, so training and deploying them is expensive, and many companies and institutions are unable to use them because of these demands.

A new collaboration between Cohere, OpenAI, and AI21 Labs presents a preliminary set of best practices for any organization developing or deploying large language models. Preventing malicious use of language models is a first step toward cooperatively guiding safer LLM development and deployment and addressing the global challenges posed by advances in AI:

1. Prohibiting misuse

  • LLM usage guidelines and terms of use should be published in a way that prohibits material harm to individuals, communities, and society, such as spam, fraud, or astroturfing. Usage guidelines should also specify domains where LLM use requires extra scrutiny and prohibit high-risk use cases that are not appropriate, such as classifying people based on protected characteristics.
  • Methods and infrastructure should be developed to enforce these usage guidelines. This may include rate limits, content filtering, monitoring for anomalous activity, and other mitigations; a rough sketch of what such checks might look like is shown below.
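
To make the enforcement point concrete, the following is a minimal sketch of the kind of checks a provider might place in front of a text-generation endpoint: a sliding-window rate limit plus a naive keyword filter. The function names, the blocked-term list, and the `generate` callable are illustrative assumptions, not part of the published recommendation or any provider's actual API; production systems would rely on trained classifiers and far more sophisticated monitoring.

```python
import time
from collections import defaultdict, deque

BLOCKED_TERMS = {"astroturf", "phishing"}   # placeholder terms, not a real policy list
MAX_REQUESTS_PER_MINUTE = 20                # illustrative limit only

_request_log = defaultdict(deque)           # user_id -> timestamps of recent requests


def within_rate_limit(user_id, now=None):
    """Sliding-window rate limit: allow at most MAX_REQUESTS_PER_MINUTE per user."""
    now = time.time() if now is None else now
    window = _request_log[user_id]
    while window and now - window[0] > 60:
        window.popleft()                     # drop requests older than one minute
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True


def violates_content_policy(prompt):
    """Naive keyword filter; real deployments would use trained classifiers."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def guarded_completion(user_id, prompt, generate):
    """Wrap any text-generation callable with the usage checks above."""
    if not within_rate_limit(user_id):
        raise RuntimeError("Rate limit exceeded; request refused.")
    if violates_content_policy(prompt):
        raise ValueError("Prompt rejected by content policy.")
    return generate(prompt)                  # `generate` is a hypothetical LLM call
```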

2. Reducing the risk of accidental harm

  • Harmful model behavior should be mitigated proactively. Best practices include comprehensive model evaluation to properly assess a model’s limitations, minimizing potential sources of bias in training corpora, and techniques that reduce unsafe behavior, such as learning from human feedback.
  • Model- and use-case-specific safety best practices should also be documented. Documenting known flaws and vulnerabilities, such as bias, can reduce the risk of unintentional harm in some circumstances; a minimal sketch of recording such findings appears after this section.
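
As an illustration of how evaluation findings might be documented programmatically, here is a small sketch, assuming a hypothetical `generate` callable and a hypothetical `is_risky` classifier that returns a short issue description (or None) for a given output. The structure is a rough stand-in for a model card or safety report, not any provider's actual process.

```python
from dataclasses import dataclass, field


@dataclass
class SafetyFinding:
    prompt: str
    output: str
    issue: str                               # e.g. "potential bias", "unsafe instruction"


@dataclass
class ModelSafetyReport:
    model_name: str
    findings: list = field(default_factory=list)

    def add(self, prompt, output, issue):
        self.findings.append(SafetyFinding(prompt, output, issue))

    def summary(self):
        return f"{self.model_name}: {len(self.findings)} documented issue(s)"


def evaluate(model_name, probe_prompts, generate, is_risky):
    """Run probe prompts through the model and document any flagged outputs."""
    report = ModelSafetyReport(model_name)
    for prompt in probe_prompts:
        output = generate(prompt)            # hypothetical LLM completion call
        verdict = is_risky(output)           # returns an issue string or None
        if verdict:
            report.add(prompt, output, verdict)
    return report
```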

3. Collaborating with stakeholders in a thoughtful manner

  • Teams should be built with a range of backgrounds, and feedback should be solicited from a wide range of people. The broader perspective helps characterize and address how language models will behave in the diversity of the real world.
  • Lessons learned about LLM safety and misuse should also be shared publicly to enable widespread adoption and cross-industry iteration on best practices.
  • All labor in the language model supply chain should be treated with respect. For example, providers should maintain high standards for those reviewing model outputs in-house and should hold vendors to well-defined standards (e.g., ensuring labelers can opt out of a given task).

The principles above grew out of the team’s experience providing LLMs through an API. Regardless of release strategy, the team hopes the initiative will be valuable to the community. They plan to keep learning about LLM limitations and avenues for misuse, and to update these principles and practices over time in partnership with the broader community.
