This AI Research Paper Proposes a Policy Framework for Auditing Large Language Models (LLMs) by Breaking Down Responsibilities at the Governance, Model, and Application Levels

Technology companies and policymakers can use auditing as a governance tool to identify and mitigate risks associated with artificial intelligence (AI) systems. In particular, auditing is a systematic, impartial process of gathering and analyzing data about an entity’s operations or assets and then reporting the findings to the appropriate parties.

Three ideas support the promise of auditing as an AI governance mechanism:

  1. Procedural regularity and transparency contribute to good governance.
  2. Proactive AI system design helps identify risks and prevent harm before it occurs.
  3. Operational independence between the auditor and the auditee contributes to the objectivity and professionalism of the evaluation.

Prior research on AI auditing has concentrated on ensuring that specific applications comply with predetermined, often sector-specific standards.

For instance, researchers have created protocols for auditing AI systems used in internet searches, medical diagnosis, and recruiting.


However, the capabilities of AI systems tend to expand in scope over time. The term “foundation models” was recently coined by Bommasani et al. to refer to models that can be transferred to various downstream tasks via transfer learning. Technically speaking, foundation models are not necessarily novel; still, they differ from other AI systems in that they perform effectively across a wide range of tasks and exhibit emergent capabilities when scaled. The rise of foundation models also signals a shift in how AI systems are built and deployed: foundation models are typically trained and released by one actor and then adapted for multiple applications by other actors. From an AI auditing standpoint, foundation models present serious difficulties.

For instance, it can be difficult to assess the risks AI systems pose in isolation from the context in which they are used, and it remains unclear how responsibility for harms should be allocated between technology providers and downstream developers. Taken together, the capabilities and training methods of foundation models have advanced faster than the methods and tools available to assess their ethical, legal, and technical soundness. This suggests that application-level audits, while crucial to AI governance, must be complemented by additional forms of oversight and control. To help fill that gap, this paper concentrates on a subset of foundation models, namely large language models (LLMs).

Language models generate the most probable sequences of words, code, or other data, starting from a source input known as a prompt. Natural language processing (NLP) has historically employed a variety of model designs, including probabilistic techniques. Nevertheless, most current LLMs, including the ones this article focuses on, are built using deep neural networks trained on a sizable corpus of text. These LLMs include GPT-3, PaLM, LaMDA, Gopher, and OPT. After pretraining, an LLM can be adapted (with or without fine-tuning) to serve a variety of applications, from spell-checking to creative writing. Developing LLM auditing procedures is an important effort for two reasons.
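As a rough illustration of this prompt-to-continuation behaviour, the minimal sketch below uses the openly available GPT-2 model (a much smaller stand-in for the LLMs named above) through the Hugging Face transformers library; the prompt string is invented for this example.

```python
from transformers import pipeline

# GPT-2 is used here only as a small, openly available stand-in for the
# much larger LLMs discussed in the paper (GPT-3, PaLM, LaMDA, Gopher, OPT).
generator = pipeline("text-generation", model="gpt2")

prompt = "Auditing large language models is important because"  # example prompt

# Greedy decoding (do_sample=False) returns the model's most probable continuation.
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```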

Being able to audit LLMs along several normative dimensions (such as privacy, bias, and intellectual property) is a crucial undertaking in its own right, given the urgency of addressing such concerns. Prior research has shown that LLMs present several ethical and social challenges, including the perpetuation of harmful stereotypes, the leakage of personally identifiable information protected by privacy laws, the spread of misinformation, plagiarism, and the unauthorized use of intellectual property.
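As a toy illustration of what a model-level check along one such dimension might look like, the sketch below compares a small open model’s most probable continuations for two prompts that differ only in a demographic term. The model choice (GPT-2), the templates, and the setup are assumptions for illustration only and are not the auditing protocol proposed in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Hypothetical probe templates that differ only in a demographic term.
templates = ["The man worked as a", "The woman worked as a"]

for prompt in templates:
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits at the last position
    probs = logits.softmax(dim=-1)
    top = torch.topk(probs, k=5)
    # Decode the five most probable next tokens for a side-by-side comparison.
    continuations = [tok.decode(int(idx)).strip() for idx in top.indices]
    print(f"{prompt!r} -> {continuations}")
```

Systematic differences between the two lists would be one (very coarse) signal that the model associates occupations with gender, the kind of finding a model-level audit would flag for further scrutiny.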

Figure 1: The proposed three-layered approach

CLIP, a vision-language model trained to predict which caption matches a given image, is not an LLM, but it can be adapted for several downstream applications, and such models face similar governance issues. The same is true for other generative systems such as DALL·E 2. Improving LLM auditing procedures may therefore also benefit the future auditing of other foundation models and even more capable generative systems. This paper makes three novel contributions. First, the authors advance six claims about how auditing practices should be designed to account for the risks posed by LLMs; these claims are based on a review of the capabilities and limitations of existing AI auditing practices. Second, they offer a framework for auditing LLMs based on best practices from IT governance and systems engineering. In particular, they propose a three-layered approach in which governance audits (of technology providers that design and distribute LLMs), model audits (of LLMs after pretraining but before their release), and application audits (of applications built on top of LLMs) complement and inform one another (see Figure 1 above). Third, they discuss the limitations of this three-layered approach and outline directions for further research.
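For readers who think in code, here is a hypothetical sketch of how the three layers and their respective auditees could be laid out side by side; the field names and example checks are illustrative assumptions, not the paper’s exact audit criteria.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AuditLayer:
    """One layer of the proposed three-layered auditing approach."""
    name: str                  # "governance", "model", or "application"
    auditee: str               # who or what is being audited at this layer
    example_checks: List[str] = field(default_factory=list)  # hypothetical examples

# The auditees follow the paper's description; the example checks are assumptions.
framework = [
    AuditLayer("governance", "technology provider that designs and distributes LLMs",
               ["accountability structures", "quality management", "risk disclosure"]),
    AuditLayer("model", "LLM after pretraining but before release",
               ["performance", "robustness", "bias", "privacy leakage"]),
    AuditLayer("application", "product or service built on top of an LLM",
               ["fitness for purpose", "legal compliance in context", "impact on users"]),
]

for layer in framework:
    print(f"{layer.name.title()} audit of the {layer.auditee}:")
    for check in layer.example_checks:
        print(f"  - {check}")
```

The point of the layered structure is that findings at one level (say, a governance audit revealing weak risk-management processes) inform what the audits at the next level should look for.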

Their work connects to a broader research agenda and policymaking process. Organizations such as DeepMind, Microsoft, and Anthropic have published research mapping the risks of harm posed by LLMs and highlighting the need for new governance mechanisms to address the related ethical challenges. AI labs such as Cohere, OpenAI, and AI21 have expressed interest in understanding what it means to develop LLMs responsibly. Governments are also concerned with ensuring that society benefits from LLMs while the associated risks are kept in check.


Check out the Paper. All credit for this research goes to the researchers on this project.


Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.


