Researchers Develop ‘TiCoder’, a Framework That Uses User Feedback to Generate Code With 90.4% Consistency to User Intent

One of the most striking capabilities of powerful pretrained large language models (LLMs) in natural language processing is their ability to automatically generate code from informal natural language prompts. However, because natural human language is frequently ambiguous, LLMs have trouble writing code that accurately captures user intent. To address this challenge, a research team from Microsoft Research, the University of Pennsylvania, and the University of California, San Diego, proposed TiCoder (Test-driven Interactive Coder) in their recent publication, “Interactive Code Generation via Test-Driven User-Intent Formalization.” Their process for test-driven user-intent formalization (TDUIF) uses user feedback to produce code that is 90.4 percent consistent with user intent from natural language inputs.

Evaluating the accuracy of LLM-generated code is itself challenging because the precise intent is difficult to determine from natural language inputs. It is harder still to comprehend and assess code suggestions without running or debugging them. These factors may lead users to accept flawed code or to reject correct code that is simply too complex to understand. The TDUIF-based methodology developed by the team is intended to address these problems by using feedback to produce code that aligns with the user intent expressed in natural language. The proposed framework generates tests to clarify and formalize user intent, and then generates code that is consistent with those tests.
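To make this concrete, here is a small, hypothetical illustration (not taken from the paper) of how a single unit test can pin down a vague prompt; the prompt and the function name `remove_duplicates` are invented for this example.

```python
# Hypothetical prompt: "Write a function remove_duplicates(xs) that removes
# duplicate elements from a list." The prompt does not say whether the
# original order must be preserved, so two different implementations could
# both look "correct".

def remove_duplicates(xs):
    """Candidate implementation that keeps the first occurrence of each element."""
    seen = set()
    result = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

# A candidate test the agent could show the user. Approving it commits the
# user to the order-preserving interpretation; a sorted-output implementation
# would be pruned because it fails this test.
def test_remove_duplicates_preserves_order():
    assert remove_duplicates([3, 1, 3, 2, 1]) == [3, 1, 2]

test_remove_duplicates_preserves_order()
```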

In the first step of the high-level workflow, the human user asks the agent to complete a function body given a prefix in a file, a natural language description, and a function header comprising the method name and parameters. Until a stopping criterion is reached, the agent repeatedly asks the user whether a candidate behavior is consistent with their intent. The user answers each of the agent's questions with YES, NO, or DONTKNOW. At the end of the interaction, the agent returns the set of tests the user has approved, along with a ranked list of code suggestions consistent with that feedback.
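A minimal Python sketch of this interaction loop is shown below, assuming the code suggestions and candidate tests have already been sampled from an LLM. All names (`ticoder_style_session`, `run_test`, `ask_user`, and so on) are illustrative placeholders, not the paper's actual API.

```python
from typing import Callable, List, Tuple

def ticoder_style_session(
    code_suggestions: List[str],
    candidate_tests: List[str],
    run_test: Callable[[str, str], bool],   # does this suggestion pass this test?
    ask_user: Callable[[str], str],         # returns "YES", "NO", or "DONTKNOW"
    max_queries: int = 5,
) -> Tuple[List[str], List[str]]:
    """Interactively prune code suggestions by asking the user about candidate tests."""
    approved_tests: List[str] = []
    suggestions = list(code_suggestions)

    for test in candidate_tests[:max_queries]:   # stopping criterion: query budget
        answer = ask_user(test)
        if answer == "YES":
            approved_tests.append(test)
            # keep only suggestions consistent with the approved behavior
            suggestions = [s for s in suggestions if run_test(s, test)]
        elif answer == "NO":
            # drop suggestions that exhibit the rejected behavior
            suggestions = [s for s in suggestions if not run_test(s, test)]
        # "DONTKNOW": no pruning; move on to the next candidate test

    # rank remaining suggestions by how many approved tests they pass
    ranked = sorted(
        suggestions,
        key=lambda s: sum(run_test(s, t) for t in approved_tests),
        reverse=True,
    )
    return approved_tests, ranked
```

In a setup like the paper's, `run_test` would execute a generated test against a code suggestion in a sandbox, and `ask_user` corresponds to the YES/NO/DONTKNOW query described above.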

The researchers evaluated their TiCoder implementation of TDUIF empirically on the academic code generation benchmark Mostly Basic Python Problems (MBPP). Using the OpenAI Codex LLM on MBPP, TiCoder improved code generation accuracy by over 22 percent with just a single user query. TiCoder also generated a non-trivial functional unit test consistent with user intent for 90.4 percent of the MBPP examples, requiring an average of 1.69 user queries. Overall, these results demonstrate the effectiveness of the proposed workflow. The team believes the framework is flexible enough to accommodate richer formal specifications, such as procedure summaries, and can serve as a scalable approach to code generation.

This Article is written as a research summary article by Marktechpost Staff based on the research paper 'Interactive Code Generation via Test-Driven User-Intent Formalization'. All Credit For This Research Goes To Researchers on This Project. Check out the paper and reference article.



Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.


