Google Researchers Explain Contextual Rephrasing in Google Assistant

Context and references drive interaction between people. For example, if asked “Who wrote Romeo and Juliet?” and then asked “Where was he born?”, it is evident that “he” refers to William Shakespeare without saying so. Likewise, if someone mentions “python,” context tells us whether they mean the snake or the programming language. When a virtual assistant cannot comprehend context and references, users must adjust by repeating contextual information in every follow-up question to ensure the Assistant understands their requests and delivers relevant answers.

This article describes a Google Assistant approach that lets users speak naturally when referring to previous context. The technique rephrases a user’s follow-up query to fill in the missing contextual information, so that it can be answered as a stand-alone query. While Assistant uses a variety of contexts to understand user input, this piece focuses on recent conversation history.

Rephrasing Context

Assistant detects whether an input utterance refers to previous context and, if so, rephrases it to incorporate the missing information. A follow-up can refer to the subject of the previous question (Romeo and Juliet) or to its answer (William Shakespeare). For example, after “Who authored Romeo and Juliet?”, the follow-up “When?” is rephrased to ask when Romeo and Juliet was written, while “Where was he born?” is rephrased to ask where William Shakespeare was born.

While there are alternative approaches to handling context, such as applying rules directly to query intents and parameters, the rephrasing approach operates horizontally at the string level, upstream of any query answering, parsing, or action fulfillment module.
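The sketch below illustrates that idea under simple assumptions: rephrasing is a string-to-string step that runs before any fulfillment module, so downstream components only ever see stand-alone queries. The ConversationContext type, the toy pronoun-substitution logic, and all function names are hypothetical placeholders, not Assistant’s actual APIs.

```python
from dataclasses import dataclass, field

PRONOUNS = {"he", "she", "it", "they"}


@dataclass
class Turn:
    query: str
    answer: str


@dataclass
class ConversationContext:
    """Recent conversation history used for rephrasing."""
    turns: list[Turn] = field(default_factory=list)


def rephrase(query: str, context: ConversationContext) -> str:
    """Return a stand-alone version of `query`.

    Toy logic: substitute the previous turn's answer for any pronoun in the
    follow-up. A real system generates many candidates and scores them.
    """
    if not context.turns:
        return query
    previous = context.turns[-1]
    resolved = [previous.answer if token.lower().strip("?,.") in PRONOUNS else token
                for token in query.split()]
    return " ".join(resolved)


if __name__ == "__main__":
    ctx = ConversationContext(turns=[Turn("Who wrote Romeo and Juliet?",
                                          "William Shakespeare")])
    print(rephrase("Where was he born?", ctx))  # Where was William Shakespeare born?
```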

Various Contextual Queries

Natural language processing has typically focused on fully specified, stand-alone queries. Incorporating context accurately is difficult given the range of contextual query forms, which require, for example, differentiating between referential and non-referential cases or identifying what context a query is referencing. With the rephrasing method, Assistant can rewrite follow-up questions to include the relevant context before answering them.

System Design

The rephrasing system generates candidates using several candidate generators; the best rephrasing candidate is then chosen based on a number of signals.

Candidate Generation

Rephrasing candidates are generated with a hybrid strategy that combines three techniques.

Query-based generators use grammatical and morphological rules to perform specific rewrites, such as replacing pronouns with their context-based antecedents.

Statistics-based generators use data about the current query and its context to produce candidates that match popular past questions or typical query patterns.

Transformer-based generators, such as MUM, learn to generate word sequences from training examples. LaserTagger and FELIX are well suited to tasks with significant overlap between input and output text; they offer fast inference and resist hallucination (i.e., generating text that is unrelated to the input). Given a query and its context, they produce a sequence of text edits that rephrase the input query, indicating which parts of the context should be kept and which words should be changed.
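To make the edit-based approach concrete, here is a minimal sketch in the spirit of LaserTagger/FELIX-style text editing: a per-token tag sequence (KEEP, DELETE, or REPLACE) is applied to the context and the follow-up query to produce the rephrased query. The tag vocabulary and the hand-written tag sequence are illustrative assumptions; in the real systems, a trained Transformer predicts the tags.

```python
from dataclasses import dataclass


@dataclass
class EditTag:
    op: str                # "KEEP", "DELETE", or "REPLACE"
    replacement: str = ""  # used only when op == "REPLACE"


def apply_edits(tokens: list[str], tags: list[EditTag]) -> str:
    """Apply a per-token edit sequence and return the rephrased query."""
    output = []
    for token, tag in zip(tokens, tags):
        if tag.op == "KEEP":
            output.append(token)
        elif tag.op == "REPLACE":
            output.append(tag.replacement)
        # "DELETE": drop the token entirely
    return " ".join(output)


if __name__ == "__main__":
    # Previous answer plus the follow-up query, as one token sequence.
    tokens = ["William", "Shakespeare", "Where", "was", "he", "born?"]
    tags = [
        EditTag("DELETE"),                          # context token, not kept verbatim
        EditTag("DELETE"),
        EditTag("KEEP"),                            # Where
        EditTag("KEEP"),                            # was
        EditTag("REPLACE", "William Shakespeare"),  # he -> antecedent from context
        EditTag("KEEP"),                            # born?
    ]
    print(apply_edits(tokens, tags))  # Where was William Shakespeare born?
```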

Candidate Scoring

Signals are extracted for each rephrasing candidate, and machine learning is applied to choose the best one. Some signals depend on the current query and its context: Is the current query similar to the previous one? Is it stand-alone or incomplete? Other signals are candidate-specific: How much of the context is preserved? Is the candidate grammatically well formed? And so on.
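The sketch below shows the shape of this scoring step under simple assumptions: a few hand-picked signals are extracted per candidate and combined by a linear scorer standing in for the learned ranker. The signal names and weights are hypothetical illustrations, not Assistant’s actual features or model.

```python
from dataclasses import dataclass

PRONOUNS = {"he", "she", "it", "they"}


@dataclass
class Candidate:
    text: str


def extract_signals(query: str, context: str, candidate: Candidate) -> dict[str, float]:
    """Hypothetical signals: overlap with the context, unresolved pronouns,
    and how much longer the candidate is than the original query."""
    context_tokens = set(context.lower().replace("?", "").split())
    cand_tokens = candidate.text.lower().replace("?", "").split()
    overlap = sum(t in context_tokens for t in cand_tokens) / max(len(cand_tokens), 1)
    unresolved = any(t in PRONOUNS for t in cand_tokens)
    return {
        "context_overlap": overlap,
        "has_unresolved_pronoun": 1.0 if unresolved else 0.0,
        "length_ratio": len(cand_tokens) / max(len(query.split()), 1),
    }


def score(signals: dict[str, float]) -> float:
    """Linear stand-in for the learned ranking model."""
    weights = {"context_overlap": 2.0, "has_unresolved_pronoun": -3.0, "length_ratio": 0.5}
    return sum(weights[name] * value for name, value in signals.items())


def best_candidate(query: str, context: str, candidates: list[Candidate]) -> Candidate:
    return max(candidates, key=lambda c: score(extract_signals(query, context, c)))


if __name__ == "__main__":
    context = "Who wrote Romeo and Juliet? William Shakespeare"
    query = "Where was he born?"
    candidates = [
        Candidate("Where was he born?"),                   # unchanged follow-up
        Candidate("Where was William Shakespeare born?"),  # rephrased with the answer
    ]
    print(best_candidate(query, context, candidates).text)
    # -> Where was William Shakespeare born?
```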

Signals from recent BERT and MUM models have considerably improved the ranker’s performance, resolving around one-third of the recall headroom while reducing false positives on query sequences that are not contextual (and therefore do not require rephrasing).

Conclusion

The technique described above attempts to handle contextual queries by rephrasing them so that they can be fully answered without referencing additional data during the fulfillment phase. This technique is agnostic to the query-fulfilling mechanisms, making it applicable as a horizontal layer before further processing.

Given the diversity of contextual phenomena in human language, a hybrid strategy is used that combines linguistic rules, logs, and ML models based on state-of-the-art Transformer techniques. By generating and scoring rephrasing candidates for each query and its context, Assistant can rephrase and accurately understand most contextual queries. With the Assistant’s ability to handle most linguistic references, users can have more natural interactions. To make multi-turn conversations even easier, Assistant users can enable Continued Conversation mode, which allows them to ask follow-up questions without saying “Hey Google” between them.

Source: https://ai.googleblog.com/2022/05/contextual-rephrasing-in-google.html
