Google AI Researchers Propose A Novel Training Method Called ‘DEEPCTRL’ That Integrates Rules Into Deep Learning

Source: https://arxiv.org/pdf/2106.07804.pdf

As the number and range of their training data grow, deep neural networks (DNNs) provide increasingly accurate outputs. While investing in high-quality, large-scale labeled datasets is one way to enhance models, another is to use prior information, referred to as “rules”: reasoning heuristics, equations, associative logic, or constraints. Consider a classic physics problem in which a model is tasked with predicting the future state of a double-pendulum system. While the model may learn to predict the system’s total energy at a given moment purely from empirical data, it will typically overestimate the energy unless it is also given an equation that encodes known physical constraints, such as energy conservation. On its own, the model cannot represent such well-established physical principles. How could such rules be taught, so that DNNs acquire the appropriate knowledge rather than merely learning it from data?

In “Controlling Neural Networks with Rule Representations,” published at NeurIPS 2021, researchers present Deep Neural Networks with Controllable Rule Representations (DeepCTRL), an approach for providing rules to a model that is agnostic to data type and model architecture and can be applied to any kind of rule defined over inputs and outputs. DeepCTRL ensures that models adhere to rules more closely while simultaneously boosting accuracy on downstream tasks, improving model reliability and user trust. DeepCTRL’s main benefit is that it does not require retraining to adjust rule strength: the user can tune rule strength at inference based on the desired operating point of accuracy. The researchers also propose a novel input-perturbation approach that allows DeepCTRL to be applied to non-differentiable constraints. They illustrate the usefulness of DeepCTRL in teaching rules for deep learning in real-world domains where rules are crucial, such as physics and healthcare. DeepCTRL also enables additional use cases, including hypothesis testing of rules on data samples and unsupervised adaptation based on rules shared across datasets.

The advantages of learning through rules are as follows:

  • Rules can provide additional information for scenarios with little data, improving test accuracy.
  • A key roadblock to wider use of DNNs is the lack of understanding of the logic underlying their reasoning and its inconsistencies. Rules can increase reliability and user trust in DNNs by reducing such inconsistencies.
  • DNNs are sensitive to input changes that are imperceptible to humans. Rules can reduce this sensitivity, since they further constrain the model search space and decrease underspecification.

Learning Jointly from Rules and Tasks

The conventional approach to applying rules is to include them as additional terms in the loss computation. This technique has three shortcomings that DeepCTRL aims to address: (i) the rule’s strength must be fixed before learning (thus, the trained model cannot operate flexibly based on how well the data satisfies the rule); (ii) rule strength cannot be adapted to target data at inference if the training setup is mismatched; and (iii) the rule-based objective must be differentiable with respect to the learnable parameters (to enable learning from labeled data).
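The conventional setup can be sketched as a weighted sum of a task loss and a rule penalty. This is a minimal illustration, not code from the paper: the non-negativity rule and the fixed weight `lam` are hypothetical stand-ins.

```python
import numpy as np

def task_loss(pred, target):
    # Standard data-fit term, e.g. mean squared error.
    return np.mean((pred - target) ** 2)

def rule_penalty(pred):
    # Hypothetical differentiable rule: predictions should be non-negative.
    # Violations (negative predictions) are penalized quadratically.
    return np.mean(np.minimum(pred, 0.0) ** 2)

def combined_loss(pred, target, lam):
    # lam (the rule strength) is fixed before training and cannot be
    # changed at inference -- the limitation DeepCTRL addresses.
    return task_loss(pred, target) + lam * rule_penalty(pred)

pred = np.array([0.5, -0.2, 1.0])
target = np.array([0.4, 0.1, 0.9])
loss = combined_loss(pred, target, lam=1.0)
```

Changing the rule strength here means picking a new `lam` and retraining from scratch, which is exactly shortcoming (i) above.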

DeepCTRL alters canonical training by combining rule representations with data representations, which is essential for controlling rule strength at inference time. During training, these representations are stochastically concatenated using a control parameter, denoted α, to generate a single representation. Increasing α strengthens the rule’s influence on the output decision. Users may adapt the model’s behavior to unseen inputs by adjusting α at inference.

Source: https://ai.googleblog.com/2022/01/controlling-neural-networks-with-rule.html

DeepCTRL pairs a data encoder with a rule encoder, producing two latent representations that are coupled to the objectives. The relative weight of each encoder is governed by the control parameter, which is adjustable at inference.
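The two-encoder combination can be sketched in a few lines. This is an illustrative toy, not the paper’s implementation: the single-linear-layer “encoders” and weight shapes are assumptions, and only the forward combination is shown (the coupled losses are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny encoders: one linear layer each (illustrative only).
W_data = rng.normal(size=(4, 8))   # data-encoder weights
W_rule = rng.normal(size=(4, 8))   # rule-encoder weights

def forward(x, alpha):
    """Combine data and rule representations, weighted by alpha in [0, 1]."""
    z_data = x @ W_data            # data-branch latent representation
    z_rule = x @ W_rule            # rule-branch latent representation
    # Concatenate the two branches, scaled by the control parameter:
    # alpha = 1 relies entirely on the rule branch, alpha = 0 on the data branch.
    return np.concatenate([(1 - alpha) * z_data, alpha * z_rule], axis=-1)

x = rng.normal(size=(2, 4))

# During training, the control parameter is sampled randomly per batch, so
# the network learns to operate across the whole range of rule strengths.
alpha_train = rng.uniform(0.0, 1.0)
z = forward(x, alpha_train)

# At inference, the user simply picks the value -- no retraining needed.
z_rule_only = forward(x, alpha=1.0)
```

Because the model sees all mixtures during training, sweeping the control parameter at inference is free, in contrast to the fixed-weight loss above.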

Using Input Perturbations to Integrate Rules

When rule-based objectives are used, they must be differentiable with respect to the model’s learnable parameters. However, many useful rules are non-differentiable with respect to the input. For example, “a blood pressure reading of 140 or above is associated with an increased risk of cardiovascular disease” is a rule that is difficult to integrate with conventional DNNs. The researchers therefore propose a novel input-perturbation approach for applying DeepCTRL to non-differentiable constraints: small perturbations (random noise) are added to input features, and a rule-based objective is constructed based on whether the output moves in the desired direction.

Case Studies

DeepCTRL is evaluated on machine learning use cases in physics and healthcare, where rules are especially critical.

  • Improved Reliability Based on Physics Principles:

A model’s reliability is measured with the verification ratio: the proportion of output samples that satisfy the rule. Operating at a higher verification ratio can be beneficial, especially when the rules are always valid, as in the natural sciences. A higher rule verification ratio, and hence more trustworthy predictions, can be obtained by adjusting the control parameter.
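Computing the verification ratio is straightforward. In this minimal sketch, the energy-conservation-style check, the tolerance, and the example predictions are all hypothetical, chosen only to show how a stronger rule setting would raise the ratio.

```python
import numpy as np

def verification_ratio(preds, rule_check):
    """Fraction of predictions that satisfy the rule."""
    return float(np.mean([rule_check(p) for p in preds]))

# Hypothetical energy-conservation-style check: predicted total energy
# must stay within a tolerance of the known initial energy.
initial_energy = 1.0
check = lambda e: abs(e - initial_energy) < 0.05

preds_weak_rule   = np.array([1.00, 1.10, 0.90, 1.02])  # rule weakly enforced
preds_strong_rule = np.array([1.00, 1.01, 0.99, 1.02])  # rule strongly enforced

ratio_weak = verification_ratio(preds_weak_rule, check)
ratio_strong = verification_ratio(preds_strong_rule, check)
```

Sweeping the control parameter and plotting this ratio against task accuracy is how one would pick an operating point.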

  • Adapting to Healthcare Distribution Shifts:

The strength of some rules may vary across the subsets of data they are applied to. For example, in disease prediction, the link between cardiovascular disease and higher blood pressure is stronger in older patients than in younger ones. When the task is shared but the data distribution and rule validity differ between datasets, DeepCTRL can adapt to the distribution shift by adjusting the rule strength at inference.
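Adapting to such a shift amounts to sweeping the control parameter on the target data and keeping the best setting, with no retraining. This sketch assumes a hypothetical table of validation accuracies on an older-patient cohort; the helper and the numbers are illustrative, not results from the paper.

```python
def pick_rule_strength(candidates, eval_metric):
    """Choose the rule strength that maximizes a metric on target data.

    Because DeepCTRL is trained across the full range of rule strengths,
    candidates can be evaluated at inference time without retraining.
    """
    scores = {a: eval_metric(a) for a in candidates}
    return max(scores, key=scores.get)

# Hypothetical validation accuracies on an older-patient cohort, where
# the blood-pressure rule is strong and a high strength should win.
accuracy_by_strength = {0.0: 0.71, 0.25: 0.74, 0.5: 0.78, 0.75: 0.81, 1.0: 0.79}

best = pick_rule_strength(accuracy_by_strength.keys(), accuracy_by_strength.get)
```

On a younger cohort, where the rule is weaker, the same sweep would be expected to favor a lower strength.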

Conclusions

Learning from rules can be crucial for building interpretable, robust, and reliable DNNs. The researchers propose DeepCTRL, a novel method for incorporating rules into DNNs learned from data. DeepCTRL allows rule strength to be controlled at inference without retraining, and it includes a novel perturbation-based rule-encoding approach for integrating arbitrary rules into meaningful representations. Three use cases of DeepCTRL are demonstrated: improving reliability given known principles, examining candidate rules, and domain adaptation based on rule strength.

Paper: https://arxiv.org/pdf/2106.07804.pdf

Reference: https://ai.googleblog.com/2022/01/controlling-neural-networks-with-rule.html
