Researchers at Los Alamos National Laboratory Explore New Ways to Compare Neural Networks and Expose How Artificial Intelligence (AI) Works

There is evidence that neural networks rely on similar features for classification across architectures and weight initializations. Prior work has put forward the universality hypothesis, which states that neural networks trained on the same data learn essentially the same representations regardless of architecture or training procedure. It is also well known, however, that different architectures and random initializations frequently produce different results. This work revisits the similarity question from a new angle by considering the influence of an adversarial robustness constraint during training. Robust training is a well-established method for reducing the sensitivity of a network's outputs to small input perturbations.
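To make the idea concrete, the sketch below shows one common form of robust training, projected gradient descent (PGD) adversarial training, written in PyTorch. The model, data, and hyperparameters (eps, alpha, steps) are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Find a small L-infinity perturbation of x that maximizes the loss on (x, y).
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # stay inside the eps-ball
            delta = (x + delta).clamp(0, 1) - x                     # keep perturbed inputs in a valid range
    return delta.detach()

def robust_training_step(model, x, y, optimizer, eps=8/255):
    # Adversarial training: fit the model on the worst-case perturbed inputs.
    delta = pgd_perturbation(model, x, y, eps=eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()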

It is well understood that improving a neural network's robustness against adversarial examples tends to reduce accuracy. Far less attention, however, has been devoted to the effect of robust training on agreement between models. The researchers demonstrate empirically, using a variety of similarity analysis approaches, that the representations and functions learned by networks of different architectures become much more similar as robustness increases. This finding suggests that robustness acts as a strong prior on the learned solution, perhaps strong enough to matter more than the specific network architecture.

Many approaches for measuring neural network similarity have been developed, including centered kernel alignment (CKA), canonical correlation analysis (CCA), singular vector canonical correlation analysis (SVCCA), subspace match, and others. This study shows that existing representation-similarity measures overestimate similarity because of feature correlations. To remove the confounding influence of correlated features, the researchers construct model-specific datasets in which each data point is modified to contain only the attributes the model actually uses. While this finding is surprising, it is consistent with earlier theoretical results indicating that current networks are under-parameterized for representing smooth functions in high dimensions.
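As a reference point for the metrics listed above, here is a minimal NumPy sketch of linear CKA in its standard form from the literature; it is not the modified, model-specific metric proposed in the paper.

import numpy as np

def linear_cka(X, Y):
    # X: (n, d1) and Y: (n, d2) activations of two networks for the same n inputs.
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return numerator / denominator          # close to 1 when the representations match up to rotation and scaling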

The similarity between robust networks of different architectures is exceptionally high, and robust networks also display considerable resemblance to non-robust networks, demonstrating strong entanglement between robust and non-robust representations. Using their new similarity metric, the researchers present a complete analysis of the similarity between neural networks as a function of the robustness level used during adversarial training.
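A rough outline of such an analysis is sketched below, reusing the robust-training and linear-CKA sketches above. train_robust_model, get_activations, and probe_inputs are hypothetical placeholders standing in for the paper's actual training and evaluation pipeline, and linear_cka is only a stand-in for the authors' own metric.

# Train pairs of differently-architected networks at several robustness levels
# and record how similar their learned representations become.
eps_levels = [0.0, 2/255, 4/255, 8/255]                       # 0.0 corresponds to standard (non-robust) training
similarity_by_eps = {}
for eps in eps_levels:
    model_a = train_robust_model(arch="resnet18", eps=eps)    # hypothetical helper
    model_b = train_robust_model(arch="vgg16", eps=eps)       # hypothetical helper
    feats_a = get_activations(model_a, probe_inputs)           # (n, d1) penultimate-layer features
    feats_b = get_activations(model_b, probe_inputs)           # (n, d2) penultimate-layer features
    similarity_by_eps[eps] = linear_cka(feats_a, feats_b)      # expected to rise with eps, per the paper's finding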

If neural networks arrive at a solution determined mainly by the data rather than by the learning method, random initialization, or architecture, then the representations they acquire may offer insight into the underlying structure of the data. The increased similarity between robust neural networks suggests that empirically analyzing a single robust network could reveal the representations learned by every other robust network, helping researchers better understand the nature of adversarial robustness.

Paper: https://openreview.net/pdf?id=BGfLS_8j5eq

References:

  • https://www.newswise.com/articles/new-method-for-comparing-neural-networks-exposes-how-artificial-intelligence-works
  • https://losalamosreporter.com/2022/09/13/lanl-new-method-for-comparing-neural-networks-exposes-how-artificial-intelligence-works/