A New AI Study from MIT Shows How Deep Neural Networks Don’t See the World the Way We Do

In the pursuit of replicating the complex workings of human sensory systems, researchers in neuroscience and artificial intelligence face a persistent challenge: the mismatch between the invariances of computational models and those of human perception. As highlighted in recent studies, including the MIT study discussed here, artificial neural networks designed to mimic the human visual and auditory systems often exhibit invariances that do not align with those found in human sensory perception. This discrepancy raises questions about the principles guiding the development of these models and their applicability in real-world scenarios.

Historically, attempts to address invariance discrepancies between computational models and human perception have focused on phenomena such as model vulnerability to adversarial perturbations or the effect of noise and translations on model judgments.

Model Metamers: The concept of model metamers is inspired by human perceptual metamers: stimuli that, although physically distinct, produce indistinguishable responses at a given stage of the sensory system. In the context of computational models, model metamers are synthetic stimuli optimized to produce nearly the same activations in a model as specific natural images or sounds. The critical question is whether humans can recognize these model metamers as belonging to the same class as the natural signals they are matched to.
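To make the idea concrete, here is a minimal sketch of how a model metamer could be generated for a vision model, assuming a PyTorch setup: an input initialized from noise is optimized so that its activations at a chosen stage match those of a reference natural image. The model, the layer choice, and the optimizer settings are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal model-metamer sketch (assumptions: ResNet-50, layer4 as the "stage",
# Adam with 200 steps). Not the paper's exact recipe.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture activations at a chosen stage via a forward hook.
activations = {}
def hook(_module, _inp, out):
    activations["feat"] = out
model.layer4.register_forward_hook(hook)

def get_features(x):
    model(x)
    return activations["feat"]

# Reference natural image (random data stands in for a real, preprocessed image).
reference = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    target = get_features(reference)

# Start from noise and optimize the input so its activations match the reference's.
metamer = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([metamer], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(get_features(metamer), target)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        metamer.clamp_(0.0, 1.0)  # keep the synthetic image in a valid pixel range
```

If the optimization succeeds, `metamer` activates the chosen stage almost exactly like the reference image does, even though the two inputs can look entirely different to a human observer.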

The results of this study shed light on the significant divergence between the invariances present in computational models and those in human perception. The research team generated model metamers from a range of deep neural network models of vision and audition, trained with both supervised and unsupervised learning. Surprisingly, metamers produced from the late stages of these models were consistently unrecognizable to human observers, suggesting that many of the invariances in these models are not shared by the human sensory system.

The value of model metamers in exposing differences between models and humans is reinforced by how predictable their recognizability is: the human recognizability of a model's metamers was strongly correlated with how well other models recognized them. This suggests that the gap between humans and models lies in invariances that are idiosyncratic to each model.
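As a rough illustration of this cross-model comparison, the sketch below feeds a batch of metamers (generated for some source model, for example via the optimization sketched earlier) into other "observer" models and records how often each one assigns the class of the matched natural image. The observer models and the placeholder inputs are assumptions for illustration only.

```python
# Sketch of cross-model metamer evaluation. The metamer batch and labels are
# placeholders; in practice they would come from a metamer-generation step.
import torch
import torchvision.models as models

observers = {
    "resnet18": models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval(),
    "vgg16": models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval(),
}

def recognition_rate(metamers, labels, observer):
    """Fraction of metamers the observer classifies as the matched class."""
    with torch.no_grad():
        preds = observer(metamers).argmax(dim=1)
    return (preds == labels).float().mean().item()

metamer_batch = torch.rand(8, 3, 224, 224)   # placeholder metamer images
label_batch = torch.randint(0, 1000, (8,))   # placeholder ImageNet class labels

for name, observer in observers.items():
    print(name, recognition_rate(metamer_batch, label_batch, observer))
```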

In conclusion, introducing model metamers is a significant step toward understanding and addressing the disparities between computational models of sensory systems and human perception. These synthetic stimuli offer a fresh perspective on the challenges researchers face in building more biologically faithful models. While much work remains, model metamers provide a promising benchmark for future model evaluation and a path toward artificial systems that better align with the intricacies of human sensory perception.


Check out the Paper. All credit for this research goes to the researchers on this project.



Niharika is a Technical Consulting Intern at Marktechpost. She is a third-year undergraduate pursuing her B.Tech at the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.

