Researchers Look to Neuroscientists to Overcome Dataset Bias

A team of researchers at MIT, Harvard University, and Fujitsu, Ltd. investigated how a machine-learning model can overcome dataset bias. Drawing on an approach from neuroscience, they studied how training data affects whether an artificial neural network can learn to recognize objects it has never seen before.

The research was published in Nature Machine Intelligence.

Diversity in Training Data

The study’s results demonstrated that diversity in training data influences whether a neural network can overcome bias, although greater data diversity can also degrade the network’s performance. The researchers also showed that the way a neural network is trained affects whether it can overcome a biased dataset.

Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), is the senior author of the paper.

“A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says Boix.

The team developed the new approach by thinking like neuroscientists. According to Boix, it is common in neuroscience to use controlled datasets in experiments, so the team built datasets containing images of different objects in various poses. They then controlled the combinations of objects and poses so that some datasets were more diverse than others: a dataset whose images show objects from only one viewpoint is less diverse, while one whose images show objects from multiple viewpoints is more diverse.
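For illustration, here is a minimal Python sketch of how such controlled splits might be constructed. The categories, viewpoints, and split rule below are hypothetical stand-ins, not the datasets used in the paper.

```python
import itertools
import random

# Hypothetical categories and viewpoints; the paper's actual datasets differ.
categories = ["car", "chair", "mug", "plane"]
viewpoints = ["front", "side", "top", "back"]

all_combos = list(itertools.product(categories, viewpoints))

def make_split(num_train_viewpoints, seed=0):
    """Return (train, test) lists of category-viewpoint combinations.

    A split that trains on only one viewpoint per category is less
    diverse; one that trains on several viewpoints is more diverse.
    Test combinations are pairs never seen during training.
    """
    rng = random.Random(seed)
    train, test = [], []
    for cat in categories:
        vps = viewpoints[:]
        rng.shuffle(vps)
        seen = set(vps[:num_train_viewpoints])
        for vp in viewpoints:
            (train if vp in seen else test).append((cat, vp))
    return train, test

low_diversity = make_split(num_train_viewpoints=1)   # one viewpoint per category
high_diversity = make_split(num_train_viewpoints=3)  # three viewpoints per category
```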

The researchers used these datasets to train a neural network for image classification, then studied how well it identified objects from viewpoints the network had not seen during training.
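In code, the train-then-probe setup might look like the following PyTorch sketch. The architecture, data, and hyperparameters are assumptions for illustration, not the paper's actual pipeline; random tensors stand in for images drawn from controlled splits like the ones above.

```python
import torch
import torch.nn as nn

num_categories = 4  # e.g., the four hypothetical categories above

# Small stand-in CNN classifier; not the architecture used in the paper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, num_categories),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for images rendered from the training
# combinations; real inputs would come from the controlled splits.
train_x = torch.randn(64, 3, 32, 32)
train_y = torch.randint(0, num_categories, (64,))
unseen_x = torch.randn(32, 3, 32, 32)  # images from held-out viewpoints
unseen_y = torch.randint(0, num_categories, (32,))

model.train()
for _ in range(5):  # a few passes over the training combinations
    optimizer.zero_grad()
    loss_fn(model(train_x), train_y).backward()
    optimizer.step()

# Probe generalization: accuracy on viewpoints never seen in training.
model.eval()
with torch.no_grad():
    acc = (model(unseen_x).argmax(dim=1) == unseen_y).float().mean().item()
print(f"accuracy on unseen viewpoints: {acc:.2%}")
```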

They found that more diverse datasets allow the network to generalize better to new images or viewpoints, which is crucial to overcoming bias.

“But it is not like more data diversity is always better; there is a tension here. When the neural network gets better at recognizing new things it hasn’t seen, then it will become harder for it to recognize things it has already seen,” Boix says.

Methods for Training Neural Networks

The team also found that a model trained separately on each task (recognizing the object's category and recognizing its viewpoint) overcomes bias better than a model trained on both tasks together.
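One way to picture the contrast is the minimal PyTorch sketch below; the architecture and label counts are illustrative assumptions, not the paper's models.

```python
import torch.nn as nn

num_categories, num_viewpoints = 4, 4  # hypothetical label counts

def backbone():
    # Tiny shared feature extractor; illustrative only.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

# Separate training: one independent network per task, each with its
# own features and its own loss.
category_model = nn.Sequential(backbone(), nn.Linear(16, num_categories))
viewpoint_model = nn.Sequential(backbone(), nn.Linear(16, num_viewpoints))

# Joint training: one shared backbone feeding two heads, optimized with
# a single combined loss, e.g.
#   loss = ce(cat_logits, cat_y) + ce(vp_logits, vp_y)
class JointModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = backbone()
        self.category_head = nn.Linear(16, num_categories)
        self.viewpoint_head = nn.Linear(16, num_viewpoints)

    def forward(self, x):
        h = self.features(x)
        return self.category_head(h), self.viewpoint_head(h)
```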

“The results were really striking. In fact, the first time we did this experiment, we thought it was a bug. It took us several weeks to realize it was a real result because it was so unexpected,” Boix continues.

Deeper analysis revealed that neuron specialization drives this effect. When the neural network is trained to recognize objects in images, two types of neurons emerge: one type specializes in recognizing the object category, while the other specializes in recognizing the viewpoint.

The specialized neurons become more prominent when the network is trained on the tasks separately. When a network is trained on both tasks at the same time, however, some neurons become diluted, meaning they do not specialize in one task and are more likely to get confused.
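To make the idea concrete, here is a rough, hypothetical way to score each neuron's specialization from recorded activations. The paper's actual analysis may use a different measure; this sketch simply compares how much a neuron's mean activation varies across categories versus across viewpoints.

```python
import numpy as np

def selectivity(acts, cat_labels, vp_labels):
    """Crude per-neuron specialization score (not the paper's metric).

    acts: (num_images, num_neurons) activations from one layer.
    Returns values in [-1, 1]: positive = category-selective,
    negative = viewpoint-selective, near 0 = diluted / unspecialized.
    """
    def spread(labels):
        labels = np.asarray(labels)
        groups = [acts[labels == g].mean(axis=0) for g in np.unique(labels)]
        return np.stack(groups).std(axis=0)  # per-neuron spread of group means

    cat_spread = spread(cat_labels)
    vp_spread = spread(vp_labels)
    return (cat_spread - vp_spread) / (cat_spread + vp_spread + 1e-8)

# Example with random stand-in activations (64 images, 32 neurons):
acts = np.random.rand(64, 32)
cats = np.random.randint(0, 4, 64)
vps = np.random.randint(0, 4, 64)
scores = selectivity(acts, cats, vps)
print("category-selective neurons:", (scores > 0.5).sum())
```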

“But the next question now is, how did these neurons get there? You train the neural network and they emerge from the learning process. No one told the network to include these types of neurons in its architecture. That is the fascinating thing,” Boix says.

The researchers plan to explore this question in future work, as well as apply the new approach to more complex tasks.
