Researchers From Heidelberg University and the University of Bern Propose A New Method To Enhance Deep Learning Using First-Spike Times
Achieving quick and energy-efficient computing through the concept of spiking is a ground-breaking line of research. When the membrane potential, the electrical charge across a neuron’s membrane, reaches a particular threshold value, the neuron fires and generates a signal that reaches other neurons, raising or lowering their potentials in response. A model in which the neuron fires as soon as this threshold is crossed is referred to as a spiking neuron model.
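To make this concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The parameters and function name are illustrative, not taken from the paper: the membrane potential integrates its input, and a spike is emitted whenever the threshold is crossed.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. Parameters are
# illustrative, not the dynamics used on BrainScaleS hardware.
def simulate_lif(input_current, dt=1e-3, tau_m=20e-3, v_thresh=1.0, v_reset=0.0):
    """Integrate an input current trace; emit a spike whenever the
    membrane potential crosses the threshold, then reset."""
    v = v_reset
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest while
        # being driven by the input.
        v += dt / tau_m * (-v + i_in)
        if v >= v_thresh:          # threshold crossed -> the neuron fires
            spike_times.append(step * dt)
            v = v_reset            # potential resets after the spike
    return spike_times

# A constant suprathreshold input makes the neuron fire periodically.
print(simulate_lif(np.full(200, 1.5)))
```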
This strategy is based on the Time-to-First-Spike coding scheme, in which a neuron’s activity is inversely proportional to its firing delay: the stronger a neuron’s activation, the earlier it fires. Julian Goeltz, one of the researchers working on this model, notes that Hesham Mostafa, a researcher at the University of California, showed that the timing of individual spikes can be used effectively for information processing. The major challenge lies in the complicated relationship between the synaptic inputs and outputs of spiking neurons.
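As an illustration of time-to-first-spike coding, the hypothetical encoder below maps normalized input intensities to spike latencies so that stronger inputs fire earlier. The linear mapping and the `t_max` parameter are assumptions made for the sketch, not the paper’s actual scheme.

```python
# Hypothetical time-to-first-spike (TTFS) encoding: each input value is
# mapped to a single spike whose latency is inversely related to intensity.
def ttfs_encode(values, t_max=1.0):
    """Map normalized intensities in [0, 1] to spike times:
    stronger inputs fire earlier; zero inputs never fire (None)."""
    return [t_max * (1.0 - v) if v > 0 else None for v in values]

# A strong input (0.9) spikes early; a weak one (0.1) spikes late.
print(ttfs_encode([0.9, 0.5, 0.1, 0.0]))
```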
Goeltz, along with his fellow researchers at Heidelberg University and the University of Bern, began by developing a mathematical model that could approach deep learning through temporal coding. The next step was to integrate this model with the BrainScaleS system, a neuromorphic platform developed to serve as a substrate for brain-like computation.
This research addresses the “Credit Assignment Problem”: determining how much each synapse in a neural network contributed to the network’s output, and hence how much credit it should receive for a particular prediction. One established solution is the “Error Backpropagation Algorithm,” in which the error measured at the topmost layer of the network is propagated backward through the network, informing each synapse of its individual contribution to the error so that each can be adjusted accordingly.
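The sketch below shows classical error backpropagation on a tiny non-spiking network. It illustrates the credit-assignment principle the researchers adapt to spike times, not their hardware-compatible variant; the network shape, learning rate, and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network trained with vanilla backpropagation -- the
# classical (non-spiking) form of credit assignment.
x = rng.normal(size=(4, 3))            # 4 samples, 3 features
y = rng.normal(size=(4, 1))            # targets
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))

for _ in range(100):
    h = np.tanh(x @ W1)                # forward pass, hidden layer
    y_hat = h @ W2                     # forward pass, output layer
    err = y_hat - y                    # error at the topmost layer
    # Backward pass: propagate the error to assign each weight its
    # share of the blame, then nudge the weights accordingly.
    grad_W2 = h.T @ err
    grad_W1 = x.T @ ((err @ W2.T) * (1 - h**2))
    W2 -= 0.01 * grad_W2
    W1 -= 0.01 * grad_W1

print(float(np.mean(err**2)))          # loss shrinks as credit is assigned
```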
In a spiking neural network, each input spike bumps a neuron’s potential up or down by an amount that depends on the synaptic weight. Once enough bumps have accumulated, the neuron fires and sends a spike of its own to its neighbors, and in this manner the entire spiking network can be shaped to perform a desired computation. According to the researchers, their solution is a hardware-compatible variant of error backpropagation; owing to its spike-based nature, it is both quick and efficient. The framework encourages each neuron to spike only once, as early as possible, so that very little data has to flow through the network to complete a task. In addition, the neuron dynamics of the BrainScaleS hardware are extremely fast, ensuring high information-processing speed.
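The toy example below, which assumes instantaneous potential “bumps” rather than the analog dynamics of BrainScaleS, shows the kind of relationship such a spike-based scheme exploits: strengthening a synaptic weight makes the neuron’s first output spike arrive earlier.

```python
# Illustrative only: how the first output spike time of a simple
# integrate-and-fire neuron depends on a synaptic weight. A spike-based
# backpropagation variant must reason about exactly this kind of
# spike-time-vs-weight relationship.
def first_spike_time(weight, input_times=(1, 3, 5, 7, 9), v_thresh=1.0):
    v = 0.0
    for t in input_times:
        v += weight            # each input spike bumps the potential
        if v >= v_thresh:      # enough bumps accumulated -> fire
            return t
    return None                # too weak: the neuron never fires

for w in (0.2, 0.35, 0.6, 1.1):
    print(f"weight={w}: first spike at t={first_spike_time(w)}")
```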
One central question remains: why do the neurons in our brain communicate with spikes? This question may no longer be unanswered, as the model provides an argument for the functional superiority of spikes. Moreover, in the human cortex no two neurons are identical, and the model is well suited even to such diverse substrates.
Thanks to this greater efficiency in data processing, the framework also consumes very little power.
So far the framework has been tested on a platform for basic neuromorphic research, but the work does not end there. The researchers want to deploy it in real-world online and embedded learning scenarios and, in the future, to train it on time-varying data such as audio and video recordings. Deep learning models for spiking networks are still in their nascent stages, and much remains unexplored.
Paper: https://arxiv.org/pdf/1912.11443.pdf
Other Paper: https://papers.nips.cc/paper/2015/file/10a5ab2db37feedfdeaab192ead4ac0e-Paper.pdf
Source: https://techxplore.com/news/2021-10-framework-deep-first-spike.html