Researchers Developed a Backpropagation-Free Supervised Learning Framework Based on an Artificial Neural Network That Facilitates the Transition From a Monadic Pavlovian Single Input–Teacher Association on an AMLE to Arbitrary n Input–Teacher Associations

Current AI models rely heavily on backpropagation-based learning: in the forward pass, the model produces an estimate of the ground truth from the input data, and it then learns by updating its parameters according to the backpropagated deviation between that estimate and the actual ground truth. Learning from large amounts of data this way requires very large numbers of model parameters and therefore a heavy computational load. To address this, researchers from the University of Oxford experimented with Pavlovian-type associative learning. They realized it on a photonic integrated circuit that learns from data in a backpropagation-free framework, resulting in lower computational complexity and higher speed and bandwidth.
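To make the contrast concrete, here is a minimal NumPy sketch (my illustration, not code from the paper) of the two update styles: a gradient step driven by a backpropagated error versus a backpropagation-free, co-occurrence-driven update.

```python
# Toy single-layer model contrasting the two learning styles discussed above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))        # toy input
t = np.array([1.0])              # ground-truth "teacher" value
w = rng.normal(size=(4, 1))      # model parameters

# Backpropagation-style update: forward pass, error, gradient step.
y = x @ w                        # forward propagation
error = y - t                    # deviation from the ground truth
grad = np.outer(x, error)        # gradient of 0.5*error^2 w.r.t. w
w -= 0.1 * grad                  # parameters updated via the backpropagated error

# Associative (backpropagation-free) update: strengthen weights only where the
# input and teacher are active together -- no error gradient is propagated.
assoc_w = np.zeros((4, 1))
assoc_w += 0.1 * np.outer(x, t)  # Hebbian-style co-occurrence rule

print(w.ravel(), assoc_w.ravel())
```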


Pavlov demonstrated in his famous experiment that if a bell is rung whenever food is presented, the dog comes to associate the two signals, so that the bell alone, even in the absence of food, makes the dog salivate. Co-learning, or associative learning, has its roots there. When a neuron receives a sensory signal, it generates an action potential. So, when an unconditioned stimulus s1 is sent along with a conditioned stimulus s2, an association forms between them: the neuron learns the response triggered by the pair, and afterwards the response is triggered even when only one of the two stimuli is present. To realize this learning on a physical device, the circuit must be able to associate two inputs and store that association.
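A toy sketch of that behaviour, under the assumption of a single threshold neuron with one fixed (unconditioned) and one learnable (conditioned) weight, might look like this; the weights, learning rate, and threshold are all illustrative.

```python
# Pavlovian association on a toy threshold neuron: s1 (food) always triggers the
# response, and co-occurrence with s2 (bell) strengthens s2's weight until the
# bell alone crosses the firing threshold.
w1, w2 = 1.0, 0.0          # fixed weight for s1, learnable weight for s2
threshold = 0.5            # firing threshold of the neuron
lr = 0.2                   # learning rate for the association

def response(s1, s2):
    """Return True if the neuron fires for stimuli s1, s2 in {0, 1}."""
    return w1 * s1 + w2 * s2 >= threshold

# Training: present food and bell together; strengthen w2 whenever both
# stimuli are active and the neuron fires.
for _ in range(5):
    s1, s2 = 1, 1
    if s1 and s2 and response(s1, s2):
        w2 = min(1.0, w2 + lr)

print(response(1, 0))   # unconditioned stimulus alone -> True
print(response(0, 1))   # bell alone now also triggers the response -> True
```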

The researchers used a photonic Associative Monadic Learning Element (AMLE) to implement this functionality. The AMLE consists of a pair of coupled waveguides overlaid with a phase-change material (here Ge2Sb2Te5, or GST). GST has two phases, amorphous and crystalline. The material starts in the crystalline state, in which there is no association between the two inputs (learning pulses). When the two signals arrive simultaneously, the material begins to amorphize, changing the coupling between the waveguides. The learning threshold is the point at which the outputs from the two inputs become indistinguishable. The association between the inputs forms only when there is a specific phase delay between them.
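The following is a rough numerical sketch of that behaviour, not the authors' device model: a crystalline-fraction variable that amorphizes only when the input and teacher pulses coincide within an assumed phase-delay window, with the waveguide coupling and a learning threshold derived from it. All constants are illustrative.

```python
# Toy model of an AMLE: coincident learning pulses partially amorphize the GST
# cell, which changes the cross-coupling between the two waveguides.
import numpy as np

crystalline_fraction = 1.0        # 1.0 = fully crystalline (no association)
AMORPHIZATION_STEP = 0.2          # fraction amorphized per coincident pulse pair
LEARNING_THRESHOLD = 0.5          # below this, the two outputs are treated as
                                  # indistinguishable (association learned)
PHASE_WINDOW = np.pi / 4          # assumed phase-delay window for coincidence

def apply_pulses(pulse_in, pulse_teacher, phase_delay):
    """Amorphize the GST cell only when both pulses coincide in phase."""
    global crystalline_fraction
    if pulse_in and pulse_teacher and abs(phase_delay) <= PHASE_WINDOW:
        crystalline_fraction = max(0.0, crystalline_fraction - AMORPHIZATION_STEP)

def coupling():
    """Toy mapping from the GST state to waveguide cross-coupling (0..1)."""
    return 1.0 - crystalline_fraction

# Send three coincident input/teacher pulse pairs.
for _ in range(3):
    apply_pulses(True, True, phase_delay=0.1)

print(coupling())                                    # grows with each pulse pair
print(crystalline_fraction < LEARNING_THRESHOLD)     # association formed -> True
```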


Pavlovian associative learning follows a methodology similar to supervised learning, where the input is paired with a ground-truth (teacher) signal that supervises the learning process. In the input layer, the researchers used Mach–Zehnder modulators (MZMs) to split the input and teacher signals equally, with integrated NiCr thermo-optic heaters keeping the optical phases stable. The MZMs also provide wavelength multiplexing, so that multiple signals can be fed to multiple AMLEs in parallel. Each input–teacher pair is then combined in the associative layer, producing a transmission response.
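As a schematic illustration of that layering (the wavelength values, update rule, and data below are assumptions, not numbers from the paper), each input–teacher pair can be thought of as riding its own wavelength channel to its own AMLE:

```python
# Wavelength-multiplexed routing of input-teacher pairs to separate AMLEs.
from dataclasses import dataclass

@dataclass
class AMLE:
    coupling: float = 0.0                      # 0 = no association learned

    def learn(self, input_pulse: bool, teacher_pulse: bool) -> None:
        # Coincident input and teacher pulses strengthen the association.
        if input_pulse and teacher_pulse:
            self.coupling = min(1.0, self.coupling + 0.2)

# One AMLE per wavelength channel (wavelengths in nm are illustrative only).
network = {1550.0: AMLE(), 1550.8: AMLE(), 1551.6: AMLE()}

# Each wavelength carries one input-teacher pair through the associative layer.
pairs = {1550.0: (True, True), 1550.8: (True, False), 1551.6: (True, True)}

for wavelength, (inp, teacher) in pairs.items():
    network[wavelength].learn(inp, teacher)

# Only the channels where input and teacher coincided form an association.
print({wl: amle.coupling for wl, amle in network.items()})
```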


The researchers used the device for image classification. The device is trained in a supervised fashion, and the model learns a generalized representation of each class. For example, in cat vs. non-cat classification, the teacher signal is a cat image that is associated with several other cat images to elicit a response. The responses from multiple AMLEs are aggregated and rearranged to form a generalized low-level representation of a cat. During testing, the output layer forms a representation of the input image, and the model classifies it as cat or non-cat according to its percentage similarity with the generalized representation learned in training.
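A hypothetical software analogue of that pipeline, with patch averaging standing in for the photonic responses and a similarity measure chosen purely for illustration, could look like this:

```python
# Template-based classification: aggregate responses over training images into a
# generalized representation, then label a test image by similarity to it.
import numpy as np

rng = np.random.default_rng(1)

def amle_responses(image: np.ndarray) -> np.ndarray:
    """Stand-in for the photonic layer: one response value per image patch."""
    return image.reshape(-1, 16).mean(axis=1)

# Training: aggregate responses over several "cat" images into one template.
cat_images = [rng.random((8, 8)) + 0.5 for _ in range(10)]       # toy "cat" data
template = np.mean([amle_responses(img) for img in cat_images], axis=0)

def classify(image: np.ndarray, threshold: float = 0.8) -> str:
    """Label by similarity (1 - mean absolute deviation) to the cat template."""
    rep = amle_responses(image)
    similarity = 1.0 - np.mean(np.abs(rep - template))
    return "cat" if similarity >= threshold else "non-cat"

print(classify(rng.random((8, 8)) + 0.5))   # close to the template -> "cat"
print(classify(rng.random((8, 8)) - 0.4))   # far from the template -> "non-cat"
```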


This research opens a new path in modern AI, one in which the computational overhead of current technology could be reduced significantly. The capability is still limited, as this is the first work in this direction: the model learns only in a supervised way, cannot capture deep features, and is currently restricted to simple tasks. There is therefore enormous scope for further work in this direction in the near future.

References:

  1. James Y. S. Tan, Zengguang Cheng, Johannes Feldmann, Xuan Li, et al.; Monadic Pavlovian associative learning in a backpropagation-free photonic network; Optica, Volume 9, Issue 7, pp. 792-802 (2022)

[Main source paper. All the images are taken from this paper.]

  2. J. Misra & I. Saha; Artificial neural networks in hardware: A survey of two decades of progress; Neurocomputing 74, 239-255 (2010)
  3. M. Ziegler, R. Soni, T. Patelczyk, M. Ignatov, T. Bartsch, P. Meuffels & H. Kohlstedt; An electronic version of Pavlov’s dog; Adv. Funct. Mater. 22, 2744-2749 (2012)
  4. J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. L. Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice & H. Bhaskaran; Parallel convolutional processing using an integrated photonic tensor core; Nature 589, 52-58 (2021)


I’m Arkaprava from Kolkata, India. I completed my B.Tech. in Electronics and Communication Engineering in 2020 from Kalyani Government Engineering College, India. During my B.Tech., I developed a keen interest in signal processing and its applications. Currently, I am pursuing an MS degree in Signal Processing at IIT Kanpur, doing research on audio analysis using deep learning, and working on unsupervised and semi-supervised learning frameworks for several audio tasks.


