According To The Latest AI Research From Graz University Of Technology, Intel’s Neuromorphic Chips Are Up To 16 Times More Energy Efficient For Deep Learning

This article is written as a summary by Marktechpost Staff based on the research paper 'A Long Short-Term Memory for AI Applications in
Spike-based Neuromorphic Hardware'. All credit for this research goes to the researchers of this project. Check out the paper and reference article.


The high power consumption of new AI methods that rely on deep neural networks (DNNs) poses a significant barrier to their broader deployment, particularly in edge devices. Spike-based neuromorphic hardware is one potential solution to this issue. This research direction is inspired mainly by the brain, which runs even larger and more sophisticated neural networks while consuming only about 20 W of energy. Neurons in the brain emit a signal (spike) only a few times per second on average, and this sparseness is crucial to their remarkable energy efficiency. The units of a typical DNN, in comparison, emit output values several orders of magnitude more frequently and therefore consume energy at a much higher rate. However, it has remained unknown which kinds of DNNs behind modern AI solutions can be implemented with sparsely active neurons on neuromorphic hardware in an energy-efficient manner. Answering this typically requires revisiting DNN design principles.

Deep neural networks that perform sequence-processing tasks often employ Long Short-Term Memory (LSTM) units, which are difficult to imitate with few spikes. A feature of many biological neurons, a delayed after-hyperpolarizing (AHP) current that follows each spike, provides an effective solution. AHP currents can be implemented on neuromorphic hardware that supports multi-compartment neuron models, such as Intel’s Loihi processor. The principle of filter approximation explains why AHP neurons can mimic the functionality of LSTM units. The result is a highly energy-efficient method for classifying time series. In addition, it provides the foundation for implementing, with very sparse firing, an important class of large DNNs that extract relations between words and sentences in a text in order to answer questions about that text.
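To make the mechanism concrete, here is a minimal sketch in plain Python/NumPy of a discrete-time leaky integrate-and-fire neuron extended with a slowly decaying AHP current. The time constants, threshold, and jump size are illustrative assumptions, not the parameters of the paper’s Loihi implementation:

```python
import numpy as np

def simulate_ahp_neuron(inputs, tau_v=20.0, tau_ahp=500.0,
                        v_thresh=1.0, ahp_jump=0.3, dt=1.0):
    """Simulate one AHP neuron over a 1-D array of input currents."""
    alpha = np.exp(-dt / tau_v)    # fast membrane-potential decay
    rho = np.exp(-dt / tau_ahp)    # slow AHP decay: acts as working memory
    v, ahp = 0.0, 0.0
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        # The AHP current is subtracted, suppressing firing after a spike
        v = alpha * v + (1.0 - alpha) * x - ahp
        if v >= v_thresh:
            spikes[t] = 1.0
            v = 0.0            # reset the membrane potential
            ahp += ahp_jump    # strengthen the slow AHP current
        ahp *= rho
    return spikes

rng = np.random.default_rng(0)
out = simulate_ahp_neuron(rng.uniform(0.0, 2.0, size=1000))
print("spike count:", int(out.sum()))
```

Because the AHP variable decays over hundreds of time steps, each spike leaves a long-lasting trace that both enforces sparse firing and carries information forward in time, which is the LSTM-like working-memory effect described above.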

A more specific open challenge is how the LSTM units of DNNs for sequence-processing tasks can be implemented energy-efficiently in spike-based neuromorphic hardware. Networks of spiking neurons (SNNs) can be endowed with the same working-memory capabilities as LSTM units in DNNs through a trait of biological neurons: the presence of slowly changing internal currents, which had not previously been represented in neuromorphic hardware models.

A significant distinction between biological neurons and traditional spiking neuron models is that biological neurons keep their membrane potential within a relatively restricted regime. In contrast, when a network is trained with regularization terms that encourage low firing rates, the membrane potentials of the model neurons frequently assume highly negative values, which effectively removes many of them from the network’s ongoing computation. A novel membrane-voltage regularization technique is presented that mitigates this issue and facilitates the construction of exceptionally sparsely firing spiking DNNs.
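As a rough illustration of the idea, the following sketch penalizes membrane potentials that drift outside a target band, keeping sparsely firing neurons responsive. The band limits, the squared-hinge form of the penalty, and the loss weighting are illustrative assumptions, not the paper’s exact regularizer:

```python
import torch

def voltage_regularizer(v, v_min=-2.0, v_max=1.0):
    """v: membrane potentials over time, shape (time, batch, neurons)."""
    below = torch.relu(v_min - v)  # grows as a voltage drifts far negative
    above = torch.relu(v - v_max)  # grows as a voltage sits far above the band
    return (below ** 2 + above ** 2).mean()

v = torch.randn(100, 8, 32) * 3.0  # dummy membrane-potential trace
print(voltage_regularizer(v).item())

# Illustrative use alongside a firing-rate regularizer during training:
# loss = task_loss + lambda_rate * rate_loss + lambda_v * voltage_regularizer(v)
```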

The study validates experimentally that neuromorphic chips can run large deep learning networks far more energy-efficiently than non-neuromorphic hardware.

This may become significant as AI use grows. The research was conducted using Intel’s Loihi neuromorphic research chips; Intel Labs announced the second-generation Loihi 2 chip last year, which contains approximately one million artificial neurons.

The research paper, “A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware,” published in Nature Machine Intelligence, reports that Intel’s chips are up to 16 times more energy efficient in deep learning applications than non-neuromorphic hardware. The tested hardware consisted of 32 Loihi chips.

Source: https://arxiv.org/pdf/2107.03992.pdf

Although it may seem evident that specialized hardware would be more efficient for deep learning tasks, TU Graz asserts that this is the first time the advantage has been demonstrated experimentally.

According to TU Graz, this matters because such deep learning models are the subject of AI research worldwide, with the goal of deploying them in practical applications. However, the energy consumption of the hardware required to run these models remains a significant barrier to their widespread adoption.

In another paper titled “Brain-inspired computing needs a master plan,” the authors note that “the astounding achievements of high-end AI systems like DeepMind’s AlphaGo and AlphaZero require thousands of parallel processing units, each of which can consume approximately 200 watts.”

In the TU Graz study, the researchers analyzed algorithms that involve temporal processes. One example is a system’s ability to answer questions about a previously told story or to grasp context-based relationships between objects and people.

In this regard, the model emulated human short-term memory, or at least a memory mechanism believed to be used by the human brain. The researchers connected two types of deep learning networks: a recurrent (feedback) neural network responsible for the short-term memory and a feed-forward network that determines which of the discovered relations are crucial for completing the task at hand, as sketched below.
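The following is a hedged sketch of that two-network arrangement, with the spiking details abstracted away: a recurrent network holds a short-term memory of a story, and a feed-forward network scores which stored items are relevant to a question. The module names, sizes, and attention-style readout are illustrative assumptions, not the paper’s exact model:

```python
import torch
import torch.nn as nn

class RelationalReader(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.memory = nn.LSTM(dim, dim, batch_first=True)  # short-term memory
        self.relate = nn.Sequential(                       # feed-forward scorer
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, story_tokens, question_tokens):
        mem, _ = self.memory(self.embed(story_tokens))          # (B, T, dim)
        q = self.memory(self.embed(question_tokens))[0][:, -1]  # (B, dim)
        pairs = torch.cat([mem, q.unsqueeze(1).expand_as(mem)], dim=-1)
        weights = torch.softmax(self.relate(pairs), dim=1)      # relevance per item
        return (weights * mem).sum(dim=1)                       # answer representation

model = RelationalReader()
story = torch.randint(0, 1000, (2, 30))     # batch of token-ID "stories"
question = torch.randint(0, 1000, (2, 5))   # batch of token-ID questions
print(model(story, question).shape)         # torch.Size([2, 64])
```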

Neuromorphic hardware such as the Loihi chips is well suited to the fast, sparse, and unpredictable patterns of network activity found in the brain, and this kind of activity is required for the most energy-efficient AI applications.

Neuromorphic technology can increase the energy efficiency of deep learning tasks by rethinking their implementation from a biological perspective.


Neuromorphic chips have the potential for widespread use because of their low power consumption. According to Intel, its neuromorphic chip technology could be integrated into a CPU to bring energy-efficient AI processing to ordinary systems, or neuromorphic processors could be made available as a cloud service.
