IBM Researchers Introduce an Analog AI Chip for Deep Learning Inference: Showcasing Critical Building Blocks of a Scalable Mixed-Signal Architecture

The ongoing AI revolution, set to reshape how we live and work, has been driven largely by deep neural networks (DNNs), most recently through foundation models and generative AI. Yet the conventional digital computing architectures that host these models limit their achievable performance and energy efficiency. While AI-specific hardware has emerged, many designs keep memory and processing units separate, forcing constant data movement between the two and reducing efficiency.

IBM Research has pursued innovative ways to reimagine AI computation, leading to the concept of analog in-memory computing, or analog AI. This approach draws inspiration from neural networks in biological brains, where synaptic strength governs communication between neurons. Analog AI employs nanoscale resistive devices such as phase-change memory (PCM) to store synaptic weights as conductance values. PCM devices transition between amorphous and crystalline states, encoding a continuum of conductance values and enabling non-volatile, local storage of the weights.
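
To make the weight-as-conductance idea concrete, here is a minimal Python sketch of how a signed weight matrix might be mapped onto differential pairs of PCM conductances and multiplied against an input vector via Ohm's and Kirchhoff's laws. The conductance range, differential encoding, and read-noise model are illustrative assumptions, not details of IBM's devices.

```python
import numpy as np

# Minimal sketch (not IBM's implementation): a signed weight matrix is
# mapped onto differential pairs of PCM conductances, and a matrix-vector
# multiply is carried out "in memory" via Ohm's and Kirchhoff's laws.
# The conductance range and read-noise model are illustrative assumptions.
G_MIN, G_MAX = 0.1e-6, 25e-6  # assumed conductance range, in siemens

def weights_to_conductances(W):
    """Map signed weights onto a differential pair (G+, G-) per weight."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    g_pos = G_MIN + scale * np.clip(W, 0.0, None)
    g_neg = G_MIN + scale * np.clip(-W, 0.0, None)
    return g_pos, g_neg

def analog_mvm(g_pos, g_neg, x, noise_std=0.01):
    """Column currents sum automatically (Kirchhoff's current law);
    read noise is modeled as multiplicative Gaussian (an assumption)."""
    noisy_pos = g_pos * (1 + noise_std * np.random.randn(*g_pos.shape))
    noisy_neg = g_neg * (1 + noise_std * np.random.randn(*g_neg.shape))
    return (noisy_pos - noisy_neg) @ x  # proportional to W @ x, plus noise

W = np.random.randn(4, 8)
x = np.random.randn(8)
g_pos, g_neg = weights_to_conductances(W)
print(analog_mvm(g_pos, g_neg, x))  # approximates W @ x up to a known scale
```

Because the differential pair stores positive and negative parts separately, the baseline conductance G_MIN cancels out and the column currents are exactly proportional to the ideal product, up to the device noise.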

IBM Research has taken a significant stride toward making analog AI a reality, reported in a recent Nature Electronics publication. The team introduced a cutting-edge mixed-signal analog AI chip tailored for a variety of DNN inference tasks. The chip, fabricated at IBM’s Albany NanoTech Complex, features 64 analog in-memory compute cores, each housing a 256-by-256 crossbar array of synaptic unit cells. Compact, time-based analog-to-digital converters are integrated into each core to move signals between the analog and digital domains, and lightweight digital processing units within each core handle simple neuronal activation functions and scaling operations.
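
The sketch below models a single tile at that level of abstraction: a 256-by-256 crossbar performs the matrix-vector multiply in the analog domain, and a simple uniform quantizer stands in for the time-based ADCs. The ADC bit width and the signal ranges here are assumptions for illustration, not the chip's actual specifications.

```python
import numpy as np

# Illustrative model of one in-memory compute tile: a 256-by-256 crossbar
# performs the matrix-vector multiply in the analog domain, and a uniform
# quantizer stands in for the time-based ADCs. ADC_BITS and the signal
# ranges are assumptions, not the chip's actual specifications.
TILE_ROWS, TILE_COLS = 256, 256
ADC_BITS = 8  # assumed ADC resolution

def adc_quantize(currents, bits=ADC_BITS):
    """Uniformly quantize (positive) analog column currents to digital codes."""
    full_scale = currents.max() + 1e-12
    levels = 2 ** bits - 1
    codes = np.round(currents / full_scale * levels)
    return codes / levels * full_scale  # dequantized, for easy comparison

G = np.random.uniform(0.0, 25e-6, size=(TILE_ROWS, TILE_COLS))  # conductances (S)
v = np.random.uniform(0.0, 0.2, size=TILE_ROWS)                 # input voltages (V)
column_currents = v @ G       # Kirchhoff summation along each bit line
digital_out = adc_quantize(column_currents)
print(np.abs(digital_out - column_currents).max())  # quantization error
```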

The chip’s architecture allows each core to handle the computations associated with a single DNN layer, with synaptic weights encoded as analog conductance values in the PCM devices. A global digital processing unit sits at the center of the chip, executing the more complex operations that certain neural networks require, and digital communication pathways link all of the cores to one another and to this central unit.
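
Conceptually, inference then proceeds as in the hypothetical sketch below: each core object holds one layer's weights (standing in for PCM conductances programmed on its crossbar), applies a local activation, and hands digital activations to the next core, with plain function calls standing in for the on-chip communication fabric.

```python
import numpy as np

# Hypothetical sketch of the layer-to-core mapping: each core holds one
# layer's weights (standing in for PCM conductances programmed on its
# crossbar), applies a local activation, and forwards digital activations.
# Plain function calls stand in for the on-chip communication fabric.
def relu(x):
    return np.maximum(x, 0.0)

class AnalogCore:
    def __init__(self, weights):
        self.weights = weights  # would be conductances on the real tile

    def forward(self, x):
        y = self.weights @ x    # analog matrix-vector multiply on the crossbar
        return relu(y)          # local digital activation unit

layer_weights = [np.random.randn(64, 128), np.random.randn(10, 64)]
cores = [AnalogCore(w) for w in layer_weights]

x = np.random.randn(128)
for core in cores:              # digital fabric carries activations core-to-core
    x = core.forward(x)
print(x.shape)                  # (10,)
```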

In terms of performance, the chip demonstrated an accuracy of 92.81% on the CIFAR-10 image dataset, a notable result for analog in-memory computing. The work integrated analog in-memory computing with digital processing units and a digital communication fabric, yielding a more efficient compute engine: the chip’s throughput per unit area, in giga-operations per second (GOPS), exceeded that of previous resistive-memory-based in-memory computing chips by more than a factor of 15 while maintaining high energy efficiency.


Leveraging advances in analog-to-digital converters, multiply-accumulate compute capabilities, and digital compute blocks, IBM Research has demonstrated many of the key components needed for a fast, low-power analog AI inference accelerator chip. A previously proposed accelerator architecture combines numerous analog in-memory computing tiles with specialized digital compute cores, connected via a massively parallel 2D mesh. Together with hardware-aware training techniques, this architecture is anticipated to deliver software-equivalent neural network accuracy across a wide range of models in the foreseeable future.
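
As a rough illustration of the hardware-aware training idea, the toy sketch below injects multiplicative weight noise into the forward pass of a small regression problem so that the learned weights remain accurate under PCM-like read noise. The noise model, its magnitude, and the task are simplifying assumptions made for illustration; IBM's open-source AIHWKit implements far more detailed device models.

```python
import numpy as np

# Toy sketch of hardware-aware training: multiplicative weight noise is
# injected in the forward pass so the learned weights stay accurate under
# PCM-like read noise. The noise model, its magnitude, and the regression
# task are simplifying assumptions made for illustration.
rng = np.random.default_rng(0)
W_true = rng.standard_normal((1, 16))       # target linear map
W = np.zeros((1, 16))                       # weights being trained
lr, noise_std = 0.05, 0.02

for step in range(500):
    x = rng.standard_normal(16)
    y_target = W_true @ x
    W_noisy = W * (1 + noise_std * rng.standard_normal(W.shape))
    y = W_noisy @ x                         # noisy "analog" forward pass
    grad = np.outer(y - y_target, x)        # straight-through gradient to W
    W -= lr * grad                          # digital weight update

print(np.abs(W - W_true).max())             # small residual despite the noise
```

Training against the noisy forward pass nudges the optimizer toward solutions that tolerate device-level perturbations, which is what allows analog hardware to approach software-equivalent accuracy.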


Check out the Paper and Reference Article. All credit for this research goes to the researchers on this project.


Arshad is an intern at MarktechPost. He is currently pursuing an Int. MSc in Physics at the Indian Institute of Technology Kharagpur. Understanding things at a fundamental level leads to new discoveries, which in turn lead to advances in technology. He is passionate about understanding nature at a fundamental level with the help of tools such as mathematical models, ML models, and AI.

