This AI Paper Introduces POYO-1: An Artificial Intelligence Framework Deciphering Neural Activity across Large-Scale Recordings with Deep Learning

Researchers from Georgia Tech, Mila, Université de Montréal, and McGill University introduce POYO-1, a training framework and architecture for modeling neural population dynamics across diverse, large-scale neural recordings. It tokenizes individual spikes to capture fine-grained temporal structure in neural activity and employs cross-attention with a PerceiverIO backbone. A large-scale multi-session model is trained on data from seven nonhuman primates, spanning over 27,000 neural units and more than 100 hours of recordings. The model adapts rapidly to new sessions, enabling few-shot performance on a variety of tasks and demonstrating a scalable approach to neural data analysis.
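The spike-level tokenization is the framework's key departure from binned representations: each spike becomes one token carrying the identity of the unit that fired together with its continuous timestamp. The NumPy sketch below illustrates the idea; the variable names, shapes, and random embeddings are illustrative stand-ins, not the authors' code.

```python
import numpy as np

# Illustrative sketch of spike tokenization: each spike maps to a
# learned embedding of the unit that fired, paired with the spike's
# continuous time. No binning is applied. All names/shapes are toy.
rng = np.random.default_rng(0)

n_units = 8          # neural units recorded in one session
d_model = 32         # token embedding dimension (illustrative)
unit_embeddings = rng.normal(size=(n_units, d_model))  # learned in practice

# Raw data: one (unit_id, spike_time_in_seconds) pair per detected spike.
spikes = [(3, 0.012), (0, 0.015), (3, 0.021), (5, 0.040)]

def tokenize_spikes(spikes, unit_embeddings):
    """Map each spike to a (token_vector, timestamp) pair."""
    unit_ids = np.array([u for u, _ in spikes])
    times = np.array([t for _, t in spikes])
    tokens = unit_embeddings[unit_ids]  # (n_spikes, d_model)
    return tokens, times

tokens, times = tokenize_spikes(spikes, unit_embeddings)
print(tokens.shape, times)  # (4, 32) [0.012 0.015 0.021 0.04]
```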

The study introduces a scalable framework for modeling neural population dynamics in diverse, large-scale neural recordings using transformers. Unlike previous models that operated on fixed sessions with a single set of neurons, this framework can train across subjects and across data from different sources. It leverages PerceiverIO and cross-attention layers to represent neural events efficiently, enabling few-shot performance on new sessions. The work showcases the potential of transformers for neural data processing and provides an efficient implementation to support training at scale.
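Cross-attention is what makes this session-agnostic: a fixed set of learned latent queries can attend over any number of spike tokens, yielding a fixed-size representation regardless of how many neurons or spikes a session contains. Below is a minimal single-head sketch of that PerceiverIO-style compression, with projections, residuals, and multi-head machinery omitted for brevity.

```python
import numpy as np

# Minimal PerceiverIO-style cross-attention (single head, no learned
# projections): fixed latent queries absorb a variable-length sequence
# of spike tokens into a fixed-size array. Illustrative only.
rng = np.random.default_rng(1)
d_model, n_latents, n_spikes = 32, 16, 4

latents = rng.normal(size=(n_latents, d_model))  # learned latent queries
tokens = rng.normal(size=(n_spikes, d_model))    # spike tokens (any count)

def cross_attention(queries, keys_values):
    scores = queries @ keys_values.T / np.sqrt(queries.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ keys_values                    # (n_latents, d_model)

compressed = cross_attention(latents, tokens)
print(compressed.shape)  # (16, 32): fixed size regardless of spike count
```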

Recent advances in machine learning have highlighted the potential of scaling up with large pretrained models like GPT. In neuroscience, there is a corresponding demand for a foundation model that can bridge diverse datasets, experiments, and subjects for a more comprehensive understanding of brain function. POYO is a framework that enables efficient training across neural recording sessions even when they involve different sets of neurons with no known correspondences. It uses a unique tokenization scheme and the PerceiverIO architecture to model neural activity, demonstrating transferability and improved brain decoding across sessions.
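One plausible reading of how the framework handles a brand-new session with unseen neurons, given the description above, is that only a fresh set of unit embeddings must be learned while the pretrained backbone stays frozen. The toy sketch below illustrates that division of labor with a stand-in linear "backbone"; it is an assumption-laden illustration, not the paper's training code.

```python
import numpy as np

# ASSUMPTION: new sessions are handled by learning new unit embeddings
# against a frozen pretrained model. The "backbone" here is a toy
# linear readout standing in for the real transformer.
rng = np.random.default_rng(4)
d_model = 32
backbone_W = rng.normal(size=(d_model, 2))  # frozen pretrained readout (toy)

n_new_units = 5
new_embeddings = 0.01 * rng.normal(size=(n_new_units, d_model))  # trainable

def decode(unit_ids, embeddings):
    """Toy decoder: average the active units' embeddings, then read out."""
    return embeddings[unit_ids].mean(axis=0) @ backbone_W

# One hand-written gradient step on the new embeddings only (MSE loss);
# backbone_W is never updated.
unit_ids = np.array([0, 2, 3])
target = np.array([0.5, -0.2])          # e.g., one hand-velocity sample
lr = 0.1
pred = decode(unit_ids, new_embeddings)
grad_pred = 2 * (pred - target)         # dL/dpred
grad_mean = backbone_W @ grad_pred      # dL/d(mean embedding)
new_embeddings[unit_ids] -= lr * grad_mean / len(unit_ids)
print(decode(unit_ids, new_embeddings))  # prediction moves toward target
```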

The framework models neural activity dynamics across diverse recordings by tokenizing spikes to capture temporal detail and employing cross-attention with a PerceiverIO architecture. A large multi-session model, trained on extensive primate datasets, can adapt to new sessions with unspecified neuron correspondence, enabling few-shot learning. Rotary Position Embeddings enhance the transformer's attention mechanism by encoding each token's timing. The approach operates at a fine 5 ms temporal resolution and achieves fine-grained results on benchmark datasets.
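Rotary Position Embeddings inject each token's timestamp into the attention computation by rotating pairs of feature dimensions through angles proportional to time, so relative timing is preserved in the query-key dot products. A minimal sketch, with illustrative frequencies and dimensions:

```python
import numpy as np

# Sketch of rotary position embeddings (RoPE) applied to continuous
# spike timestamps. The base frequency and dimensions are illustrative.
def rotary_embed(x, t, base=10000.0):
    """Rotate feature pairs of x by angles proportional to time t."""
    d = x.shape[-1]
    freqs = base ** (-np.arange(0, d, 2) / d)  # (d/2,) rotation frequencies
    angles = t[:, None] * freqs[None, :]       # (n_tokens, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin         # 2-D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(2)
tokens = rng.normal(size=(4, 32))               # token vectors
times = np.array([0.012, 0.015, 0.021, 0.040])  # spike times in seconds
rotated = rotary_embed(tokens, times)
print(rotated.shape)  # (4, 32)
```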

On the NLB-Maze benchmark, the framework achieved an R² of 0.8952, demonstrating its effectiveness at decoding neural activity. The pretrained model delivered competitive results on the same dataset without any weight updates, indicating its versatility. It also adapted rapidly to new sessions with unspecified neuron correspondence, achieving strong few-shot performance. The large-scale multi-session model performed well across diverse tasks, underscoring the framework's potential for comprehensive neural data analysis at scale.
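For context, R² (the coefficient of determination) scores decoded behavior, such as hand velocity, against the ground truth: 1.0 is a perfect reconstruction, while 0.0 is no better than predicting the mean. A small self-contained example with synthetic data:

```python
import numpy as np

# R^2 on synthetic data standing in for decoded vs. true behavior.
def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(3)
y_true = rng.normal(size=(1000, 2))                    # e.g., 2-D velocity
y_pred = y_true + 0.3 * rng.normal(size=y_true.shape)  # noisy decode
print(round(r_squared(y_true, y_pred), 4))             # ~0.91 on toy data
```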

In conclusion, this unified and scalable framework for neural population decoding adapts rapidly to new sessions with unspecified neuron correspondence and achieves strong performance on diverse tasks. The large-scale multi-session model, trained on data from nonhuman primates, showcases the framework's potential for comprehensive neural data analysis. The approach provides a robust tool for advancing neural data analysis, enables training at scale, and deepens insight into neural population dynamics.


Check out the Paper and the Project page for more details.
