Meta AI Researchers Introduce a Machine Learning Model for Decoding Speech Perception from Non-Invasive Brain Recordings

Deciphering speech from brain activity is a longstanding goal in healthcare and neuroscience, and invasive devices have recently enabled real progress: deep-learning algorithms trained on intracranial recordings can now decode basic linguistic elements. Extending these results to natural speech and non-invasive brain recordings, however, remains a challenge. Researchers from Meta introduce a machine learning model that uses contrastive learning to decode representations of perceived speech from non-invasive recordings. Trained across four datasets, the method achieves promising results and offers a potential pathway to decoding language from brain activity without surgery, with implications for both healthcare and neuroscience.

Building on recent successes of invasive devices in decoding linguistic elements, the researchers study decoding speech from non-invasive recordings of brain activity. Their method trains a contrastive learning model to predict self-supervised representations of the speech a participant heard. Compared with invasive studies, the approach covers a far larger vocabulary, and the authors discuss potential extensions to speech production. All datasets were collected from healthy adult volunteers during passive listening, with ethical approval.
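To make the target side of this setup concrete, the sketch below extracts self-supervised speech representations with a pretrained wav2vec 2.0 model, the family of models the paper builds on. The torchaudio WAV2VEC2_BASE bundle and the audio filename are illustrative stand-ins, not the paper's exact checkpoint or data pipeline.

```python
# Sketch of the decoding target: features from a pretrained self-supervised
# speech model. WAV2VEC2_BASE and the filename are illustrative stand-ins.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
wav2vec = bundle.get_model().eval()

waveform, sr = torchaudio.load("story_clip.wav")  # hypothetical audio clip
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.no_grad():
    # extract_features returns per-layer activations of shape (batch, frames, dim).
    layers, _ = wav2vec.extract_features(waveform)
    speech_latents = layers[-1]  # use the last layer as the decoding target
```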

The central design choice is what the decoder should predict. Rather than mapping brain signals directly to words or raw audio, the model is trained with a contrastive objective to retrieve rich, self-supervised representations of the perceived speech, a combination the baseline comparisons show is key to handling noisy, non-invasive data.
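A minimal sketch of such a contrastive objective follows: within a training batch, each brain-derived latent must identify its matching speech latent, while every other segment in the batch serves as a negative. The pooled (batch, dim) shapes and the temperature value are illustrative assumptions, not the paper's hyperparameters.

```python
# Sketch: CLIP-style contrastive objective aligning brain-derived latents
# with speech latents. Shapes and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(z_brain: torch.Tensor, y_speech: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """z_brain, y_speech: (batch, dim) pooled latents for matched segments."""
    z = F.normalize(z_brain, dim=-1)
    y = F.normalize(y_speech, dim=-1)
    logits = z @ y.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(len(z), device=z.device)
    # Each brain segment must pick out its own speech segment from the batch.
    return F.cross_entropy(logits, targets)
```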

The method frames a neural decoding task: identifying, from non-invasive brain recordings, which speech segment a participant perceived. The model is trained and evaluated on four public datasets comprising 175 volunteers recorded with MEG or EEG while they listened to stories. A single convolutional architecture is trained simultaneously across all participants, with a participant-specific layer accommodating individual differences. Comparisons with baselines underscore that both the contrastive objective and the pretrained speech representations are essential to performance.
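The sketch below illustrates the shared-architecture idea: one convolutional backbone serves every participant, while a small per-participant layer absorbs individual differences before the shared stack. Channel counts, layer widths, and the block structure here are simplified assumptions, not the paper's exact architecture.

```python
# Sketch: a shared convolutional "brain module" trained across participants.
# The per-participant 1x1 convolution approximates the idea of a
# subject-specific layer; all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class BrainDecoder(nn.Module):
    def __init__(self, n_subjects: int, n_channels: int = 208, latent_dim: int = 768):
        super().__init__()
        # One 1x1 conv per participant remaps their sensors into a shared space.
        self.subject_layers = nn.ModuleList(
            [nn.Conv1d(n_channels, n_channels, kernel_size=1) for _ in range(n_subjects)]
        )
        self.backbone = nn.Sequential(  # shared across all participants
            nn.Conv1d(n_channels, 320, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv1d(320, 320, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv1d(320, latent_dim, kernel_size=3, padding=1),
        )

    def forward(self, meg: torch.Tensor, subject_id: int) -> torch.Tensor:
        """meg: (batch, n_channels, time) -> (batch, latent_dim, time)."""
        x = self.subject_layers[subject_id](meg)
        return self.backbone(x)
```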

Decoding accuracy varied across participants and datasets. At the word level, the decoder reliably identified the correct words and discriminated them from negative candidates. Ablation comparisons confirmed that the contrastive objective, the pretrained speech representations, and the shared convolutional architecture each contribute to decoding accuracy, and analyses indicate that the decoder's predictions rely primarily on lexical and contextual semantic representations.
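One way to frame this evaluation, sketched below under illustrative assumptions, is to rank every candidate speech segment by cosine similarity to the predicted brain latent and count how often the true segment lands in the top k.

```python
# Sketch: segment identification accuracy. Each predicted brain latent ranks
# all candidate speech segments by cosine similarity; accuracy counts how
# often the true segment appears in the top k. Names are illustrative.
import torch
import torch.nn.functional as F

def identification_accuracy(z_brain: torch.Tensor, y_candidates: torch.Tensor,
                            top_k: int = 10) -> float:
    """z_brain: (n, dim) predicted latents; y_candidates: (n, dim) true latents."""
    sims = F.normalize(z_brain, dim=-1) @ F.normalize(y_candidates, dim=-1).t()
    ranks = sims.argsort(dim=-1, descending=True)  # (n, n) candidate ranking
    targets = torch.arange(len(z_brain), device=z_brain.device).unsqueeze(1)
    hits = (ranks[:, :top_k] == targets).any(dim=-1)  # true segment in top k?
    return hits.float().mean().item()
```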

In sum, the researchers introduce a contrastive learning model for decoding perceived speech from non-invasive brain recordings. The model identifies the matching speech segment with an average accuracy of up to 41%, rising to 80% for the best-performing participants. These results hold promise for non-invasive language decoding in healthcare and neuroscience applications.

Future research should clarify why decoding accuracy varies across participants and datasets, and probe the model's performance on more intricate linguistic attributes and real-time speech perception. Assessing generalization to other brain recording and imaging techniques is equally important, and examining whether the model captures prosody and phonetic features would give a more complete picture of speech decoding.


Check out the Paper. All credit for this research goes to the researchers on this project.



Hello, My name is Adnan Hassan. I am a consulting intern at Marktechpost and soon to be a management trainee at American Express. I am currently pursuing a dual degree at the Indian Institute of Technology, Kharagpur. I am passionate about technology and want to create new products that make a difference.

