Neural decoding of music from the EEG
Abstract Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that are able to decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bidirectional long short-term memory (biLSTM) deep learning network to first extract neural information from the EEG related to music listening, and then to decode and reconstruct the individual pieces of music a participant was listening to. We further validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to reconstruct music, via our fMRI-informed EEG source analysis approach, with a mean rank accuracy of 71.8% (n = 18, p < 0.05). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% (n = 19, p < 0.05). This demonstrates that our decoding model may use fMRI-informed source analysis to aid EEG-based decoding and reconstruction of acoustic information from brain activity, and makes a step towards building EEG-based neural decoders for other complex information domains such as other acoustic, visual, or semantic information.
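The abstract reports results as "mean rank accuracy": each reconstruction is compared against every candidate stimulus, the rank of the true stimulus is noted, and ranks are averaged and normalised so that 1.0 is perfect identification and 0.5 is chance. The paper's own evaluation code is not shown here; the sketch below is an illustrative implementation of this metric under the assumption that a trial-by-candidate similarity matrix (e.g. correlations between reconstructed and actual audio envelopes) has already been computed, with the true stimulus for trial i at column i.

```python
import numpy as np

def mean_rank_accuracy(similarity):
    """Illustrative rank-accuracy metric (not the authors' code).

    similarity[i, j] = similarity between the reconstruction on trial i
    and candidate stimulus j; the true stimulus for trial i is column i.
    Returns a score in [0, 1], where 1.0 = the true stimulus is always
    ranked first and 0.5 = chance level.
    """
    n_trials, n_candidates = similarity.shape
    accuracies = []
    for i in range(n_trials):
        # Rank of the true stimulus among all candidates (1 = best match).
        # Ties are counted in favour of the true stimulus here.
        rank = 1 + np.sum(similarity[i] > similarity[i, i])
        # Map rank 1..n_candidates linearly onto accuracy 1.0..0.0.
        accuracies.append(1.0 - (rank - 1) / (n_candidates - 1))
    return float(np.mean(accuracies))
```

Under this convention, the reported 71.8% means that, on average, the true piece of music was ranked well above the midpoint of the candidate list, and significance can be assessed by permuting the correspondence between reconstructions and stimuli.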
Main Author: | Ian Daly |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2023-01-01 |
Series: | Scientific Reports |
Online Access: | https://doi.org/10.1038/s41598-022-27361-x |
author | Ian Daly |
collection | DOAJ |
first_indexed | 2024-04-10T22:49:02Z |
id | doaj.art-1e9813fb49e045b28fda13b1fdf73737 |
institution | Directory Open Access Journal |
issn | 2045-2322 |
last_indexed | 2024-04-10T22:49:02Z |
spelling | Ian Daly (Brain-Computer Interfacing and Neural Engineering Lab, Department of Computer Science and Electronic Engineering, University of Essex). Neural decoding of music from the EEG. Scientific Reports, Nature Portfolio, 2023. https://doi.org/10.1038/s41598-022-27361-x |