Music emotion recognition based on temporal convolutional attention network using EEG

Music is one of the primary ways to evoke human emotions. However, the emotional experience of music is subjective, making it difficult to determine which emotions a piece of music triggers in a given individual. To accurately identify the emotions evoked by different types of music, we first created an electroencephalogram (EEG) dataset stimulated by four types of music (fear, happiness, calm, and sadness). We then extracted differential entropy features from the EEG and built the emotion recognition model CNN-SA-BiLSTM to capture the temporal features of the EEG, using the global perception ability of the self-attention mechanism to improve recognition performance. An ablation experiment further verified the effectiveness of the model. The classification accuracy of this method in the valence and arousal dimensions is 93.45% and 96.36%, respectively. By applying the method to DEAP, a publicly available EEG dataset, we evaluated its generalization and reliability. In addition, we investigated the effects of different EEG bands and multi-band combinations on music emotion recognition; the results are consistent with findings from relevant neuroscience studies. Compared with other representative music emotion recognition work, this method achieves better classification performance and provides a promising framework for future research on emotion recognition systems based on brain-computer interfaces.
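The differential entropy (DE) feature mentioned in the abstract is, for a band-pass-filtered EEG segment assumed to be Gaussian, h(X) = ½ ln(2πeσ²). The record does not give the paper's exact preprocessing, so the following is only a minimal Python sketch of band-wise DE extraction; the band edges, filter order, sampling rate, and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG frequency bands in Hz; the paper's exact cut-offs may differ.
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 45),
}

def band_de_features(eeg, fs=256.0, window_s=1.0):
    """eeg: array of shape (channels, samples).
    Returns DE features of shape (windows, channels, bands), where the DE of a
    Gaussian segment is 0.5 * ln(2 * pi * e * variance)."""
    win = int(fs * window_s)
    n_win = eeg.shape[1] // win
    feats = np.zeros((n_win, eeg.shape[0], len(BANDS)))
    for bi, (lo, hi) in enumerate(BANDS.values()):
        # 4th-order Butterworth band-pass, applied zero-phase along time.
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        for w in range(n_win):
            seg = filtered[:, w * win:(w + 1) * win]
            feats[w, :, bi] = 0.5 * np.log(2 * np.pi * np.e * np.var(seg, axis=1))
    return feats
```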

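The abstract names the model CNN-SA-BiLSTM but the record does not include its layer configuration. The PyTorch sketch below therefore only illustrates the general shape such a pipeline could take: a CNN front end over per-window DE feature maps, a self-attention layer for global context across time, and a BiLSTM for temporal dynamics. All dimensions and layer choices are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNSABiLSTM(nn.Module):
    def __init__(self, n_channels=32, n_bands=5, hidden=64, n_classes=2):
        super().__init__()
        # CNN front end: treat the frequency bands as input channels and the
        # electrode axis as the spatial dimension of each time step's DE map.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        feat_dim = 32 * 8
        # Self-attention gives each time step a global view of the sequence.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # BiLSTM models temporal dynamics in both directions.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, bands, channels) DE features per window.
        B, T, F, C = x.shape
        h = self.cnn(x.reshape(B * T, F, C))   # (B*T, 32, 8)
        h = h.reshape(B, T, -1)                # (B, T, feat_dim)
        h, _ = self.attn(h, h, h)              # self-attention across time
        h, _ = self.bilstm(h)                  # (B, T, 2*hidden)
        return self.fc(h[:, -1])               # logits from the final step

# Example: 4 trials, 10 one-second windows, 5 bands, 32 electrodes.
logits = CNNSABiLSTM()(torch.randn(4, 10, 5, 32))  # -> shape (4, 2)
```

In this reading, one binary head per affective dimension (valence, arousal) would match the two reported accuracies, but that is also an assumption.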

Bibliographic Details
Main Authors: Yinghao Qiao, Jiajia Mu, Jialan Xie, Binghui Hu, Guangyuan Liu
Format: Article
Language: English
Published: Frontiers Media S.A., 2024-03-01
Series: Frontiers in Human Neuroscience, Vol. 18
ISSN: 1662-5161
DOI: 10.3389/fnhum.2024.1324897
Subjects: EEG; music emotion recognition; CNN; BiLSTM; self-attention
Online Access: https://www.frontiersin.org/articles/10.3389/fnhum.2024.1324897/full
Author Affiliations: School of Electronic and Information Engineering; Institute of Affective Computing and Information Processing; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing (all at Southwest University, Chongqing, China; each author is listed under all three)