Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition
The quality of feature extraction plays a significant role in the performance of speech emotion recognition (SER). To extract discriminative, affect-salient features from speech signals and thereby improve SER performance, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). First, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: sub-branches are added after each pooling layer to retain features at different resolutions, and the resulting features are fused by addition. Second, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that supplies the temporal structure of the emotional features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fused features: the attention mechanism computes the contribution of each network's features and adaptively fuses them by weighting. To restrain gradient divergence, the individual network features and the fused features are connected through shortcut connections to obtain the final fusion features used for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, with a recognition rate superior to most existing state-of-the-art methods.
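To make the architecture described in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of the three ideas it names: multi-stream additive fusion of features taken after each pooling stage, a Bi-LSTM stream for temporal structure, and multi-head-attention fusion with shortcut connections. It is not the authors' implementation; all layer widths, the input spectrogram shape, and the 7-class head are assumptions.

```python
# Illustrative sketch only -- not the authors' code. It mirrors the pieces named in the
# abstract: a multi-stream fully convolutional branch (MSFCN) whose per-pooling-stage
# features are projected to a common width and fused by addition, a Bi-LSTM branch for
# temporal structure, and multi-head-attention fusion with a shortcut connection.
# All layer widths, the input spectrogram shape, and the classifier head are assumptions.
import torch
import torch.nn as nn


class MSFCN(nn.Module):
    """Small convolutional trunk standing in for the AlexNet-based MSFCN: a sub-branch
    after each pooling stage keeps that resolution's features, and the streams are
    summed (additive fusion), as described in the abstract."""

    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            for cin, cout in ((1, 64), (64, 128), (128, 256))
        ])
        # One sub-branch per pooling stage: 1x1 projection + global average pooling.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c, out_dim, 1), nn.AdaptiveAvgPool2d(1))
            for c in (64, 128, 256)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, 1, n_mels, frames)
        feats = []
        for stage, branch in zip(self.stages, self.branches):
            x = stage(x)
            feats.append(branch(x).flatten(1))              # (batch, out_dim) per stream
        return torch.stack(feats).sum(dim=0)                # multi-stream fusion by adding


class MSCRNN_A(nn.Module):
    """Hybrid network: MSFCN stream + Bi-LSTM stream, fused by multi-head attention
    with a shortcut (residual) connection before classification."""

    def __init__(self, n_mels: int = 64, n_classes: int = 7, dim: int = 256):
        super().__init__()
        self.msfcn = MSFCN(out_dim=dim)
        self.bilstm = nn.LSTM(n_mels, dim // 2, batch_first=True, bidirectional=True)
        self.fusion = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:  # spec: (batch, 1, n_mels, frames)
        conv_feat = self.msfcn(spec)                              # (batch, dim)
        frames = spec.squeeze(1).transpose(1, 2)                  # (batch, frames, n_mels)
        lstm_out, _ = self.bilstm(frames)
        lstm_feat = lstm_out.mean(dim=1)                          # (batch, dim)
        streams = torch.stack([conv_feat, lstm_feat], dim=1)      # (batch, 2, dim)
        fused, _ = self.fusion(streams, streams, streams)         # attention weighs each stream
        fused = fused.mean(dim=1) + conv_feat + lstm_feat         # shortcut connection
        return self.classifier(fused)


if __name__ == "__main__":
    model = MSCRNN_A(n_mels=64, n_classes=7)                 # 7 classes as in EMODB/SAVEE
    logits = model(torch.randn(2, 1, 64, 300))               # 2 utterances, 300 frames
    print(logits.shape)                                      # torch.Size([2, 7])
```

In this sketch the two feature streams are stacked as a length-2 token sequence so that the multi-head attention can assign each stream an adaptive weight before the residual sum, which is one plausible reading of the fusion step the abstract describes.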
Main Authors: | Huawei Tao, Lei Geng, Shuai Shan, Jingchao Mai, Hongliang Fu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-07-01 |
Series: | Entropy |
Subjects: | speech emotion recognition; feature extraction; hybrid neural network; multi-head attention mechanism; feature fusion |
Online Access: | https://www.mdpi.com/1099-4300/24/8/1025 |
Collection: | DOAJ (Directory of Open Access Journals) |
Record ID: | doaj.art-5db0afd864794fd3863e4a54743aa53d |
ISSN: | 1099-4300 |
Citation: | Entropy, vol. 24, no. 8, article 1025 (2022) |
DOI: | 10.3390/e24081025 |
Author Affiliation: | College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China (all five authors) |