Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer

Bibliographic Details
Main Authors: Rizwan Ullah, Muhammad Asif, Wahab Ali Shah, Fakhar Anjam, Ibrar Ullah, Tahir Khurshaid, Lunchakorn Wuttisittikulkij, Shashi Shah, Syed Mansoor Ali, Mohammad Alibakhshikenari
Format: Article
Language: English
Published: MDPI AG, 2023-07-01
Series: Sensors
Subjects: speech emotion recognition; convolutional neural networks; convolutional Transformer encoder; multi-head attention; spatial features; temporal features
Online Access: https://www.mdpi.com/1424-8220/23/13/6212
author Rizwan Ullah
Muhammad Asif
Wahab Ali Shah
Fakhar Anjam
Ibrar Ullah
Tahir Khurshaid
Lunchakorn Wuttisittikulkij
Shashi Shah
Syed Mansoor Ali
Mohammad Alibakhshikenari
collection DOAJ
description Speech emotion recognition (SER) is a challenging task in human–computer interaction (HCI) systems. One of the key challenges in speech emotion recognition is to extract the emotional features effectively from a speech utterance. Despite the promising results of recent studies, they generally do not leverage advanced fusion algorithms for the generation of effective representations of emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. We stack two parallel CNNs for spatial feature representation in parallel to a Transformer encoder for temporal feature representation, thereby simultaneously expanding the filter depth and reducing the feature map with an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions. We augment and intensify the variations in the dataset to minimize model overfitting. Additive White Gaussian Noise (AWGN) is used to augment the RAVDESS dataset. With the spatial and sequential feature representations of CNNs and the Transformer, the SER model achieves 82.31% accuracy for eight emotions on a hold-out dataset. In addition, the SER system is evaluated with the IEMOCAP dataset and achieves 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets show the success of the presented SER system and demonstrate an absolute performance improvement over the state-of-the-art (SOTA) models.
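
The description sketches a concrete architecture: two parallel CNN branches extract spatial features while a Transformer encoder models temporal structure, the branch outputs are fused for classification, and AWGN is used to augment the training audio. The PyTorch sketch below illustrates that fusion pattern only; the branch depths, kernel sizes, pooling choices, mean-pooled encoder output, and the add_awgn helper with its 20 dB default SNR are illustrative assumptions, not the authors' published configuration.

# A minimal PyTorch sketch of the fusion pattern described in the abstract:
# two parallel CNN branches (spatial features) alongside a Transformer
# encoder (temporal features), concatenated before classification. All
# layer sizes and the AWGN SNR are illustrative assumptions.
import torch
import torch.nn as nn

def add_awgn(waveform: torch.Tensor, snr_db: float = 20.0) -> torch.Tensor:
    # Additive White Gaussian Noise augmentation at a target SNR (in dB).
    signal_power = waveform.pow(2).mean()
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return waveform + torch.randn_like(waveform) * noise_power.sqrt()

class ParallelCNNTransformerSER(nn.Module):
    def __init__(self, n_mels=128, n_classes=8, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        def cnn_branch(k):
            # Filter depth grows (1 -> 16 -> 32) while pooling shrinks the map.
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=k, padding=k // 2), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=k, padding=k // 2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),            # -> (B, 32, 1, 1)
            )
        self.cnn_a = cnn_branch(3)                  # small receptive field
        self.cnn_b = cnn_branch(7)                  # larger receptive field
        self.proj = nn.Linear(n_mels, d_model)      # frame-wise projection
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Fusion head: concatenate spatial and temporal representations.
        self.classifier = nn.Linear(32 + 32 + d_model, n_classes)

    def forward(self, mel):
        # mel: (batch, n_mels, time), e.g. a log-mel spectrogram
        x = mel.unsqueeze(1)                                      # (B, 1, M, T)
        spatial_a = self.cnn_a(x).flatten(1)                      # (B, 32)
        spatial_b = self.cnn_b(x).flatten(1)                      # (B, 32)
        temporal = self.encoder(self.proj(mel.transpose(1, 2)))   # (B, T, D)
        temporal = temporal.mean(dim=1)                           # pool over time
        return self.classifier(torch.cat([spatial_a, spatial_b, temporal], dim=1))

# Example: augment a dummy waveform, then classify a dummy spectrogram batch.
noisy = add_awgn(torch.randn(16000), snr_db=20.0)     # 1 s of audio at 16 kHz
logits = ParallelCNNTransformerSER()(torch.randn(4, 128, 300))  # -> (4, 8)

Concatenating the pooled CNN outputs with the time-averaged encoder states is one simple fusion choice; the paper's multi-head convolutional Transformer may combine the branches differently.
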
first_indexed 2024-03-11T01:28:03Z
format Article
id doaj.art-13b86cbc2e2b43c9a60951f461cf3c5c
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-11T01:28:03Z
publishDate 2023-07-01
publisher MDPI AG
record_format Article
series Sensors
doi 10.3390/s23136212
citation Sensors, vol. 23, no. 13, art. 6212, published 2023-07-01 by MDPI AG (ISSN 1424-8220)
affiliations Rizwan Ullah: Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand
Muhammad Asif: Department of Electrical Engineering, Main Campus, University of Science & Technology, Bannu 28100, Pakistan
Wahab Ali Shah: Department of Electrical Engineering, Namal University, Mianwali 42250, Pakistan
Fakhar Anjam: Department of Electrical Engineering, Main Campus, University of Science & Technology, Bannu 28100, Pakistan
Ibrar Ullah: Department of Electrical Engineering, Kohat Campus, University of Engineering and Technology Peshawar, Kohat 25000, Pakistan
Tahir Khurshaid: Department of Electrical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
Lunchakorn Wuttisittikulkij: Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand
Shashi Shah: Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand
Syed Mansoor Ali: Department of Physics and Astronomy, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
Mohammad Alibakhshikenari: Department of Signal Theory and Communications, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain
title Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer
topic speech emotion recognition
convolutional neural networks
convolutional Transformer encoder
multi-head attention
spatial features
temporal features
url https://www.mdpi.com/1424-8220/23/13/6212