Speech emotion recognition using spectrogram based neural structured learning



Bibliographic Details
Main Authors: Sivan, Dawn; Haripriya, P. H.; Jose, Rajan
Format: Conference or Workshop Item
Language: English
Published: Universiti Malaysia Pahang, 2022
Online Access:http://umpir.ump.edu.my/id/eprint/36833/1/Speech%20emotion%20recognition%20using%20spectrogram%20based%20neural%20structured%20learning.pdf
http://umpir.ump.edu.my/id/eprint/36833/7/Speech%20Emotion%20Recognition%20Using%20Spectrogram%20Based%20Neural%20Structured_FULL.pdf
Description
Summary: Human emotions play a crucial role in daily life. Emotion analysis from auditory data alone is difficult because the visual cues of the human face are unavailable. This paper therefore reports an emotion recognition system based on robust features and machine learning applied to audio speech. Audio recordings serve as input to the person-independent system, from which spectrogram values are extracted as features. These features are then used to train an emotion classifier via Neural Structured Learning (NSL), a fast and accurate deep learning approach. In experiments on an emotional speech dataset, the proposed combination of spectrogram features and NSL achieved higher recognition rates than other known models. The system can be deployed in smart environments such as homes or clinics to support healthcare, music recommendation, customer support, and marketing, among other applications. Decisions can thus be made close to where the data originates rather than at distant processing sites. The Toronto Emotional Speech Set (TESS), which contains seven emotions, was used for this research. On this dataset, the algorithm achieves an accuracy of approximately 97%.
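The following is a minimal sketch, not the authors' code, of the pipeline the summary describes: log-mel spectrogram features extracted from audio clips, then a classifier trained with TensorFlow's Neural Structured Learning adversarial-regularization wrapper. The file layout, layer sizes, and hyperparameters are illustrative assumptions; the paper does not specify them here.

    # Sketch of spectrogram + NSL training, assuming librosa and
    # the neural_structured_learning package are installed.
    import numpy as np
    import librosa
    import tensorflow as tf
    import neural_structured_learning as nsl

    NUM_EMOTIONS = 7  # TESS covers seven emotion classes

    def spectrogram_features(path, sr=22050, n_mels=64, max_frames=128):
        """Load an audio file and return a fixed-size log-mel spectrogram."""
        y, _ = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        logmel = librosa.power_to_db(mel, ref=np.max)
        # Pad or truncate along time so every clip has the same shape.
        if logmel.shape[1] < max_frames:
            logmel = np.pad(logmel, ((0, 0), (0, max_frames - logmel.shape[1])))
        return logmel[:, :max_frames].astype("float32")

    # Plain Keras base model over the spectrogram features; the input
    # name "feature" must match the key used in the training dict below.
    base_model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 128), name="feature"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])

    # NSL wraps the base model so each batch is also trained against
    # adversarially perturbed inputs; multiplier and step size are
    # illustrative values.
    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2,
                                                 adv_step_size=0.05)
    adv_model = nsl.keras.AdversarialRegularization(
        base_model, label_keys=["label"], adv_config=adv_config)
    adv_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # With x_train of shape (N, 64, 128) and integer labels y_train:
    # adv_model.fit({"feature": x_train, "label": y_train},
    #               batch_size=32, epochs=20)

The wrapped model takes dictionary-style inputs holding both features and labels, which is the documented NSL training pattern; after training, the unwrapped base_model can be used for inference on spectrograms alone.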