A Study on a Speech Emotion Recognition System with Effective Acoustic Features Using Deep Learning Algorithms
The goal of a human interface is to recognize the user's emotional state precisely. In speech emotion recognition research, the most important issue is effectively pairing the extraction of proper speech features with an appropriate classification engine. Well-defined speech databases ar...
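The abstract describes a system that pairs acoustic feature extraction with a deep-learning classification engine. Below is a minimal, hypothetical sketch of that kind of two-stage pipeline; the MFCC feature set, the network shape, the librosa/PyTorch library choices, the emotion label set, and the file name `example_clip.wav` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: acoustic feature extraction followed by a small deep classifier.
# Feature set, labels, and architecture are assumptions for illustration only.
import librosa
import numpy as np
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed label set

def extract_features(wav_path: str, sr: int = 16000, n_mfcc: int = 40) -> np.ndarray:
    """Load a clip and summarize frame-level MFCCs into one fixed-length vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)        # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # (2 * n_mfcc,)

class EmotionClassifier(nn.Module):
    """Small fully connected network mapping acoustic features to emotion logits."""
    def __init__(self, in_dim: int, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    feats = extract_features("example_clip.wav")  # hypothetical input file
    model = EmotionClassifier(in_dim=feats.shape[0])
    logits = model(torch.from_numpy(feats).float().unsqueeze(0))
    print(EMOTIONS[logits.argmax(dim=1).item()])
```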
| Main Authors: | Sung-Woo Byun, Seok-Pil Lee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2021-02-01 |
| Series: | Applied Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3417/11/4/1890 |
Similar Items
- Multi-Modal Emotion Recognition Using Speech Features and Text-Embedding
  by: Sung-Woo Byun, et al.
  Published: (2021-08-01)
- Design of a Multi-Condition Emotional Speech Synthesizer
  by: Sung-Woo Byun, et al.
  Published: (2021-01-01)
- Emotion Recognition in Speech Using Neural Network
  by: Fatin B. Sofia, et al.
  Published: (2008-03-01)
- Two-Way Feature Extraction for Speech Emotion Recognition Using Deep Learning
  by: Apeksha Aggarwal, et al.
  Published: (2022-03-01)
- Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features
  by: Jennifer Santoso, et al.
  Published: (2022-01-01)