Multi-Modal Emotion Recognition Using Speech Features and Text-Embedding
Recently, intelligent personal assistants, chatbots, and AI speakers have come into broader use as communication interfaces, and the demand for more natural interaction has increased as well. Humans can express emotions in various ways, such as through voice tone or facial expressions;...
Main Authors: Sung-Woo Byun, Ju-Hee Kim, Seok-Pil Lee
Format: Article
Language: English
Published: MDPI AG, 2021-08-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/11/17/7967
Similar Items
- A Study on a Speech Emotion Recognition System with Effective Acoustic Features Using Deep Learning Algorithms
  by: Sung-Woo Byun, et al.
  Published: (2021-02-01)
- Emotion Recognition in Speech Using Neural Network
  by: Fatin B. Sofia, et al.
  Published: (2008-03-01)
- Electroglottograph-Based Speech Emotion Recognition via Cross-Modal Distillation
  by: Lijiang Chen, et al.
  Published: (2022-04-01)
- Robust Multi-Scenario Speech-Based Emotion Recognition System
  by: Fangfang Zhu-Zhou, et al.
  Published: (2022-03-01)
- A novel dual-modal emotion recognition algorithm with fusing hybrid features of audio signal and speech context
  by: Yurui Xu, et al.
  Published: (2022-08-01)