Strength Is in Numbers: Can Concordant Artificial Listeners Improve Prediction of Emotion from Speech?
Humans can communicate their emotions by modulating facial expressions or the tone of their voice. Although numerous applications enable machines to read facial emotions and recognize the content of verbal messages, methods for speech emotion recognition are still in their infancy. Yet, fas...
Main Authors: Eugenio Martinelli, Arianna Mencattini, Elena Daprati, Corrado Di Natale
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2016-01-01
Series: PLoS ONE
Online Access: http://europepmc.org/articles/PMC5001724?pdf=render
Similar Items
- Assembloid learning: opportunities and challenges for personalized approaches to brain functioning in health and disease
  by: Arianna Mencattini, et al.
  Published: (2024-04-01)
- From Petri Dishes to Organ on Chip Platform: The Increasing Importance of Machine Learning and Image Analysis
  by: Arianna Mencattini, et al.
  Published: (2019-02-01)
- Online Feature Selection for Robust Classification of the Microbiological Quality of Traditional Vanilla Cream by Means of Multispectral Imaging
  by: Alexandra Lianou, et al.
  Published: (2019-09-01)
- Perception of Emotion in Conversational Speech by Younger and Older Listeners
  by: Juliane Schmidt, et al.
  Published: (2016-05-01)
- Human Listeners Can Accurately Judge Strength and Height Relative to Self from Aggressive Roars and Speech
  by: Jordan Raine, et al.
  Published: (2018-06-01)