Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)

Background: Emotion recognition skills are predicted to be fundamental features in social robots. Since facial detection and recognition algorithms are compute-intensive operations, it is necessary to identify methods that can parallelize the algorithmic operations for large-scale information exchange in r...

Full description

Bibliographic Details
Main Authors: Grazia D’Onofrio, Laura Fiorini, Alessandra Sorrentino, Sergio Russo, Filomena Ciccone, Francesco Giuliani, Daniele Sancarlo, Filippo Cavallo
Format: Article
Language: English
Published: MDPI AG 2022-04-01
Series: Sensors
Subjects:
Online Access: https://www.mdpi.com/1424-8220/22/8/2861
_version_ 1797409393530109952
author Grazia D’Onofrio
Laura Fiorini
Alessandra Sorrentino
Sergio Russo
Filomena Ciccone
Francesco Giuliani
Daniele Sancarlo
Filippo Cavallo
author_facet Grazia D’Onofrio
Laura Fiorini
Alessandra Sorrentino
Sergio Russo
Filomena Ciccone
Francesco Giuliani
Daniele Sancarlo
Filippo Cavallo
author_sort Grazia D’Onofrio
collection DOAJ
description Background: Emotion recognition skills are predicted to be fundamental features in social robots. Since facial detection and recognition algorithms are compute-intensive operations, it is necessary to identify methods that can parallelize the algorithmic operations for large-scale information exchange in real time. The study aims were to identify whether traditional machine learning algorithms could be used to assess each user's emotions separately, to compare emotion recognition between two robotic modalities (static robot vs. robot in motion), and to evaluate the acceptability and usability of the assistive robot from an end-user point of view. Methods: Twenty-seven hospital employees (M = 12; F = 15) were recruited for the experiment, in which 60 positive, negative, or neutral images selected from the International Affective Picture System (IAPS) database were shown. The experiment was performed with the Pepper robot. In the experimental phase with Pepper in active mode, concordant mimicry was programmed based on the type of image (positive, negative, or neutral). During the experiment, the images were shown on a tablet on the robot's chest and through a web interface, with each slide lasting 7 s. For each image, the participants were asked to perform a subjective assessment of the perceived emotional experience using the Self-Assessment Manikin (SAM). After the participants used the robotic solution, the Almere Model Questionnaire (AMQ) and the System Usability Scale (SUS) were administered to assess the acceptability, usability, and functionality of the robotic solution. Analysis was performed on the video recordings. The evaluation of the three types of attitude (positive, negative, and neutral) was performed with two machine learning classification algorithms: k-nearest neighbors (KNN) and random forest (RF).
Results: According to the analysis of emotions performed on the recorded videos, the RF algorithm performed better than the KNN algorithm in terms of accuracy (mean ± sd = 0.98 ± 0.01) and execution time (mean ± sd = 5.73 ± 0.86 s). With the RF algorithm, the neutral, positive, and negative attitudes all had equally high precision (mean = 0.98) and F-measure (mean = 0.98). Most of the participants confirmed a high level of usability and acceptability of the robotic solution. Conclusions: The RF algorithm performed better than the KNN algorithm in terms of accuracy and execution time. The robot was not a disturbing factor in the arousal of emotions.
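The abstract names KNN classification and evaluation by per-class precision and F-measure but does not include the implementation. As an illustrative sketch only (not the paper's pipeline), a minimal pure-Python k-nearest-neighbors classifier and the per-class metrics can look like this; the toy 2D points standing in for per-frame facial features, the class labels, and the helper names are all hypothetical:

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among the k nearest training points."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

def precision_recall_f1(y_true, y_pred, cls):
    """Per-class precision, recall, and F-measure, as reported in the Results."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Toy 2D features standing in for facial features of three attitude classes.
train = [(0, 0), (0, 1), (1, 0),      # negative
         (5, 5), (5, 6), (6, 5),      # neutral
         (10, 0), (10, 1), (11, 0)]   # positive
labels = ["negative"] * 3 + ["neutral"] * 3 + ["positive"] * 3

test_x = [(0.5, 0.5), (5.5, 5.5), (10.5, 0.5)]
test_y = ["negative", "neutral", "positive"]
pred = [knn_predict(train, labels, x, k=3) for x in test_x]
accuracy = sum(t == p for t, p in zip(test_y, pred)) / len(test_y)
```

A random forest would replace `knn_predict` with majority voting over many decision trees trained on bootstrap samples; the metric helper applies unchanged to either classifier's predictions.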
first_indexed 2024-03-09T04:13:58Z
format Article
id doaj.art-56d097312a0645a999d1939a95344ea4
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-09T04:13:58Z
publishDate 2022-04-01
publisher MDPI AG
record_format Article
series Sensors
spelling doaj.art-56d097312a0645a999d1939a95344ea4 (2023-12-03T13:56:33Z)
Sensors, ISSN 1424-8220, MDPI AG, 2022-04-01, vol. 22, no. 8, art. 2861, doi: 10.3390/s22082861
Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
Grazia D’Onofrio: Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy
Laura Fiorini: Department of Industrial Engineering, University of Florence, 50121 Florence, Italy
Alessandra Sorrentino: Department of Industrial Engineering, University of Florence, 50121 Florence, Italy
Sergio Russo: Information and Communication Technology, Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy
Filomena Ciccone: Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy
Francesco Giuliani: Information and Communication Technology, Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy
Daniele Sancarlo: Complex Unit of Geriatrics, Department of Medical Sciences, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy
Filippo Cavallo: Department of Industrial Engineering, University of Florence, 50121 Florence, Italy
spellingShingle Grazia D’Onofrio
Laura Fiorini
Alessandra Sorrentino
Sergio Russo
Filomena Ciccone
Francesco Giuliani
Daniele Sancarlo
Filippo Cavallo
Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
Sensors
human-robot interaction
acceptability
non-verbal cues and expressiveness
monitoring of behavior and internal states of humans
title Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
title_full Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
title_fullStr Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
title_full_unstemmed Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
title_short Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
title_sort emotion recognizing by a robotic solution initiative emotive project
topic human-robot interaction
acceptability
non-verbal cues and expressiveness
monitoring of behavior and internal states of humans
url https://www.mdpi.com/1424-8220/22/8/2861
work_keys_str_mv AT graziadonofrio emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT laurafiorini emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT alessandrasorrentino emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT sergiorusso emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT filomenaciccone emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT francescogiuliani emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT danielesancarlo emotionrecognizingbyaroboticsolutioninitiativeemotiveproject
AT filippocavallo emotionrecognizingbyaroboticsolutioninitiativeemotiveproject