Quantifying the efficacy of an automated facial coding software using videos of parents

Introduction: This work explores the use of an automated facial coding software, FaceReader, as an alternative and/or complementary method to manual coding.

Methods: We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos, obtained during real-life parent-infant interactions in the home, were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories (Positive, Neutral, Negative, and Surprise) before contingency tables were employed to examine the software's detection rate and quantify the agreement between manual and automated coding. Using binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions. An interaction term was used to investigate the impact of gender on our models, seeking to estimate its influence on predictive accuracy.

Results: We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and we discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whilst Negative expressions performed poorly. Mothers' faces were more important for predicting Positive and Neutral expressions, whilst fathers' faces were more important in predicting Negative and Surprise expressions.

Discussion: We discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in automated facial coding research.
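
As a concrete illustration of the kind of analysis the abstract describes, the sketch below builds a manual-versus-automated contingency table and fits a binary logistic regression with a gender interaction term. It is a minimal, hypothetical Python example rather than the authors' pipeline: the file coded_frames.csv and the columns manual_label, facereader_label, facereader_detected, fr_positive_intensity, and parent_gender are assumed for illustration only.

```python
# Hypothetical sketch (not the authors' code) of the analysis described in the abstract:
# a contingency table comparing manual and automated labels, and a binary logistic
# regression predicting a manual category from a FaceReader output, with a gender
# interaction term. File and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# One row per coded video frame (hypothetical data layout).
df = pd.read_csv("coded_frames.csv")

# Detection rate: proportion of manually coded frames in which FaceReader found a face,
# split by parent gender.
detection_rate = df.groupby("parent_gender")["facereader_detected"].mean()
print(detection_rate)

# Contingency table of manual vs. automated categories (detected frames only),
# used to quantify agreement between the two coding methods.
detected = df[df["facereader_detected"] == 1]
agreement = pd.crosstab(detected["manual_label"], detected["facereader_label"])
print(agreement)

# Binary logistic regression: does FaceReader's Positive output predict a manual
# 'Positive' classification, and does the effect differ by parent gender?
detected = detected.assign(
    manual_positive=(detected["manual_label"] == "Positive").astype(int)
)
model = smf.logit(
    "manual_positive ~ fr_positive_intensity * C(parent_gender)",
    data=detected,
).fit()
print(model.summary())
```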

Bibliographic Details
Main Authors: R. Burgess, I. Culpin, I. Costantini, H. Bould, I. Nabney, R. M. Pearson
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-07-01
Series: Frontiers in Psychology
ISSN: 1664-1078
Subjects: automated facial coding; FaceReader; facial expressions; parenting; ALSPAC
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1223806/full
Author Affiliations:
R. Burgess: The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom
I. Culpin: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom; Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King's College London, London, United Kingdom
I. Costantini: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom
H. Bould: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom; The Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol, United Kingdom; The Gloucestershire Health and Care NHS Foundation Trust, Gloucester, United Kingdom
I. Nabney: The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom
R. M. Pearson: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom; The Department of Psychology, Manchester Metropolitan University, Manchester, United Kingdom