Personalized Estimation of Engagement From Videos Using Active Learning With Deep Reinforcement Learning

Bibliographic Details
Main Authors: Rudovic, Ognjen, Park, Hae Won, Busche, John, Schuller, Bjorn, Breazeal, Cynthia, Picard, Rosalind W.
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE) 2021
Online Access: https://hdl.handle.net/1721.1/137137
Description
Summary: © 2019 IEEE. Perceiving users' engagement accurately is important for technologies that need to respond to learners in a natural and intelligent way. In this paper, we address the problem of automated estimation of engagement from videos of child-robot interactions recorded in unconstrained environments (kindergartens). This is challenging due to diverse and person-specific styles of expressing engagement through facial and body gestures, as well as illumination changes, partial occlusion, and a changing background in the classroom as each child is active. To tackle these challenges, we propose a novel deep reinforcement learning architecture for active learning and estimation of engagement from video data. The key to our approach is the learning of a personalized policy that enables the model to decide whether to estimate the child's engagement level (low, medium, high) or, when uncertain, to query a human for a video label. Queried videos are labeled by a human expert in an offline manner and used to personalize the policy and engagement classifier to a target child over time. We show, on a database of 43 children involved in robot-assisted learning activities (8 sessions over 3 months), that this combined human-AI approach can easily adapt its interpretation of engagement to the target child using only a handful of labeled videos, while remaining robust to the many complex influences on the data. The results show large improvements over a non-personalized approach and over traditional active learning methods.
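
The abstract describes a learned policy that decides, per video, whether to output an engagement estimate or to query a human expert for a label. The sketch below only illustrates that predict-or-query interface and is not the paper's method: the actual policy is trained with deep reinforcement learning, whereas here a fixed confidence threshold stands in for it, and the function names, probabilities, and threshold value are assumptions made for the example.

    import numpy as np

    # Engagement levels referenced in the abstract (low, medium, high).
    LEVELS = ["low", "medium", "high"]

    def predict_or_query(class_probs, query_threshold=0.6):
        """Illustrative stand-in for the learned policy: predict when the
        classifier is confident, otherwise query a human for a label.
        The threshold and probabilities are hypothetical."""
        class_probs = np.asarray(class_probs, dtype=float)
        if class_probs.max() >= query_threshold:
            return "predict", LEVELS[int(class_probs.argmax())]
        return "query", None

    # Toy per-video class probabilities (placeholders, not real data).
    video_probs = [
        [0.10, 0.15, 0.75],  # confident -> predict "high"
        [0.40, 0.35, 0.25],  # uncertain -> query the human expert
    ]

    labeled_pool = []  # queried videos, later used to personalize the model
    for probs in video_probs:
        action, level = predict_or_query(probs)
        if action == "predict":
            print("predicted engagement:", level)
        else:
            human_label = "medium"  # stand-in for the offline expert annotation
            labeled_pool.append((probs, human_label))
            print("queried human; stored label:", human_label)

In the paper, the accumulated labeled pool is what personalizes both the policy and the engagement classifier to the target child over time; the threshold rule above is only a simplified surrogate for that learned decision.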