A Deep Spatial and Temporal Aggregation Framework for Video-Based Facial Expression Recognition

Bibliographic Details
Main Authors: Xianzhang Pan, Guoliang Ying, Guodong Chen, Hongming Li, Wenshu Li
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8674456/
Description
Summary: Video-based facial expression recognition is a long-standing problem owing to the gap between visual features and emotions, the difficulty of tracking subtle muscle movements, and limited datasets. The key to solving this problem is to extract effective features that characterize facial expressions. We propose an effective framework to this end: both spatial and temporal information are exploited through the aggregation layer of a framework that fuses two state-of-the-art stream networks. We investigate different strategies for pooling across spatial and temporal information, and find that pooling jointly across both is effective for video-based facial expression recognition. The framework is end-to-end trainable for whole-video recognition. The main contribution of this work is the design of a novel, trainable deep neural network framework that fuses the spatial and temporal information of a video using CNNs and LSTMs. Experimental results on two public datasets, the RML and eNTERFACE05 databases, show that our framework outperforms previous state-of-the-art methods.
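The summary describes a pattern in which per-frame CNN features feed an LSTM and the spatial and temporal information are pooled before whole-video classification. Below is a minimal PyTorch sketch of that CNN-plus-LSTM pattern; the layer sizes, the class count, the temporal mean pooling, and all identifiers are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class SpatioTemporalExprNet(nn.Module):
    """Sketch of a CNN + LSTM video expression classifier (assumed layout)."""

    def __init__(self, num_classes=6, feat_dim=256, hidden_dim=128):
        super().__init__()
        # Spatial stream: a small CNN applied to each frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),             # spatial pooling per frame
        )
        self.proj = nn.Linear(64, feat_dim)
        # Temporal stream: an LSTM over the per-frame feature sequence.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, video):                    # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)             # (B*T, 3, H, W)
        feats = self.cnn(frames).flatten(1)      # (B*T, 64)
        feats = self.proj(feats).view(b, t, -1)  # (B, T, feat_dim)
        out, _ = self.lstm(feats)                # (B, T, hidden_dim)
        pooled = out.mean(dim=1)                 # temporal mean pooling
        return self.classifier(pooled)           # whole-video logits

model = SpatioTemporalExprNet()
clip = torch.randn(2, 16, 3, 64, 64)             # 2 clips of 16 frames each
logits = model(clip)                             # shape: (2, 6)

The temporal mean pooling over the LSTM outputs stands in for the joint spatio-temporal aggregation layer described in the abstract; other pooling strategies (max or attention-weighted) would slot in at the same point, and the whole module trains end to end from video to logits.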
ISSN:2169-3536