Learning a sparse codebook of facial and body microexpressions for emotion recognition

Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty lies in capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, many other subtle face and body movements convey no semantically meaningful information. We present a novel approach to this problem that exploits the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., a few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments on the AVEC 2012 dataset show that our approach achieves the best published performance on the arousal dimension using visual features alone. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.
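The pipeline described in the abstract maps naturally onto a standard sparse-coding recipe. The sketch below is purely illustrative and is not the authors' implementation: it assumes local space-time descriptors have already been extracted over short face-and-body windows, and it substitutes scikit-learn's MiniBatchDictionaryLearning for the paper's codebook learner, OMP-based sparse encoding for the encoding step, max pooling to aggregate codes per segment, and a linear SVR for predicting a continuous arousal value. All data shapes, parameter values, and the helper name encode_segment are hypothetical.

```python
# Minimal sketch of a sparse-codebook encoding of local space-time features.
# Everything below (descriptor dimension, codebook size, sparsity level,
# pooling, regressor) is an illustrative assumption, not the paper's setup.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical data: 5,000 local space-time descriptors of dimension 162
# (e.g., HOG/HOF-style), collected from many short face/body windows.
descriptors = rng.standard_normal((5000, 162))

# Learn a codebook (dictionary) of "microexpression" atoms from the data.
codebook = MiniBatchDictionaryLearning(
    n_components=256,   # codebook size (illustrative choice)
    alpha=1.0,          # sparsity weight in the learning objective
    batch_size=256,
    random_state=0,
).fit(descriptors)

def encode_segment(segment_descriptors: np.ndarray) -> np.ndarray:
    """Sparse-code one segment's descriptors and max-pool them into a
    single fixed-length representation for that segment."""
    codes = sparse_encode(
        segment_descriptors,
        codebook.components_,
        algorithm="omp",        # orthogonal matching pursuit
        n_nonzero_coefs=8,      # keep only the most salient atoms per descriptor
    )
    return np.abs(codes).max(axis=0)  # max pooling over the segment

# Hypothetical training: one pooled vector per segment, with a continuous
# arousal label per segment, as in dimensional emotion recognition.
segments = [rng.standard_normal((200, 162)) for _ in range(50)]
arousal = rng.uniform(-1, 1, size=50)
X = np.vstack([encode_segment(s) for s in segments])
SVR(kernel="linear").fit(X, arousal)
```

Max pooling keeps only each atom's strongest activation within a segment, which mirrors the abstract's intuition that only the most salient micro-temporal motion patterns should survive the encoding.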

Bibliographic Details
Main Authors: Song, Yale, Morency, Louis-Philippe, Davis, Randall
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: en_US
Published: Association for Computing Machinery (ACM) 2014
Online Access: http://hdl.handle.net/1721.1/86124
https://orcid.org/0000-0001-5232-7281
Date Issued: 2013-12
Type: Conference Paper
ISBN: 9781450321297
Published in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
DOI: http://dx.doi.org/10.1145/2522848.2522851
Citation: Yale Song, Louis-Philippe Morency, and Randall Davis. 2013. Learning a sparse codebook of facial and body microexpressions for emotion recognition. In Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13). ACM, New York, NY, USA, 237-244.
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Funding: United States. Office of Naval Research (N000140910625); National Science Foundation (U.S.) (IIS-1018055); National Science Foundation (U.S.) (IIS-1118018); United States. Army Research, Development, and Engineering Command