Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild
Computer classification of facial expressions requires large amounts of data and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively...
Main Authors: McDuff, Daniel Jonathan; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W.; El Kaliouby, Rana
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE), 2013
Online Access: http://hdl.handle.net/1721.1/80733 ; https://orcid.org/0000-0002-5661-0022
author | McDuff, Daniel Jonathan; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W.; El Kaliouby, Rana |
author2 | Massachusetts Institute of Technology. Media Laboratory |
collection | MIT |
description | Computer classification of facial expressions requires large amounts of data, and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials and were simultaneously filmed using their webcams. They answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature tracker failures, and gender; 2) the locations of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again the stimulus videos; and 4) baseline performance of detection algorithms on this dataset. This data is available for distribution to researchers online; the EULA can be found at: http://www.affectiva.com/facial-expression-dataset-am-fed/. |
format | Article |
id | mit-1721.1/80733 |
institution | Massachusetts Institute of Technology |
language | en_US |
publishDate | 2013 |
publisher | Institute of Electrical and Electronics Engineers (IEEE) |
record_format | dspace |
spelling | Record mit-1721.1/80733 (last updated 2022-10-01T21:06:09Z). Departments: Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology). MIT authors: McDuff, Daniel Jonathan; el Kaliouby, Rana; Picard, Rosalind W. Accessioned/available: 2013-09-16T13:12:16Z. Issued: 2013-06. Type: Article (http://purl.org/eprint/type/ConferencePaper). ISBN: 9780769549903. Handle: http://hdl.handle.net/1721.1/80733. Citation: McDuff, Daniel Jonathan; el Kaliouby, Rana; Senechal, Thibaud; Amr, May; Cohn, Jeffrey F.; Picard, Rosalind W. "Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild." Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2013. ORCID: https://orcid.org/0000-0002-5661-0022. Language: en_US. Published in: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). License: Creative Commons Attribution-Noncommercial-Share Alike 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/). File format: application/pdf. Publisher: Institute of Electrical and Electronics Engineers (IEEE). Source: MIT Web Domain. |
title | Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild |
url | http://hdl.handle.net/1721.1/80733 https://orcid.org/0000-0002-5661-0022 |
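The description field above specifies frame-by-frame presence labels for FACS action units, head movements, smiles, and other codes. As a rough illustration of how a researcher might consume such labels, the sketch below assumes the labels are distributed as per-video CSV files with one row per frame and one 0/1 column per coded item; the file name, column names, and CSV layout are assumptions made for illustration, not the dataset's documented schema.

```python
# Minimal sketch of summarizing AM-FED-style frame-by-frame labels.
# ASSUMPTIONS (not the dataset's documented schema): labels come as a CSV with
# one row per frame, a "Time" column, and one 0/1 column per coded item
# (e.g. "AU02", "AU04", "Smile"). Adjust the path and column names to the
# files actually distributed with the dataset.
import csv
from collections import defaultdict


def au_presence_rates(label_csv_path):
    """Return the fraction of frames in which each coded column is present (value > 0)."""
    counts = defaultdict(int)
    total_frames = 0
    with open(label_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total_frames += 1
            for column, value in row.items():
                if column.lower() == "time":
                    continue  # timestamps are not presence labels
                try:
                    if float(value) > 0:
                        counts[column] += 1
                except (TypeError, ValueError):
                    pass  # skip non-numeric cells (e.g. gender labels)
    if total_frames == 0:
        return {}
    return {column: n / total_frames for column, n in counts.items()}


if __name__ == "__main__":
    # Hypothetical file name; the distributed dataset defines its own naming scheme.
    rates = au_presence_rates("video_0001_labels.csv")
    for column, rate in sorted(rates.items()):
        print(f"{column}: present in {rate:.1%} of frames")
```

Per-frame rates like these are one way to sanity-check a download against the published totals (242 videos, 168,359 frames) before training or evaluating a detector on the dataset.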