Crowdsourcing facial responses to online videos: Extended abstract


Bibliographic Details
Main Authors: McDuff, Daniel, el Kaliouby, Rana, Picard, Rosalind W.
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: English (en_US)
Published: Institute of Electrical and Electronics Engineers (IEEE) 2017
Online Access: http://hdl.handle.net/1721.1/110774
https://orcid.org/0000-0002-5661-0022
Description: Traditional observational research methods required an experimenter's presence in order to record videos of participants, and limited the scalability of data collection to typically less than a few hundred people in a single location. In order to make a significant leap forward in affective expression data collection and the insights based on it, our work has created and validated a novel framework for collecting and analyzing facial responses over the Internet. The first experiment using this framework enabled 3,268 trackable face videos to be collected and analyzed in under two months. Each participant viewed one or more commercials while their facial response was recorded and analyzed. Our data showed significantly different intensity and dynamics patterns of smile responses between subgroups who reported liking the commercials versus those who did not. Since this framework appeared in 2011, we have collected over three million videos of facial responses in over 75 countries using this same methodology, enabling facial analytics to become significantly more accurate and validated across five continents. Many new insights have been discovered based on crowd-sourced facial data, enabling Internet-based measurement of facial responses to become reliable and proven. We are now able to provide large-scale evidence for gender, cultural, and age differences in behaviors. Today such methods are used as part of standard practice in industry for copy-testing advertisements and are increasingly used for online media evaluations, distance learning, and mobile applications.
Department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Citation: McDuff, Daniel, Rana el Kaliouby, and Rosalind W. Picard. "Crowdsourcing Facial Responses to Online Videos: Extended Abstract." 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 21-24 September 2015. IEEE, 2015. 512-518.
DOI: http://dx.doi.org/10.1109/ACII.2015.7344618
ISBN: 978-1-4799-9953-8
Terms of use: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)