Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays

Gestures Everywhere is a dynamic framework for multimodal sensor fusion, pervasive analytics and gesture recognition. Our framework aggregates the real-time data from approximately 100 sensors that include RFID readers, depth cameras and RGB cameras distributed across 30 interactive displays that are located in key public areas of the MIT Media Lab. Gestures Everywhere fuses the multimodal sensor data using radial basis function particle filters and performs real-time analysis on the aggregated data. This includes key spatio-temporal properties such as presence, location and identity, in addition to higher-level analysis including social clustering and gesture recognition. We describe the algorithms and architecture of our system and discuss the lessons learned from the system's deployment.
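The abstract names radial basis function particle filters as the fusion algorithm. As an illustration only, not the paper's implementation, here is a minimal Python sketch of the general technique: a particle filter that weights particles with a Gaussian RBF kernel and fuses two noisy position estimates (standing in for, say, a depth-camera and an RFID reading). All function names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def rbf_weight(particles, observation, gamma=2.0):
    """Weight each particle by a radial basis function (Gaussian kernel)
    of its squared distance to a sensor observation."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    return np.exp(-gamma * d2)

def particle_filter_step(particles, observations, motion_std=0.1):
    """One predict/update/resample cycle fusing several noisy
    observations of the same 2D position."""
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: multiply RBF likelihoods contributed by each sensor.
    weights = np.ones(len(particles))
    for obs in observations:
        weights *= rbf_weight(particles, obs)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy run: true position (1.0, 2.0); two sensors with different noise levels.
particles = rng.uniform(-5, 5, size=(500, 2))
true_pos = np.array([1.0, 2.0])
for _ in range(20):
    obs = [true_pos + rng.normal(0, 0.2, 2),   # precise sensor (e.g. depth camera)
           true_pos + rng.normal(0, 0.5, 2)]   # coarse sensor (e.g. RFID zone)
    particles = particle_filter_step(particles, obs)

estimate = particles.mean(axis=0)  # fused position estimate
```

Multiplying per-sensor RBF likelihoods is one common way to fuse heterogeneous observations; the deployed system's actual motion model, kernel parameters, and sensor models are described in the paper itself.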


Bibliographic Details
Main Authors: Gillian, Nicholas Edward, Pfenninger, Sara, Paradiso, Joseph A., Russell, Spencer Franklin
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: en_US
Published: Association for Computing Machinery (ACM), 2014
Online Access: http://hdl.handle.net/1721.1/92444
ORCID: https://orcid.org/0000-0001-5664-6036
ORCID: https://orcid.org/0000-0002-0719-7104
Published in: Proceedings of The International Symposium on Pervasive Displays (PerDis '14)
Type: Conference Paper
ISBN: 9781450329521
DOI: http://dx.doi.org/10.1145/2611009.2611032
Departments: Massachusetts Institute of Technology. Media Laboratory; Responsive Environments Group; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Citation: Nicholas Gillian, Sara Pfenninger, Spencer Russell, and Joseph A. Paradiso. 2014. Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays. In Proceedings of The International Symposium on Pervasive Displays (PerDis '14), Sven Gehring (Ed.). ACM, New York, NY, USA, Pages 98, 6 pages.
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)