Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays
Gestures Everywhere is a dynamic framework for multimodal sensor fusion, pervasive analytics, and gesture recognition. Our framework aggregates real-time data from approximately 100 sensors, including RFID readers, depth cameras, and RGB cameras, distributed across 30 interactive displays that are …
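The abstract describes a pipeline that aggregates heterogeneous, timestamped sensor streams into one real-time feed. The sketch below illustrates that general idea with a priority-queue hub that merges per-sensor readings in time order; it is a minimal illustration only, and all names (`FusionHub`, `Reading`, `push`, `drain`) are hypothetical, not the framework's actual API.

```python
import heapq
import time
from dataclasses import dataclass, field
from typing import Any


@dataclass(order=True)
class Reading:
    """A single timestamped sample from one sensor (ordering uses timestamp only)."""
    timestamp: float
    sensor_id: str = field(compare=False)
    modality: str = field(compare=False)  # e.g. "rfid", "depth", "rgb"
    payload: Any = field(compare=False)


class FusionHub:
    """Merges per-sensor streams into one time-ordered feed.

    Hypothetical sketch -- not the Gestures Everywhere API.
    """

    def __init__(self) -> None:
        self._queue: list[Reading] = []

    def push(self, reading: Reading) -> None:
        """Accept a reading from any sensor, in any arrival order."""
        heapq.heappush(self._queue, reading)

    def drain(self, horizon: float) -> list[Reading]:
        """Return all readings older than `horizon` seconds, oldest first."""
        cutoff = time.time() - horizon
        out = []
        while self._queue and self._queue[0].timestamp <= cutoff:
            out.append(heapq.heappop(self._queue))
        return out


# Example: fuse an RFID tag read with a depth-camera skeleton event.
hub = FusionHub()
hub.push(Reading(time.time() - 1.0, "rfid-07", "rfid", {"tag": "0xA1B2"}))
hub.push(Reading(time.time() - 0.5, "depth-03", "depth", {"joints": 20}))
for r in hub.drain(horizon=0.1):
    print(f"{r.timestamp:.3f} {r.modality:>5} {r.sensor_id}: {r.payload}")
```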
| | |
|---|---|
| Main Authors: | Gillian, Nicholas Edward; Pfenninger, Sara; Paradiso, Joseph A.; Russell, Spencer Franklin |
| Other Authors: | Massachusetts Institute of Technology. Media Laboratory |
| Format: | Article |
| Language: | en_US |
| Published: | Association for Computing Machinery (ACM), 2014 |
| Online Access: | http://hdl.handle.net/1721.1/92444 https://orcid.org/0000-0001-5664-6036 https://orcid.org/0000-0002-0719-7104 |
Similar Items

- The gesture recognition toolkit
  by: Gillian, Nicholas, et al.
  Published: (2016)
- Dynamic privacy management in pervasive sensor networks
  by: Gong, Nan-wei, et al.
  Published: (2011)
- LUI: a scalable, multimodal gesture- and voice-interface for large displays
  by: Parthiban, Vikraman
  Published: (2020)
- Hypermedia APIs for Sensor Data: A pragmatic approach to the Web of Things
  by: Russell, Spencer Franklin, et al.
  Published: (2016)
- WristFlex: low-power gesture input with wrist-worn pressure sensors
  by: Dementyev, Artem, et al.
  Published: (2016)