FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing

We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
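The background-diminishing effect the abstract describes can be thought of as depth-gated blur: pixels whose sensed depth exceeds a threshold are treated as background and blurred, while nearer pixels stay sharp. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function and parameter names (`synthetic_blur`, `threshold`, `kernel`) are our own, and a simple box blur stands in for whatever filter the system actually uses.

```python
def synthetic_blur(image, depth, threshold, kernel=3):
    """Blur pixels whose depth exceeds `threshold` (treated as background),
    leaving nearer pixels sharp. `image` and `depth` are equally sized
    2-D lists of numbers; a box blur stands in for a real Gaussian."""
    h, w = len(image), len(image[0])
    r = kernel // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if depth[y][x] <= threshold:
                # Foreground: keep the original pixel untouched.
                row.append(float(image[y][x]))
                continue
            # Background: average over the kernel window (clipped at edges).
            total, count = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += image[yy][xx]
                        count += 1
            row.append(total / count)
        out.append(row)
    return out

# Toy frame: left half near (value 100), right half far (value 200).
img = [[100, 100, 200, 200] for _ in range(4)]
dep = [[0.5, 0.5, 3.0, 3.0] for _ in range(4)]
out = synthetic_blur(img, dep, threshold=1.5)
```

In a frame like the toy example, the near pixels pass through unchanged while far pixels near the foreground boundary are softened toward their neighbors, which is the visual cue the paper uses to suppress distraction.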


Bibliographic Details
Main Authors: Yao, Lining, DeVincenzi, Anthony, Pereira, Anna, Ishii, Hiroshi
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: en_US
Published: Association for Computing Machinery (ACM) 2014
Online Access: http://hdl.handle.net/1721.1/92275
https://orcid.org/0000-0003-2791-434X
https://orcid.org/0000-0003-4918-8908
_version_ 1811082704471457792
author Yao, Lining
DeVincenzi, Anthony
Pereira, Anna
Ishii, Hiroshi
author2 Massachusetts Institute of Technology. Media Laboratory
author_facet Massachusetts Institute of Technology. Media Laboratory
Yao, Lining
DeVincenzi, Anthony
Pereira, Anna
Ishii, Hiroshi
author_sort Yao, Lining
collection MIT
description We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
first_indexed 2024-09-23T12:07:36Z
format Article
id mit-1721.1/92275
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T12:07:36Z
publishDate 2014
publisher Association for Computing Machinery (ACM)
record_format dspace
spelling mit-1721.1/922752022-09-28T00:18:20Z FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing Yao, Lining DeVincenzi, Anthony Pereira, Anna Ishii, Hiroshi Massachusetts Institute of Technology. Media Laboratory Program in Media Arts and Sciences (Massachusetts Institute of Technology) Yao, Lining DeVincenzi, Anthony Pereira, Anna Ishii, Hiroshi We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing. 2014-12-11T14:49:41Z 2014-12-11T14:49:41Z 2013-07 Article http://purl.org/eprint/type/ConferencePaper 9781450321419 http://hdl.handle.net/1721.1/92275 Lining Yao, Anthony DeVincenzi, Anna Pereira, and Hiroshi Ishii. 2013. FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing. In Proceedings of the 1st symposium on Spatial user interaction (SUI '13). ACM, New York, NY, USA, 73-76. https://orcid.org/0000-0003-2791-434X https://orcid.org/0000-0003-4918-8908 en_US http://dx.doi.org/10.1145/2491367.2491377 Proceedings of the 1st symposium on Spatial user interaction (SUI '13) Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/ application/pdf Association for Computing Machinery (ACM) MIT web domain
spellingShingle Yao, Lining
DeVincenzi, Anthony
Pereira, Anna
Ishii, Hiroshi
FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
title FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
title_full FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
title_fullStr FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
title_full_unstemmed FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
title_short FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
title_sort focalspace multimodal activity tracking synthetic blur and adaptive presentation for video conferencing
url http://hdl.handle.net/1721.1/92275
https://orcid.org/0000-0003-2791-434X
https://orcid.org/0000-0003-4918-8908
work_keys_str_mv AT yaolining focalspacemultimodalactivitytrackingsyntheticblurandadaptivepresentationforvideoconferencing
AT devincenzianthony focalspacemultimodalactivitytrackingsyntheticblurandadaptivepresentationforvideoconferencing
AT pereiraanna focalspacemultimodalactivitytrackingsyntheticblurandadaptivepresentationforvideoconferencing
AT ishiihiroshi focalspacemultimodalactivitytrackingsyntheticblurandadaptivepresentationforvideoconferencing