HearThere: Networked Sensory Prosthetics Through Auditory Augmented Reality


Bibliographic Details
Main Authors: Russell, Spencer, Dublon, Gershon, Paradiso, Joseph A.
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: English
Published: ACM 2021
Online Access: https://hdl.handle.net/1721.1/137076
description © 2016 ACM. In this paper we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone conduction headphones to achieve seamless mixing of real and virtual sounds. To scale between indoor, urban, and natural environments, our system supports multi-scale location tracking, using fine-grained (20 cm) ultra-wideband (UWB) radio tracking when in range of our infrastructure anchors and mobile GPS otherwise. In our tests, users were able to navigate through an AAR scene and pinpoint audio source locations to within 1 m. We found that bone conduction is a viable technology for producing realistic spatial sound, and show that users' audio localization ability is considerably better in UWB coverage zones than with GPS alone. HearThere is a major step towards realizing our vision of networked sensory prosthetics, in which sensor networks serve as collective sensory extensions into the world around us. In our vision, AAR would be used to mix spatialized data sonification with distributed, livestreaming microphones. In this concept, HearThere promises a more expansive perceptual world, or umwelt, where sensor data becomes immediately attributable to extrinsic phenomena, externalized in the wearer's perception. We are motivated by two goals: first, to remedy a fractured state of attention caused by existing mobile and wearable technologies; and second, to bring the distant or often invisible processes underpinning a complex natural environment more directly into human consciousness.
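The description names two mechanisms: multi-scale location tracking (fine-grained UWB when anchors are in range, mobile GPS otherwise) and head-tracked spatialization, where a virtual source's direction relative to the listener's head orientation drives the audio rendering. A minimal illustrative sketch of both steps follows; this is not the authors' implementation, and all names (`Fix`, `select_fix`, `relative_azimuth`), the prefer-UWB rule, and the accuracy figures in the example are assumptions for illustration.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    x: float          # position in a local frame, metres
    y: float
    accuracy: float   # estimated error radius, metres
    source: str       # "uwb" or "gps"

def select_fix(uwb: Optional[Fix], gps: Optional[Fix]) -> Optional[Fix]:
    """Prefer a fine-grained UWB fix (anchor coverage); else fall back to GPS."""
    return uwb if uwb is not None else gps

def relative_azimuth(lx: float, ly: float, yaw: float,
                     sx: float, sy: float) -> float:
    """Bearing of a source at (sx, sy) relative to a listener at (lx, ly)
    whose head yaw is measured clockwise from +y, in radians.
    Result is wrapped to [-pi, pi): 0 = straight ahead, negative = left."""
    bearing = math.atan2(sx - lx, sy - ly)          # clockwise from +y
    return ((bearing - yaw + math.pi) % (2 * math.pi)) - math.pi

# Example: inside a UWB coverage zone, the 20 cm fix wins over the ~5 m GPS fix.
uwb_fix = Fix(3.2, 1.4, accuracy=0.2, source="uwb")
gps_fix = Fix(3.0, 2.0, accuracy=5.0, source="gps")
best = select_fix(uwb_fix, gps_fix)
# A source one metre "north" while the head faces "east" lies hard left.
az = relative_azimuth(best.x, best.y, math.pi / 2, best.x, best.y + 1.0)
```

The fallback keeps the audio scene stable across coverage boundaries: only the position estimate's precision changes, not the rendering pipeline, which matches the paper's finding that localization is simply better inside UWB zones.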
Type: Conference paper
Date Issued: 2016-02-25
Citation: Russell, Spencer, Dublon, Gershon and Paradiso, Joseph A. 2016. "HearThere: Networked Sensory Prosthetics Through Auditory Augmented Reality."
DOI: 10.1145/2875194.2875247
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)