T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation

Bibliographic Details
Main Authors: Perlin, Ken, Lakatos, David, Blackshaw, Matthew Andrew, Okot-Olwal, Alex Wilson Alphonse, Barryte, Zachary B., Ishii, Hiroshi
Other Authors: Massachusetts Institute of Technology. Media Laboratory. Tangible Media Group
Format: Article
Language: en_US
Published: 2017
Online Access: http://hdl.handle.net/1721.1/109399
https://orcid.org/0000-0003-0520-4638
https://orcid.org/0000-0003-4918-8908
Description
Summary: T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.
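
The summary describes the UI adapting to whether the tracked hand is above, behind, or on the surface of the handheld display. As an illustration only, the sketch below shows one way such a classification could be computed from tracked 3D positions; the function name, the surface threshold, and the coordinate conventions are assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' code): classify a tracked hand
# relative to the handheld display plane, in the spirit of T(ether)'s
# above / on / behind distinction for selecting an interaction mode.

SURFACE_EPSILON_M = 0.02  # hypothetical 2 cm band treated as touching the screen

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify_hand_region(hand_pos, display_center, display_normal):
    """Return 'above', 'on', or 'behind' for a hand position in the shared
    tracking frame. display_normal is assumed to be a unit vector pointing
    from the screen toward the user's head."""
    offset = [h - c for h, c in zip(hand_pos, display_center)]
    signed_dist = _dot(offset, display_normal)
    if abs(signed_dist) <= SURFACE_EPSILON_M:
        return "on"                                   # touch-screen gestures
    return "above" if signed_dist > 0 else "behind"   # in front of vs. behind the window

# Example: a hand 15 cm behind a display whose screen faces +Z toward the user.
print(classify_hand_region([0.0, 0.0, -0.15], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
# -> behind
```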