Gaze-Based Interaction Intention Recognition in Virtual Reality

Bibliographic Details
Main Authors: Xiao-Lin Chen, Wen-Jun Hou
Format: Article
Language: English
Published: MDPI AG, 2022-05-01
Series: Electronics
Subjects:
Online Access: https://www.mdpi.com/2079-9292/11/10/1647
Description
Summary: With the increasing need for eye tracking in head-mounted virtual reality displays, the gaze-based modality has the potential to predict user intention and unlock intuitive new interaction schemes. In the present work, we explore whether gaze-based data and hand-eye coordination data can predict a user's interaction intention with the digital world, which could be used to develop predictive interfaces. We validate this approach on eye-tracking data collected from 10 participants performing item selection and teleporting tasks in virtual reality. We demonstrate successful prediction of the onset of item selection and teleporting with a 0.943 F1-score using a Gradient Boosting Decision Tree, the best among the four classifiers compared, while the Support Vector Machine has the smallest model size. We also show that hand-eye-coordination-related features can improve interaction intention recognition in virtual reality environments.
ISSN:2079-9292
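
As a rough, self-contained sketch of the kind of setup the summary describes (not the authors' actual pipeline; the feature names, synthetic data, and model settings below are assumptions for illustration), the following Python snippet frames intention-onset recognition as binary classification and compares a Gradient Boosting Decision Tree against a Support Vector Machine on F1-score:

# Minimal sketch, assuming scikit-learn; the features and data are
# synthetic stand-ins, not the authors' dataset or pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-window features: fixation duration, saccade amplitude,
# pupil diameter, and a hand-eye coordination measure (gaze-hand offset).
X = rng.normal(size=(n, 4))
# Synthetic binary labels: 1 = interaction intention onset, 0 = none.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for name, clf in [("GBDT", GradientBoostingClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_train, y_train)
    print(name, "F1-score:", round(f1_score(y_test, clf.predict(X_test)), 3))

On real data, model size could also be compared (for example, by serializing each fitted estimator and measuring its byte size), which is the dimension on which the summary notes the Support Vector Machine's advantage.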