Content-Aware Eye Tracking for Autostereoscopic 3D Display

This study develops an eye tracking method for autostereoscopic three-dimensional (3D) display systems intended for use in diverse environments. An eye tracking-based autostereoscopic 3D display seamlessly provides a low-crosstalk, high-resolution 3D image experience without 3D eyeglasses by overcoming the viewing-position restriction. However, accurate and fast eye position detection and tracking remain challenging owing to varying light conditions, camera control, thick eyeglasses, sunlight reflections on eyeglasses, and limited system resources. This study presents a robust, automated algorithm and the accompanying system for accurate and fast detection and tracking of eye pupil centers in 3D using a single visual camera and near-infrared (NIR) light-emitting diodes (LEDs). The proposed eye tracker consists of eye–nose detection, eye–nose shape keypoint alignment, a tracker checker, and tracking with NIR LED on/off control. Eye–nose detection generates facial subregion boxes containing the eyes and nose, using an Error-Based Learning (EBL) method to select the best-learnt database (DB). After detection, eye–nose shape alignment is performed by the Supervised Descent Method (SDM) with Scale-Invariant Feature Transform (SIFT) features. The aligner is content-aware in the sense that a designated aligner is applied according to image content classification, such as the lighting condition and whether the subject wears eyeglasses. Experiments on real image DBs yield promising eye detection and tracking results, even under challenging conditions.
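
To make the described alignment stage concrete, the sketch below outlines how a content-aware cascade of SDM-style regressors could be dispatched per image class (e.g. normal light vs. low light). It is an illustrative sketch only, not the authors' implementation: SIFT features are replaced by a toy pixel-patch feature so the example is self-contained, the regressors are random placeholders rather than learned ones, classify_content is a stand-in heuristic, and the eye–nose detector, tracker checker, and NIR LED control are omitted.

import numpy as np

def patch_features(image, points, half=4):
    # Stand-in for SIFT: flatten a small pixel patch around each keypoint.
    feats = []
    h, w = image.shape
    for px, py in points.astype(int):
        x0, x1 = np.clip([px - half, px + half], 0, w - 1)
        y0, y1 = np.clip([py - half, py + half], 0, h - 1)
        patch = np.zeros((2 * half, 2 * half))
        patch[:y1 - y0, :x1 - x0] = image[y0:y1, x0:x1]
        feats.append(patch.ravel())
    return np.concatenate(feats)

class SDMAligner:
    # One content-specific cascade of linear regressors: x <- x - (R_k @ phi(x) + b_k).
    def __init__(self, regressors):
        self.regressors = regressors  # list of (R_k, b_k) pairs, learned offline in practice

    def align(self, image, init_points):
        x = init_points.astype(float).ravel()
        for R_k, b_k in self.regressors:
            phi = patch_features(image, x.reshape(-1, 2))  # features at current keypoints
            x = x - (R_k @ phi + b_k)                      # supervised descent update
        return x.reshape(-1, 2)

def classify_content(image):
    # Toy content classifier; the paper's classification also covers eyeglasses, reflections, etc.
    return "low_light" if image.mean() < 60 else "normal"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_points = 11                                   # hypothetical number of eye-nose keypoints
    feat_dim = n_points * (2 * 4) ** 2              # one 8x8 patch per keypoint
    dummy = [(rng.normal(scale=1e-3, size=(2 * n_points, feat_dim)), np.zeros(2 * n_points))
             for _ in range(3)]                     # 3 cascade stages with placeholder weights
    aligners = {"normal": SDMAligner(dummy), "low_light": SDMAligner(dummy)}

    frame = rng.integers(0, 256, size=(480, 640)).astype(float)   # fake grayscale frame
    init = rng.uniform(100.0, 300.0, size=(n_points, 2))          # rough keypoints from detection
    refined = aligners[classify_content(frame)].align(frame, init)
    print(refined.shape)                            # -> (11, 2) refined keypoint coordinates

As the abstract describes, the actual system would train a separate, designated aligner for each content class and use the tracker checker to decide when to fall back to full eye–nose re-detection with NIR LED control.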

Bibliographic Details
Main Authors: Dongwoo Kang, Jingu Heo
Author Affiliation: Multimedia Processing Lab, Samsung Advanced Institute of Technology, Suwon 16678, Korea
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Sensors, Vol. 20, No. 17, Article 4787
ISSN: 1424-8220
DOI: 10.3390/s20174787
Subjects: eye detection; eye tracking; content-aware eye alignment; error reinforcement learning; autostereoscopic three-dimensional display; augmented reality display
Online Access: https://www.mdpi.com/1424-8220/20/17/4787