Domain adaptation for driver's gaze mapping for different drivers and new environments

Distracted driving is a leading cause of traffic accidents and often arises from a lack of visual attention on the road. To enhance road safety, monitoring a driver's visual attention is crucial. Appearance-based gaze estimation using deep learning and Convolutional Neural Networks (CNNs) has shown promising results, but it faces challenges when applied to different drivers and environments. In this paper, we propose a domain adaptation-based solution for gaze mapping, which aims to accurately estimate a driver's gaze across diverse drivers and new environments. Our method consists of three steps: pre-processing, facial feature extraction, and gaze region classification. We explore two strategies for input feature extraction: one utilizing the full appearance of the driver and the environment, and the other focusing on the driver's face. Through unsupervised domain adaptation, we align the feature distributions of the source and target domains using a conditional Generative Adversarial Network (GAN). We conduct experiments on the Driver Gaze Mapping (DGM) dataset and the Columbia Cave-DB dataset to evaluate the performance of our method. The results demonstrate that our proposed method reduces the gaze mapping error, achieves better performance across different drivers and camera positions, and outperforms existing methods. We achieve an average Strictly Correct Estimation Rate (SCER) accuracy of 81.38% and 93.53% and a Loosely Correct Estimation Rate (LCER) accuracy of 96.69% and 98.9% for the two strategies, respectively, indicating the effectiveness of our approach in adapting to different domains and camera positions. Our study contributes to the advancement of gaze mapping techniques and provides insights for improving driver safety in various driving scenarios.
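
The article itself does not include code, but the pipeline summarized above (CNN feature extraction, gaze region classification, and conditional-GAN alignment of source and target feature distributions) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the backbone, the feature dimension, the assumed nine gaze regions, and conditioning the discriminator by concatenating the classifier's soft predictions are all assumptions introduced here.

```python
# Minimal sketch of conditional adversarial feature alignment for gaze-region
# classification, in the spirit of the method described in the abstract.
# NOT the authors' code: backbone, sizes, region count, and the conditioning
# scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_REGIONS = 9      # assumed number of gaze regions; the DGM dataset defines the real value
FEATURE_DIM = 128    # assumed feature size

class FeatureExtractor(nn.Module):
    """Small CNN stand-in for the facial feature extraction step."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, FEATURE_DIM)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class GazeClassifier(nn.Module):
    """Maps features to gaze-region logits."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEATURE_DIM, NUM_REGIONS)

    def forward(self, f):
        return self.fc(f)

class ConditionalDiscriminator(nn.Module):
    """Domain discriminator conditioned on the classifier's soft predictions
    (one common way to realize the 'conditional' part of a conditional GAN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM + NUM_REGIONS, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, f, probs):
        return self.net(torch.cat([f, probs], dim=1))

def adaptation_losses(extractor, classifier, discriminator, x_src, y_src, x_tgt):
    """Source classification loss plus adversarial losses that pull the source
    and target feature distributions together (unsupervised: no target labels)."""
    f_src, f_tgt = extractor(x_src), extractor(x_tgt)
    logits_src = classifier(f_src)
    cls_loss = F.cross_entropy(logits_src, y_src)

    p_src = logits_src.softmax(dim=1).detach()
    p_tgt = classifier(f_tgt).softmax(dim=1).detach()
    d_src = discriminator(f_src.detach(), p_src)
    d_tgt = discriminator(f_tgt.detach(), p_tgt)
    # Discriminator learns to tell source (label 1) from target (label 0) features ...
    disc_loss = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
                 F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    # ... while the extractor is trained to make target features look like source ones.
    gen_loss = F.binary_cross_entropy_with_logits(
        discriminator(f_tgt, p_tgt), torch.ones_like(d_tgt))
    return cls_loss, disc_loss, gen_loss

if __name__ == "__main__":
    ext, clf, disc = FeatureExtractor(), GazeClassifier(), ConditionalDiscriminator()
    x_s, y_s = torch.randn(4, 3, 64, 64), torch.randint(0, NUM_REGIONS, (4,))
    x_t = torch.randn(4, 3, 64, 64)
    print([l.item() for l in adaptation_losses(ext, clf, disc, x_s, y_s, x_t)])
```

In an actual training loop, the discriminator and the feature extractor/classifier would be updated alternately, and performance on held-out drivers or camera positions would then be reported with the SCER and LCER metrics quoted in the abstract.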


Bibliographic Details
Main Authors: Ulziibayar Sonom-Ochir, Stephen Karungaru, Kenji Terada, Altangerel Ayush
Affiliations: Department of Information Science and Intelligent Systems, Tokushima University (Sonom-Ochir, Karungaru, Terada); Department of Information Technology, Mongolian University of Science and Technology (Ayush)
Format: Article
Language: English
Published: Universitas Ahmad Dahlan, 2024-02-01
Series: IJAIN (International Journal of Advances in Intelligent Informatics), Vol. 10, No. 1, pp. 94-108
ISSN: 2442-6571, 2548-3161
DOI: 10.26555/ijain.v10i1.1168
Subjects: gaze mapping, domain adaptation, visual attention, gaze regions
Online Access: http://ijain.org/index.php/IJAIN/article/view/1168