High-Resolution Neural Network for Driver Visual Attention Prediction
Driving is a task that puts heavy demands on visual information, and thus the human visual system plays a critical role in making proper decisions for safe driving. Understanding a driver’s visual attention and relevant behavior information is a challenging but essential task in advanced driver-assis...
Main Authors: | Byeongkeun Kang, Yeejin Lee |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-04-01 |
Series: | Sensors |
Subjects: | saliency estimation; visual attention estimation; driver perception modeling; intelligent vehicle system; convolutional neural networks |
Online Access: | https://www.mdpi.com/1424-8220/20/7/2030 |
_version_ | 1797571489457766400 |
---|---|
author | Byeongkeun Kang Yeejin Lee |
author_facet | Byeongkeun Kang Yeejin Lee |
author_sort | Byeongkeun Kang |
collection | DOAJ |
description | Driving is a task that puts heavy demands on visual information, and thus the human visual system plays a critical role in making proper decisions for safe driving. Understanding a driver’s visual attention and relevant behavior information is a challenging but essential task in advanced driver-assistance systems (ADAS) and efficient autonomous vehicles (AV). Specifically, robust prediction of a driver’s attention from images could be crucial for assisting intelligent vehicle systems, where a self-driving car must move safely while interacting with the surrounding environment. Thus, in this paper, we investigate a human driver’s visual behavior from a computer vision perspective to estimate the driver’s attention locations in images. First, we show that feature representations at high resolution improve visual attention prediction accuracy and localization performance when fused with features at low resolution. To demonstrate this, we employ a deep convolutional neural network framework that learns and extracts feature representations at multiple resolutions. In particular, the network maintains the feature representation with the highest resolution at the original image resolution. Second, attention prediction tends to be biased toward image centers when neural networks are trained on typical visual attention datasets. To avoid overfitting to this center-biased solution, the network is trained using diverse regions of images. Finally, the experimental results verify that our proposed framework improves the prediction accuracy of a driver’s attention locations. |
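The multi-resolution fusion idea described in the abstract (a branch kept at the original image resolution, fused with a processed low-resolution branch) can be sketched in a few lines. This is a minimal NumPy illustration under our own assumptions, not the authors' network: the per-scale "processing" is an identity stand-in, and the helper names (`avg_pool2x`, `upsample2x`, `fuse_multiresolution`) are ours.

```python
import numpy as np

def avg_pool2x(x):
    """Downsample a (H, W) feature map by 2x average pooling (H, W even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    """Upsample a (H, W) feature map by 2x nearest-neighbor repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fuse_multiresolution(feat):
    """Keep one branch at the original resolution and fuse it with a
    low-resolution branch brought back to full size (element-wise sum)."""
    low = avg_pool2x(feat)      # low-resolution branch
    low_up = upsample2x(low)    # restore original spatial size
    return feat + low_up        # fused map stays at full resolution

feat = np.arange(16, dtype=float).reshape(4, 4)
out = fuse_multiresolution(feat)
print(out.shape)  # (4, 4): localization detail is preserved at full resolution
```

The point of the sketch is only the shape discipline: the fused output retains the input's spatial resolution, which is what the abstract credits for the improved localization performance.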
first_indexed | 2024-03-10T20:40:26Z |
format | Article |
id | doaj.art-cfa6363d5d734e7dbe781707f1796903 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T20:40:26Z |
publishDate | 2020-04-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-cfa6363d5d734e7dbe781707f1796903; 2023-11-19T20:43:01Z; eng; MDPI AG; Sensors; 1424-8220; 2020-04-01; vol. 20, no. 7, art. 2030; doi:10.3390/s20072030; High-Resolution Neural Network for Driver Visual Attention Prediction; Byeongkeun Kang (Department of Electronic and IT Media Engineering, Seoul National University of Science and Technology, Seoul 139-743, Korea); Yeejin Lee (Department of Electrical and Information Engineering, Seoul National University of Science and Technology, Seoul 139-743, Korea); Driving is a task that puts heavy demands on visual information, and thus the human visual system plays a critical role in making proper decisions for safe driving. Understanding a driver’s visual attention and relevant behavior information is a challenging but essential task in advanced driver-assistance systems (ADAS) and efficient autonomous vehicles (AV). Specifically, robust prediction of a driver’s attention from images could be crucial for assisting intelligent vehicle systems, where a self-driving car must move safely while interacting with the surrounding environment. Thus, in this paper, we investigate a human driver’s visual behavior from a computer vision perspective to estimate the driver’s attention locations in images. First, we show that feature representations at high resolution improve visual attention prediction accuracy and localization performance when fused with features at low resolution. To demonstrate this, we employ a deep convolutional neural network framework that learns and extracts feature representations at multiple resolutions. In particular, the network maintains the feature representation with the highest resolution at the original image resolution. Second, attention prediction tends to be biased toward image centers when neural networks are trained on typical visual attention datasets. To avoid overfitting to this center-biased solution, the network is trained using diverse regions of images. Finally, the experimental results verify that our proposed framework improves the prediction accuracy of a driver’s attention locations. https://www.mdpi.com/1424-8220/20/7/2030; saliency estimation; visual attention estimation; driver perception modeling; intelligent vehicle system; convolutional neural networks |
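The abstract's second idea, mitigating center bias by training on diverse regions of images, amounts to sampling crop positions uniformly over the whole image rather than around its center. A minimal stdlib sketch under our own assumptions (the function name `sample_diverse_crop` is ours; the paper's actual sampling scheme may differ):

```python
import random

def sample_diverse_crop(img_w, img_h, crop_w, crop_h, rng=random):
    """Sample a training crop whose offset is uniform over all valid
    positions, so attention targets are not concentrated at the center."""
    x0 = rng.randrange(0, img_w - crop_w + 1)
    y0 = rng.randrange(0, img_h - crop_h + 1)
    return x0, y0, x0 + crop_w, y0 + crop_h

# Example: 32x16 crops from a 100x80 image land anywhere in the frame.
box = sample_diverse_crop(100, 80, 32, 16)
```

Because every valid offset is equally likely, attention peaks near image borders appear in the training crops as often as central ones, which is what discourages the network from collapsing to a center-biased prediction.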
spellingShingle | Byeongkeun Kang; Yeejin Lee; High-Resolution Neural Network for Driver Visual Attention Prediction; Sensors; saliency estimation; visual attention estimation; driver perception modeling; intelligent vehicle system; convolutional neural networks |
title | High-Resolution Neural Network for Driver Visual Attention Prediction |
title_full | High-Resolution Neural Network for Driver Visual Attention Prediction |
title_fullStr | High-Resolution Neural Network for Driver Visual Attention Prediction |
title_full_unstemmed | High-Resolution Neural Network for Driver Visual Attention Prediction |
title_short | High-Resolution Neural Network for Driver Visual Attention Prediction |
title_sort | high resolution neural network for driver visual attention prediction |
topic | saliency estimation; visual attention estimation; driver perception modeling; intelligent vehicle system; convolutional neural networks |
url | https://www.mdpi.com/1424-8220/20/7/2030 |
work_keys_str_mv | AT byeongkeunkang highresolutionneuralnetworkfordrivervisualattentionprediction AT yeejinlee highresolutionneuralnetworkfordrivervisualattentionprediction |