A Driver’s Visual Attention Prediction Using Optical Flow
Motion in videos refers to the pattern of the apparent movement of objects, surfaces, and edges over image sequences caused by the relative movement between a camera and a scene. Motion, as well as scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, the fact that motion can be a crucial factor in a driver’s attention estimation has not been thoroughly studied in the literature, although driver’s attention prediction models focusing on scene appearance have been well studied. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver’s visual attention. To analyze the effectiveness of motion information, we develop a deep neural network framework that provides attention locations and attention levels using optical flow maps, which represent the movements of contents in videos. We validate the performance of the proposed motion-based prediction model by comparing it to the performance of the current state-of-the-art prediction models using RGB frames. The experimental results for a real-world dataset confirm our hypothesis that motion plays a role in prediction accuracy improvement, and there is a margin for accuracy improvement by using motion features.
Main Authors: | Byeongkeun Kang, Yeejin Lee |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-05-01 |
Series: | Sensors |
Subjects: | visual attention estimation; optical flow; driver’s perception modeling; intelligent vehicle system; convolutional neural networks |
Online Access: | https://www.mdpi.com/1424-8220/21/11/3722 |
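The abstract above describes feeding optical flow maps (per-pixel motion vectors between consecutive frames) into a deep network to predict attention locations and levels. As a self-contained illustration of what a dense flow map is — not the paper's method, and only a toy estimator — the sketch below computes block-wise flow between two synthetic frames by exhaustive block matching in NumPy:

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Toy dense optical flow by exhaustive block matching.

    For each non-overlapping `block` x `block` patch in `prev`, find the
    displacement within +/-`search` pixels that best matches `curr` under
    the sum of absolute differences (SAD). Returns an
    (H//block, W//block, 2) array of integer (dy, dx) vectors.
    """
    H, W = prev.shape
    flow = np.zeros((H // block, W // block, 2), dtype=np.int64)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(np.int64)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue  # candidate patch falls outside the frame
                    cand = curr[yy:yy + block, xx:xx + block].astype(np.int64)
                    sad = np.abs(ref - cand).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            flow[by, bx] = best
    return flow

# Two synthetic 32x32 frames: a bright square that moves 4 px to the right.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[8:16, 8:16] = 255
curr = np.zeros((32, 32), dtype=np.uint8)
curr[8:16, 12:20] = 255

flow = block_matching_flow(prev, curr)
# The block covering the square recovers the (dy, dx) = (0, 4) motion.
print(flow[1, 1])
```

In practice, dense flow for a model like the one described would come from a dedicated estimator (classical variational methods or learned networks), and the resulting two-channel flow map is what a convolutional network would consume in place of the RGB frame.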
_version_ | 1797532497265819648 |
author | Byeongkeun Kang; Yeejin Lee |
author_facet | Byeongkeun Kang; Yeejin Lee |
author_sort | Byeongkeun Kang |
collection | DOAJ |
description | Motion in videos refers to the pattern of the apparent movement of objects, surfaces, and edges over image sequences caused by the relative movement between a camera and a scene. Motion, as well as scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, the fact that motion can be a crucial factor in a driver’s attention estimation has not been thoroughly studied in the literature, although driver’s attention prediction models focusing on scene appearance have been well studied. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver’s visual attention. To analyze the effectiveness of motion information, we develop a deep neural network framework that provides attention locations and attention levels using optical flow maps, which represent the movements of contents in videos. We validate the performance of the proposed motion-based prediction model by comparing it to the performance of the current state-of-the-art prediction models using RGB frames. The experimental results for a real-world dataset confirm our hypothesis that motion plays a role in prediction accuracy improvement, and there is a margin for accuracy improvement by using motion features. |
first_indexed | 2024-03-10T11:00:01Z |
format | Article |
id | doaj.art-f720068f7c724293ad95f3493dea01c6 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T11:00:01Z |
publishDate | 2021-05-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-f720068f7c724293ad95f3493dea01c6 (2023-11-21T21:36:19Z); eng; MDPI AG; Sensors; ISSN 1424-8220; 2021-05-01; vol. 21, iss. 11, art. 3722; doi:10.3390/s21113722; A Driver’s Visual Attention Prediction Using Optical Flow; Byeongkeun Kang (Department of Electronic and IT Media Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea); Yeejin Lee (Department of Electrical and Information Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea); https://www.mdpi.com/1424-8220/21/11/3722; keywords: visual attention estimation; optical flow; driver’s perception modeling; intelligent vehicle system; convolutional neural networks |
spellingShingle | Byeongkeun Kang; Yeejin Lee; A Driver’s Visual Attention Prediction Using Optical Flow; Sensors; visual attention estimation; optical flow; driver’s perception modeling; intelligent vehicle system; convolutional neural networks |
title | A Driver’s Visual Attention Prediction Using Optical Flow |
title_full | A Driver’s Visual Attention Prediction Using Optical Flow |
title_fullStr | A Driver’s Visual Attention Prediction Using Optical Flow |
title_full_unstemmed | A Driver’s Visual Attention Prediction Using Optical Flow |
title_short | A Driver’s Visual Attention Prediction Using Optical Flow |
title_sort | driver s visual attention prediction using optical flow |
topic | visual attention estimation; optical flow; driver’s perception modeling; intelligent vehicle system; convolutional neural networks |
url | https://www.mdpi.com/1424-8220/21/11/3722 |
work_keys_str_mv | AT byeongkeunkang adriversvisualattentionpredictionusingopticalflow AT yeejinlee adriversvisualattentionpredictionusingopticalflow AT byeongkeunkang driversvisualattentionpredictionusingopticalflow AT yeejinlee driversvisualattentionpredictionusingopticalflow |