Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification

The growing interest in Intelligent Vehicles (IV) worldwide has made it possible to operate vehicles with varying degrees of autonomy. Driver Activity Recognition (DAR) is an important area of research that aims to reduce road accidents caused by distracted driving. Past research has placed a strong emphasis on using datasets captured under optimal conditions to achieve high accuracy; models trained on data that lacks variation may not perform as well in real time as they do under the conditions for which they were optimised. Therefore, this report aims to develop a deep learning model with the lowest possible computational cost while maintaining high accuracy. In this research, Convolutional Neural Networks (CNNs), transfer learning and two-stream neural networks are investigated. The single-frame models are trained on a Kaggle dataset and the multi-frame models on Ben Khalifa's dataset. A 2D CNN and a 2D CNN with transfer learning were applied to the single-frame dataset, while a 3D CNN with transfer learning, Two-Stream transfer learning and a Two-Stream CNN were used on the multi-frame dataset. The 2D CNN with transfer learning (ResNet18) performed best on the single-frame dataset, achieving an accuracy of 99.44%. Two-Stream transfer learning with RGB and optical-flow inputs attained an accuracy of 97.16% on the multi-frame dataset. The best-performing models of each type were then compared before the final model was selected for testing with real-world data.

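The single-frame result above (a 2D CNN with transfer learning based on ResNet18) can be illustrated with a minimal sketch. It assumes a PyTorch/torchvision implementation and a hypothetical count of 10 activity classes; the record does not state the framework, the class count, or whether the backbone was frozen or fully fine-tuned.

# Minimal sketch (assumptions: PyTorch/torchvision; NUM_CLASSES is hypothetical).
# A 2D CNN with transfer learning: an ImageNet-pretrained ResNet18 whose final
# fully connected layer is replaced for driver-activity classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical number of in-cabin activity classes

def build_resnet18_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    # Freeze the pretrained backbone (one common option; full fine-tuning is another).
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

if __name__ == "__main__":
    model = build_resnet18_classifier()
    frame = torch.randn(1, 3, 224, 224)   # one RGB frame at the standard ResNet input size
    logits = model(frame)                 # shape: (1, NUM_CLASSES)
    print(logits.shape)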
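The multi-frame result uses a two-stream network over RGB and optical flow. The record does not describe the stream backbones or the fusion rule, so the sketch below assumes two ResNet18 encoders with late fusion by averaging class scores, and a temporal stream that reads a stack of 10 two-channel optical-flow fields; all of these choices are illustrative assumptions, not details taken from the report.

# Minimal sketch (assumptions: PyTorch/torchvision backbones, late score fusion,
# a stack of 10 two-channel optical-flow fields): a two-stream network with an
# RGB (spatial) stream and an optical-flow (temporal) stream.
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamNet(nn.Module):
    def __init__(self, num_classes: int = 10, flow_stack: int = 10):
        super().__init__()
        # Spatial stream: a single RGB frame through an ImageNet-pretrained ResNet18.
        self.rgb_stream = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.rgb_stream.fc = nn.Linear(self.rgb_stream.fc.in_features, num_classes)
        # Temporal stream: stacked optical flow (2 channels per frame pair), so the
        # first convolution is replaced to accept 2 * flow_stack input channels.
        self.flow_stream = models.resnet18(weights=None)
        self.flow_stream.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                                           stride=2, padding=3, bias=False)
        self.flow_stream.fc = nn.Linear(self.flow_stream.fc.in_features, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # Late fusion: average the per-class scores of the two streams.
        return (self.rgb_stream(rgb) + self.flow_stream(flow)) / 2

if __name__ == "__main__":
    net = TwoStreamNet()
    rgb = torch.randn(1, 3, 224, 224)     # one RGB frame
    flow = torch.randn(1, 20, 224, 224)   # 10 stacked (dx, dy) flow fields
    print(net(rgb, flow).shape)           # -> torch.Size([1, 10])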

Bibliographic Details
Main Author: Lim, Cai Yin
Other Authors: Lyu Chen
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/167301
School: School of Mechanical and Aerospace Engineering
Other Author Contact: lyuchen@ntu.edu.sg
Degree: Bachelor of Engineering (Mechanical Engineering)
Project Code: C073
Date Deposited: 2023-05-25
File Format: application/pdf
Citation: Lim, C. Y. (2023). Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167301