Digital Forensic Analysis of Vehicular Video Sensors: Dashcams as a Case
Dashcams are considered video sensors, and the number of dashcams installed in vehicles is increasing. Native dashcam video players can be used to view evidence during investigations, but these players are not accepted in court and cannot be used to extract metadata. Digital forensic tools, such as...
Main Authors: | Yousef-Awwad Daraghmi; Ibrahim Shawahna |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-08-01 |
Series: | Sensors |
Subjects: | digital forensics; dashcams; video artifacts |
Online Access: | https://www.mdpi.com/1424-8220/23/17/7548 |
author | Yousef-Awwad Daraghmi; Ibrahim Shawahna |
collection | DOAJ |
description | Dashcams are considered video sensors, and the number of dashcams installed in vehicles is increasing. Native dashcam video players can be used to view evidence during investigations, but these players are not accepted in court and cannot be used to extract metadata. Digital forensic tools, such as FTK, Autopsy and Encase, are specifically designed for functions and scripts and do not perform well in extracting metadata. Therefore, this paper proposes a dashcam forensics framework for extracting evidential text including time, date, speed, GPS coordinates and speed units using accurate optical character recognition methods. The framework also transcribes evidential speech related to lane departure and collision warning for enabling automatic analysis. The proposed framework associates the spatial and temporal evidential data with a map, enabling investigators to review the evidence along the vehicle’s trip. The framework was evaluated using real-life videos, and different optical character recognition (OCR) methods and speech-to-text conversion methods were tested. This paper identifies that Tesseract is the most accurate OCR method that can be used to extract text from dashcam videos. Also, the Google speech-to-text API is the most accurate, while Mozilla’s DeepSpeech is more acceptable because it works offline. The framework was compared with other digital forensic tools, such as Belkasoft, and the framework was found to be more effective as it allows automatic analysis of dashcam evidence and generates digital forensic reports associated with a map displaying the evidence along the trip. |
first_indexed | 2024-03-10T23:12:29Z |
format | Article |
id | doaj.art-8cdefec33b654387a1305c9b64203cc6 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T23:12:29Z |
publishDate | 2023-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
doi | 10.3390/s23177548 |
citation | Sensors, vol. 23, no. 17, article no. 7548, 2023-08-01 |
affiliation_0 | Yousef-Awwad Daraghmi: Computer Systems Engineering Department, Palestine Technical University - Kadoorie, Tulkarem P305, Palestine |
affiliation_1 | Ibrahim Shawahna: Service Delivery Department, ASAL Technologies LLC., Rawabi P666, Palestine |
title | Digital Forensic Analysis of Vehicular Video Sensors: Dashcams as a Case |
topic | digital forensics; dashcams; video artifacts |
url | https://www.mdpi.com/1424-8220/23/17/7548 |
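The abstract describes extracting evidential text (date, time, speed, speed unit, GPS coordinates) from on-screen dashcam overlays via OCR. As a minimal illustrative sketch only — not the authors' implementation — the post-OCR parsing step might look like the following, assuming a hypothetical overlay layout such as `2023-08-01 14:30:05 65KM/H N32.3104 E35.0286` (real overlay formats vary by dashcam model):

```python
import re

# Hypothetical overlay pattern (assumption for illustration); a real
# framework would need one pattern per supported dashcam model.
OVERLAY_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+"      # e.g. 2023-08-01
    r"(?P<time>\d{2}:\d{2}:\d{2})\s+"      # e.g. 14:30:05
    r"(?P<speed>\d+)\s*(?P<unit>KM/H|MPH)\s+"
    r"(?P<lat>[NS]\d+\.\d+)\s+(?P<lon>[EW]\d+\.\d+)"
)

def parse_overlay(text):
    """Parse one OCR'd overlay line into named evidential fields.

    Returns a dict of fields, or None when the line does not match
    the assumed overlay layout (e.g. an OCR misread).
    """
    m = OVERLAY_RE.search(text)
    return m.groupdict() if m else None

fields = parse_overlay("2023-08-01 14:30:05 65KM/H N32.3104 E35.0286")
```

Records parsed this way could then be placed on a map by timestamp and coordinate, as the framework's trip-review feature describes.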