Multi-modal deception detection in videos

Deception detection is significant because of its many real-world applications. This project focuses on the verbal and visual modalities for detecting deception in videos. Experiments were conducted on the most widely used dataset, Real-Life Trial. Text information, visual information, and multimodal cues were each considered for detecting deception, and both machine learning and deep learning methods were applied to obtain the best performance on this task. For verbal feature extraction, TF-IDF, N-grams, and LIWC were used to transform the transcripts into vectors, which were then classified with SVM, Naïve Bayes, Random Forest, and RNN models. For visual feature extraction, facial action and gaze-direction features were extracted with OpenFace and classified with an SVM; a hybrid classification model based on CNN and GRU neural networks was also used. For the multimodal method, the features from the two modalities were concatenated after extraction and fed into an SVM for classification. The experimental results suggest that the hybrid CNN-GRU model performs best among all methods when only one modality is used, while the SVM outperforms the other models when both modalities are combined.
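The verbal pipeline and the feature-level fusion described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the toy transcripts, the visual feature values, and the function names are all hypothetical, and the real pipeline additionally uses N-grams, LIWC, and OpenFace outputs on the Real-Life Trial data.

```python
import math
from collections import Counter

def tfidf_vectors(transcripts):
    """TF-IDF weighting for a small corpus of interview transcripts.

    Returns one {term: weight} dict per transcript. Sketches only the
    TF-IDF part of the verbal feature extraction described above.
    """
    tokenized = [t.lower().split() for t in transcripts]
    n = len(tokenized)
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({term: (count / len(toks)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return vectors

def fuse(verbal, visual):
    """Feature-level fusion: concatenate verbal and visual vectors so a
    single classifier (an SVM in the project) sees both modalities."""
    return list(verbal) + list(visual)

# Toy transcripts (hypothetical, not from the Real-Life Trial dataset).
transcripts = ["i did not take the money", "i took the money yesterday"]
verbal_vecs = tfidf_vectors(transcripts)

# Hypothetical per-video visual features (e.g. action-unit intensities
# and gaze angles of the kind OpenFace extracts, here pre-aggregated).
visual_vec = [0.8, 0.1, 0.4]

# Densify the first transcript's vector over the corpus vocabulary,
# then concatenate it with the visual features for the multimodal SVM.
vocab = sorted({t for v in verbal_vecs for t in v})
dense = [verbal_vecs[0].get(t, 0.0) for t in vocab]
joint = fuse(dense, visual_vec)
```

Note that terms occurring in every transcript receive weight zero (their inverse document frequency is log(1) = 0), which is why TF-IDF downweights filler words shared across all speakers.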


Bibliographic Details
Main Author: Jin, Yibing
Other Authors: Alex Chichung Kot
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/158188
School: School of Electrical and Electronic Engineering
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Citation: Jin, Y. (2022). Multi-modal deception detection in videos. Final Year Project (FYP), Nanyang Technological University, Singapore.