Video analytics based on deep learning and information fusion technologies

Bibliographic Details
Main Author: Lee, Zheng Han
Other Authors: Mao Kezhi
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access: https://hdl.handle.net/10356/139262
Description
Summary: In recent years, video analytics has become a popular topic in the field of Artificial Intelligence. With advances in high-speed connectivity, machine learning algorithms and IoT technologies, applications of video analytics that use multiple modalities and information fusion technologies are becoming commonplace in the Information Age and will remain so in the future. Most previous studies on this topic focused on pushing the boundaries of algorithms for information fusion applications, such as the audio-visual correspondence (AVC) task and video-scene segmentation. This study aims to explore the optimization of video analytics based on information fusion technologies, using a C3D-based action recognition function as the benchmark for video analytics performance. By scrutinizing and testing the mechanisms and architectures of the C3D-based action recognition model, the best-performing elements and the reasons behind their performance are explored. The types of pooling, optimizer and scheduler, together with their respective accuracies on the dataset used, are recorded. Different methods of fusing audio-visual information, and of introducing the fused information into the action recognition model, are explored; their implementations and respective accuracies are studied to gain insight into how they affect the model's performance. Feature extraction methods for the audio modality and their respective performance are also studied. Different self-attention mechanisms involving the modalities and channels are implemented in the model and the resulting accuracies studied. These explorations provide an understanding of how such design choices affect the performance of video analytics based on information fusion, and thereby help to unleash its full potential.
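
To make the fusion and attention ideas in the summary concrete, the sketch below shows one way an audio-visual action recognition model of this kind could be assembled. It is a minimal illustration assuming a PyTorch implementation: the module names (C3DVisualBranch, AudioBranch, FusionActionModel), layer sizes, the pooled audio feature input, and the attention-weighted late fusion are assumptions made for illustration and do not reproduce the architecture studied in the thesis.

import torch
import torch.nn as nn

class C3DVisualBranch(nn.Module):
    # Toy stand-in for a C3D-style visual feature extractor (sizes are illustrative).
    def __init__(self, out_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # pool over time, height and width
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, clip):                  # clip: (B, 3, T, H, W)
        return self.fc(self.conv(clip).flatten(1))

class AudioBranch(nn.Module):
    # Toy audio branch over a precomputed feature vector (e.g. pooled spectrogram features).
    def __init__(self, in_dim=128, out_dim=512):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(inplace=True))

    def forward(self, audio_feat):            # audio_feat: (B, in_dim)
        return self.fc(audio_feat)

class FusionActionModel(nn.Module):
    # Late fusion of the two modality embeddings with a simple modality-level attention.
    def __init__(self, num_classes=101, dim=512):
        super().__init__()
        self.visual = C3DVisualBranch(dim)
        self.audio = AudioBranch(128, dim)
        self.attn = nn.Linear(dim, 1)          # scores each modality embedding
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, clip, audio_feat):
        v = self.visual(clip)                             # (B, dim)
        a = self.audio(audio_feat)                        # (B, dim)
        feats = torch.stack([v, a], dim=1)                # (B, 2, dim)
        weights = torch.softmax(self.attn(feats), dim=1)  # attention over modalities
        fused = (weights * feats).sum(dim=1)              # weighted-sum fusion
        return self.classifier(fused)

if __name__ == "__main__":
    model = FusionActionModel()
    clip = torch.randn(2, 3, 16, 112, 112)    # two 16-frame RGB clips
    audio = torch.randn(2, 128)               # two audio feature vectors
    print(model(clip, audio).shape)           # torch.Size([2, 101])

Replacing the attention-weighted sum with plain concatenation, or the average pooling with max pooling, illustrates the kind of variations whose accuracies a study like this compares.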