End-to-End Deep One-Class Learning for Anomaly Detection in UAV Video Stream

Full Description

Bibliographic Details
Main Authors: Slim Hamdi, Samir Bouindour, Hichem Snoussi, Tian Wang, Mohamed Abid
Format: Article
Language: English
Published: MDPI AG 2021-05-01
Series: Journal of Imaging
Subjects:
Online Access: https://www.mdpi.com/2313-433X/7/5/90
Description
Summary: In recent years, the use of drones for surveillance tasks has been on the rise worldwide. However, in the context of anomaly detection, only normal events are available for the learning process. Therefore, implementing a generative learning method in an unsupervised mode to solve this problem becomes fundamental. In this context, we propose a new end-to-end architecture capable of generating optical flow images from original UAV images and extracting compact spatio-temporal characteristics for anomaly detection purposes. It is designed with a custom loss function defined as the sum of three terms: the reconstruction loss (<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mi>R</mi><mi>l</mi></msub></semantics></math></inline-formula>), the generation loss (<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mi>G</mi><mi>l</mi></msub></semantics></math></inline-formula>), and the compactness loss (<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msub><mi>C</mi><mi>l</mi></msub></semantics></math></inline-formula>), to ensure an efficient classification of the “deep-one” class. In addition, we propose to minimize the effect of UAV motion in video processing by applying background subtraction on optical flow images. We tested our method on the very complex mini-drone video dataset and obtained results surpassing existing techniques’ performance, with an AUC of 85.3.
ISSN:2313-433X
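The summary describes a custom objective built as the sum of three terms, L = R_l + G_l + C_l. The following is a minimal sketch of how such a combined loss could be computed; the concrete formulations used here (mean-squared-error reconstruction and generation terms, a batch-variance compactness term in the style of deep one-class learning) are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    # R_l: mean squared error between the input frame and its reconstruction
    # (assumed formulation).
    return float(np.mean((x - x_hat) ** 2))

def generation_loss(flow_true, flow_gen):
    # G_l: mean squared error between the reference optical-flow image and
    # the one generated from the UAV frame (assumed formulation).
    return float(np.mean((flow_true - flow_gen) ** 2))

def compactness_loss(features):
    # C_l: penalizes the spread of spatio-temporal feature vectors around
    # their batch mean, pushing normal samples into a compact one-class
    # region (assumed formulation).
    mean = features.mean(axis=0, keepdims=True)
    return float(np.mean(np.sum((features - mean) ** 2, axis=1)))

def total_loss(x, x_hat, flow_true, flow_gen, features):
    # L = R_l + G_l + C_l, as stated in the summary.
    return (reconstruction_loss(x, x_hat)
            + generation_loss(flow_true, flow_gen)
            + compactness_loss(features))
```

In practice, each term would typically carry a weighting coefficient tuned on a validation set; they are left unweighted here to mirror the plain sum stated in the summary.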