No matter small or big lip motion: DeepFake detection with regularized feature learning on semantic information


Bibliographic Details
Main Author: Yang, Zhiyuan
Other Authors: Wen Bihan
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access:https://hdl.handle.net/10356/178711
Description
Summary: The use of DeepFake technologies to create hyper-realistic faces has sparked serious security concerns. Recent advances in DeepFake detection have shown promise in generalizing to unseen manipulation methods by identifying high-level semantic irregularities. However, the extracted features are not always robust: sample variations, such as differing motion magnitudes, can easily degrade the feature-vector representations of the underlying semantic information. In this work, we propose DTNet, a novel deep method that further regularizes feature learning toward more robust DeepFake detection. Specifically, DTNet combines Deviation Regularization, which penalizes samples with deviated motion magnitudes in the loss function, and Temporal Continuity Preservation, which preserves and learns temporal-continuity patterns in the feature space regardless of motion magnitude. Experimental results show that our method effectively mitigates the impact of motion magnitude on the learned feature vectors, thereby improving generalization.
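
The abstract does not give the exact formulations, so the following is only a minimal sketch of how the two regularizers it describes could plausibly be attached to a standard classification loss. All function names, the use of a per-clip motion-magnitude scalar, and the weighting scheme (lambda_dev, lambda_tc) are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of a loss combining classification with the two
# regularizers named in the abstract; formulations are assumed, not from the thesis.
import torch
import torch.nn.functional as F


def deviation_regularization(clip_feats: torch.Tensor, motion_mag: torch.Tensor) -> torch.Tensor:
    """Penalize feature drift for clips whose motion magnitude deviates
    from the batch average (assumed formulation)."""
    # clip_feats: (B, D) clip-level semantic features; motion_mag: (B,) motion magnitudes
    deviation = (motion_mag - motion_mag.mean()).abs()            # how atypical each clip's motion is
    feat_drift = (clip_feats - clip_feats.mean(dim=0)).norm(dim=1)  # distance to the batch feature centroid
    return (deviation * feat_drift).mean()


def temporal_continuity_loss(frame_feats: torch.Tensor) -> torch.Tensor:
    """Encourage smooth feature trajectories across consecutive frames
    (assumed formulation)."""
    # frame_feats: (B, T, D) per-frame features
    return (frame_feats[:, 1:] - frame_feats[:, :-1]).pow(2).mean()


def total_loss(logits, labels, clip_feats, frame_feats, motion_mag,
               lambda_dev: float = 0.1, lambda_tc: float = 0.1) -> torch.Tensor:
    """Classification loss plus the two regularization terms."""
    cls = F.cross_entropy(logits, labels)
    return (cls
            + lambda_dev * deviation_regularization(clip_feats, motion_mag)
            + lambda_tc * temporal_continuity_loss(frame_feats))
```

In this reading, the deviation term discourages the network from letting atypical motion magnitudes pull a clip's features away from the batch-level semantic representation, while the continuity term keeps per-frame features temporally smooth regardless of how large the lip motion is.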