Deepfake detection

Bibliographic Details
Main Author: Wang, Ying
Other Authors: Chen Change Loy
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/166055
Description
Summary: With the rapid development of synthetic image generation and manipulation, face manipulation has advanced dramatically: there are now many automated ways to alter faces and convey misleading information about target identities, replacing the tedious, manual face-editing processes of the past. As a result, new applications keep emerging, for instance manipulating the facial region of an image to generate a new one, i.e., changing the identity or modifying facial attributes. Humans have long been interested in studying faces, so this is a well-examined field with many entertainment applications, such as face-replacement technology that swaps a user's face into a movie clip, or expression re-enactment that animates a static portrait of a famous person. However, current face forgery technology is still developing rapidly, and the realism and naturalness of its results still need improvement. At the same time, such technology is easily misused by criminals to produce pornographic videos and fake news, and even to spread political rumours involving political figures, posing a serious threat to national security and social stability.

Detection methods are therefore needed to determine whether a video or image has been manipulated. This problem is not easy in the real world, because we often must examine a face without knowing how the image was manipulated. Given the rapid emergence of new face forgery methods and different kinds of perturbations, the key real-world challenge of this binary classification problem is generalizability: fake images with unknown patterns easily cause existing approaches to fail.

In this project, I examined several state-of-the-art methods on popular datasets to detect prominent representative facial manipulations, i.e., DeepFakes, Face2Face, FaceSwap, and NeuralTextures. In particular, I first reproduced a baseline method in the deepfake detection field, Xception. I then studied a more advanced method, the REConstruction-Classification lEarning framework (RECCE), by understanding its underlying theory. Based on the empirical results obtained from these experiments, I performed a thorough analysis and tried additional datasets for comparison and to achieve better model performance.

Keywords: Deepfake Detection, Generalizability, Facial Manipulation Method, Face Forgery
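
To make the detection setup described above concrete, the following is a minimal PyTorch sketch of the reconstruction-classification idea that RECCE builds on: a shared encoder feeds a binary real/fake classifier, while a small decoder learns to reconstruct real faces so that forged faces tend to produce larger reconstruction errors. The backbone (torchvision's resnet18 rather than the Xception backbone typically used), the decoder layout, and the loss weight are illustrative assumptions, not the code or configuration used in this project.

```python
# Illustrative sketch only -- backbone, decoder layout and loss weight are
# assumptions; RECCE itself uses an Xception backbone and a more elaborate design.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class ReconstructionClassifier(nn.Module):
    """Encoder shared by a real/fake classifier and a face-reconstruction decoder."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        # Keep everything up to the final feature map: (B, 512, 7, 7) for 224x224 inputs.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 2)
        )
        # Tiny decoder that upsamples the 7x7 feature map back to a 224x224 RGB image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 128, kernel_size=4, stride=4),  # 7 -> 28
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 32, kernel_size=4, stride=4),   # 28 -> 112
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=2, stride=2),     # 112 -> 224
            nn.Sigmoid(),                                            # pixel values in [0, 1]
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.classifier(feats), self.decoder(feats)

def loss_fn(logits, recon, images, labels, recon_weight=0.1):
    """Cross-entropy on real/fake logits plus an L1 reconstruction loss computed
    on real faces only (labels: 0 = real, 1 = fake; images scaled to [0, 1])."""
    cls_loss = F.cross_entropy(logits, labels)
    real = labels == 0
    rec_loss = F.l1_loss(recon[real], images[real]) if real.any() else recon.new_zeros(())
    return cls_loss + recon_weight * rec_loss
```

The classification head trained with cross-entropy alone corresponds roughly to an Xception-style baseline; the reconstruction branch reflects the reconstruction-classification idea that RECCE uses to improve generalization to unseen manipulations.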