Learning Self-distilled Features for Facial Deepfake Detection Using Visual Foundation Models: General Results and Demographic Analysis

Bibliographic Details
Main Authors: Yan Martins Braz Gurevitz Cunha, Bruno Rocha Gomes, José Matheus C. Boaro, Daniel de Sousa Moraes, Antonio José Grandson Busson, Julio Cesar Duarte, Sérgio Colcher
Format: Article
Language: English
Published: Brazilian Computer Society, 2024-07-01
Series: Journal on Interactive Systems
Online Access: https://journals-sol.sbc.org.br/index.php/jis/article/view/4120
Description
Summary: Modern deepfake techniques produce highly realistic false media content with the potential for spreading harmful information, including fake news and incitements to violence. Deepfake detection methods aim to identify and counteract such content by employing machine learning algorithms, focusing mainly on detecting the presence of manipulation using spatial and temporal features. These methods often utilize Foundation Models trained on extensive unlabeled data through self-supervised approaches. This work extends previous research on deepfake detection, focusing on the effectiveness of these models while also considering biases, particularly concerning age, gender, and ethnicity, for ethical analysis. Experiments with DINOv2, a novel Vision Transformer-based Foundation Model, trained on the diverse Deepfake Detection Challenge Dataset, which encompasses varied lighting conditions, resolutions, and demographic attributes, demonstrated improved deepfake detection when its features are combined with a CNN classifier, with minimal bias towards these demographic characteristics.
ISSN: 2763-7719
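
The abstract describes pairing self-distilled DINOv2 features with a CNN classifier for per-frame deepfake detection. The sketch below is purely illustrative and is not the authors' implementation: it assumes the public torch.hub DINOv2 release (`facebookresearch/dinov2`, `dinov2_vitb14`) and uses a hypothetical small convolutional head as a stand-in for the CNN classifier mentioned in the abstract; face cropping, normalization, and training are omitted.

```python
import torch
import torch.nn as nn

# Frozen DINOv2 ViT-B/14 backbone from the public torch.hub release.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()  # the foundation model stays frozen; only the head would be trained

# Hypothetical CNN head over the spatial grid of patch features (768-dim for ViT-B/14).
cnn_head = nn.Sequential(
    nn.Conv2d(768, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 1),  # single logit: fake vs. real
)

def fake_probability(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, 224, 224) normalized face crops -> per-frame fake probability."""
    with torch.no_grad():
        # Normalized patch tokens: (N, 256, 768) for 224x224 input with patch size 14.
        tokens = backbone.forward_features(frames)["x_norm_patchtokens"]
    grid = tokens.permute(0, 2, 1).reshape(-1, 768, 16, 16)  # 16x16 patch grid
    return torch.sigmoid(cnn_head(grid))  # (N, 1) probabilities
```

Keeping the backbone frozen and training only a lightweight head is one common way to reuse self-supervised features; whether this matches the paper's exact training setup is an assumption here.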