Fooled twice: People cannot detect deepfakes but think they can
Summary: Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot relia...
Main Authors: | Nils C. Köbis, Barbora Doležalová, Ivan Soraperra |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2021-11-01 |
Series: | iScience |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2589004221013353 |
Similar Items
- CNNs reveal the computational implausibility of the expertise hypothesis
  by: Nancy Kanwisher, et al.
  Published: (2023-02-01)
- Brain-inspired classical conditioning model
  by: Yuxuan Zhao, et al.
  Published: (2021-01-01)
- Machine learning-based clustering and classification of mouse behaviors via respiratory patterns
  by: Emma Janke, et al.
  Published: (2022-12-01)
- More reliable biomarkers and more accurate prediction for mental disorders using a label-noise filtering-based dimensional prediction method
  by: Ying Xing, et al.
  Published: (2024-03-01)
- How do we think machines think? An fMRI study of alleged competition with an artificial intelligence
  by: Thierry Chaminade, et al.
  Published: (2012-05-01)