What Makes Fake Images Detectable? Understanding Properties that Generalize

The quality of image generation and manipulation is reaching impressive levels, making it increasingly difficult for a human to distinguish between what is real and what is fake. However, deep networks can still pick up on the subtle artifacts in these doctored images. We seek to understand what properties of fake images make them detectable and identify what generalizes across different model architectures, datasets, and variations in training. We use a patch-based classifier with limited receptive fields to visualize which regions of fake images are more easily detectable. We further show a technique to exaggerate these detectable properties and demonstrate that, even when the image generator is adversarially finetuned against a fake image classifier, it is still imperfect and leaves detectable artifacts in certain image patches. Code is available at https://github.com/chail/patch-forensics.
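
The abstract above describes a patch-based classifier with a limited receptive field; the authors' released code is at https://github.com/chail/patch-forensics. Purely as an illustrative sketch (a generic fully convolutional design, not the authors' architecture), such a classifier can be written so that each output logit depends only on a small image patch, and the logit map doubles as a heatmap of which regions look fake:

```python
# Minimal sketch of a patch-based real/fake classifier (hypothetical
# illustration, NOT the implementation from chail/patch-forensics).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Three stride-2 convolutions give each output location a small,
        # fixed receptive field, far smaller than the full image.
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels * 2, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels * 2, channels * 4, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # 1x1 convolution: one real/fake logit per spatial location (patch).
        self.head = nn.Conv2d(channels * 4, 1, kernel_size=1)

    def forward(self, x):
        logits = self.head(self.features(x))   # (B, 1, H/8, W/8) per-patch logits
        image_logit = logits.mean(dim=(2, 3))  # average patch logits for an image-level score
        return logits, image_logit

if __name__ == "__main__":
    model = PatchClassifier()
    images = torch.randn(2, 3, 256, 256)           # dummy batch of RGB images
    patch_logits, image_logits = model(images)
    print(patch_logits.shape, image_logits.shape)  # [2, 1, 32, 32] and [2, 1]
```

Visualizing `patch_logits` over a suspect image is what makes the per-region analysis possible: regions whose logits lean strongly toward "fake" are the ones the classifier finds most detectable.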

Bibliographic Details

Main Authors: Chai, Lucy; Bau, David; Isola, Phillip John
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article (Conference Paper)
Language: English
Published: Springer International Publishing, 2021
Series: Lecture Notes in Computer Science, 12371 (ECCV 2020: 16th European Conference on Computer Vision, August 2020)
ISSN: 0302-9743
DOI: http://dx.doi.org/10.1007/978-3-030-58574-7_7
Online Access: https://hdl.handle.net/1721.1/129437.2
Citation: Chai, Lucy et al. "What Makes Fake Images Detectable? Understanding Properties that Generalize." ECCV 2020: 16th European Conference on Computer Vision, Lecture Notes in Computer Science, 12371. © 2020 The Author(s)
Rights: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0/)