Deep image prior

Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is attributed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. To do so, we show that a randomly initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash/no-flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity. (Code and supplementary material are available at https://dmitryulyanov.github.io/deep_image_prior.)
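The abstract is concrete enough to sketch: the restored image is parameterized as the output of a randomly initialized convolutional generator fed a fixed noise code z, and only the weights are fitted to the corrupted observation, with early stopping supplying the regularization. Below is a minimal, hypothetical PyTorch sketch of that idea for denoising; the small convolutional stack, channel counts, learning rate, step count, and the `noisy` tensor are illustrative assumptions, not the paper's hourglass architecture or hyperparameters.

```python
# A minimal sketch of the deep-image-prior idea for denoising (PyTorch).
# `noisy` stands in for any noise-corrupted RGB image tensor of shape
# (1, 3, H, W) with values in [0, 1]; all names here are illustrative.
import torch
import torch.nn as nn

def make_net(channels=64):
    # Randomly initialized generator: its weights are never pretrained;
    # the convolutional structure itself acts as the prior.
    layers, in_ch = [], 32
    for _ in range(5):
        layers += [nn.Conv2d(in_ch, channels, 3, padding=1),
                   nn.BatchNorm2d(channels),
                   nn.LeakyReLU(0.2)]
        in_ch = channels
    layers += [nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

def denoise(noisy, steps=2400, lr=0.01):
    net = make_net()
    # Fixed random code z: the only "input"; all fitting happens in the weights.
    z = torch.randn(1, 32, noisy.shape[2], noisy.shape[3]) * 0.1
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = net(z)
        # Data term only -- no explicit regularizer. Early stopping plays
        # that role: the network fits the clean image before the noise.
        loss = ((out - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()
```

In the paper, the same fitting loop with a task-specific data term (for example, a masked loss for inpainting or a downsampling operator inside the loss for super-resolution) covers the other inverse problems.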

Bibliographic data
Main authors: Ulyanov, D; Vedaldi, A; Lempitsky, V
Format: Journal article
Language: English
Published: Springer, 2020
Institution: University of Oxford
Collection: OXFORD