Texture networks: Feed-forward synthesis of textures and stylized images
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
Main Authors: | Ulyanov, D; Lebedev, V; Vedaldi, A; Lempitsky, V
---|---
Format: | Conference item
Published: | Association for Computing Machinery, 2016
Field | Value
---|---
author | Ulyanov, D Lebedev, V Vedaldi, A Lempitsky, V |
collection | OXFORD |
description | Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions. |
format | Conference item |
id | oxford-uuid:76d0f6d6-00f4-4c9b-a6a8-b12c2e68b1a7 |
institution | University of Oxford |
publishDate | 2016 |
publisher | Association for Computing Machinery |
record_format | dspace |
title | Texture networks: Feed-forward synthesis of textures and stylized images |
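The "complex and expressive loss functions" the abstract refers to are the Gram-matrix texture losses of Gatys et al.: the feed-forward generator is trained so that the Gram matrices of its deep feature maps match those of the example texture. A minimal NumPy sketch of that loss (the feature arrays here are placeholders standing in for a pretrained network's activations, not part of the published implementation):

```python
import numpy as np

def gram_matrix(features):
    """Channel co-activation statistics of one layer's feature maps.
    features: array of shape (C, H, W)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Normalized inner products between channel responses.
    return f @ f.T / (c * h * w)

def texture_loss(gen_features, tex_features):
    """Squared Frobenius distance between Gram matrices. Summed over
    several layers of a fixed descriptor network, this is the kind of
    objective the feed-forward generator is trained to minimize."""
    g_gen = gram_matrix(gen_features)
    g_tex = gram_matrix(tex_features)
    return float(np.sum((g_gen - g_tex) ** 2))

# Toy stand-ins for feature maps of a generated and an example texture.
rng = np.random.default_rng(0)
gen = rng.standard_normal((8, 16, 16))
tex = rng.standard_normal((8, 16, 16))
print(texture_loss(gen, gen))  # identical statistics give zero loss
print(texture_loss(gen, tex))  # mismatched statistics give a positive loss
```

Because the loss depends only on these spatial-average statistics, not on pixel positions, the trained generator can emit textures of arbitrary size, as the abstract notes.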