Texture networks: Feed-forward synthesis of textures and stylized images

Bibliographic Details
Main Authors: Ulyanov, D., Lebedev, V., Vedaldi, A., Lempitsky, V.
Format: Conference item
Published: Association for Computing Machinery, 2016
Description
Abstract: Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
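
The "complex and expressive loss functions" the abstract refers to are perceptual losses built on a pretrained descriptor network: following Gatys et al., texture statistics are captured as Gram matrices of VGG feature maps, and a feed-forward generator is trained so that its outputs match those statistics. The following is a minimal sketch of that training setup, assuming PyTorch and torchvision's pretrained VGG-19; the chosen layer indices, the tiny generator, and the hyperparameters are illustrative placeholders, not the architecture described in the paper.

import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

def gram_matrix(feats):
    # feats: (B, C, H, W) -> per-image (C, C) Gram matrices, normalized.
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class TextureLoss(nn.Module):
    # Sums squared Gram-matrix mismatches at a few VGG-19 layers.
    # Layer indices are illustrative, not the paper's exact choice; inputs
    # should really be ImageNet-normalized, which is omitted for brevity.
    def __init__(self, example, layers=(1, 6, 11, 20)):
        super().__init__()
        self.vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layers = set(layers)
        self.targets = [g.detach() for g in self._grams(example)]

    def _grams(self, x):
        grams = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                grams.append(gram_matrix(x))
        return grams

    def forward(self, generated):
        return sum(((g - t) ** 2).sum()
                   for g, t in zip(self._grams(generated), self.targets))

# Train a tiny feed-forward generator (hypothetical, far smaller than the
# paper's multi-scale architecture) to map noise images to texture samples.
example = torch.rand(1, 3, 256, 256)          # stand-in for the texture example
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
loss_fn = TextureLoss(example)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(200):
    noise = torch.rand(4, 3, 256, 256)        # fully convolutional: any size works
    loss = loss_fn(generator(noise))
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the generator is fully convolutional, a single forward pass after training yields a new texture sample at whatever resolution the input noise has, which is where the reported speedup over per-image optimization comes from.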