Texture networks: Feed-forward synthesis of textures and stylized images
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a le...
Main authors: Ulyanov, D; Lebedev, V; Vedaldi, A; Lempitsky, V
Format: Conference item
Published: Association for Computing Machinery, 2016
Similar items
- Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis
  Authors: Ulyanov, D, et al.
  Published: 2017
- Video Texture Synthesis Based on Flow-Like Stylization Painting
  Authors: Qian Wenhua, et al.
  Published: 2014-01-01
- Deep image prior
  Authors: Ulyanov, D, et al.
  Published: 2018
- Deep image prior
  Authors: Ulyanov, D, et al.
  Published: 2020
- It takes (only) two: adversarial generator-encoder networks
  Authors: Ulyanov, D, et al.
  Published: 2018