Deep image synthesis from intuitive user input: A review and perspectives
Abstract In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While classically, works that allow such automatic image content generation have followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works for image synthesis given intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross fertilization between major image generation paradigms, and evaluation and comparison of generation methods.
Main Authors: | Yuan Xue, Yuan-Chen Guo, Han Zhang, Tao Xu, Song-Hai Zhang, Xiaolei Huang |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2021-10-01 |
Series: | Computational Visual Media |
Subjects: | image synthesis; intuitive user input; deep generative models; synthesized image quality evaluation |
Online Access: | https://doi.org/10.1007/s41095-021-0234-8 |
---|---|
author | Yuan Xue; Yuan-Chen Guo; Han Zhang; Tao Xu; Song-Hai Zhang; Xiaolei Huang
author_sort | Yuan Xue |
collection | DOAJ |
description | Abstract In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While classically, works that allow such automatic image content generation have followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works for image synthesis given intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross fertilization between major image generation paradigms, and evaluation and comparison of generation methods. |
first_indexed | 2024-12-17T22:56:38Z |
format | Article |
id | doaj.art-dddc96422522427e95e6b6d650c42edc |
institution | Directory Open Access Journal |
issn | 2096-0433 2096-0662 |
language | English |
last_indexed | 2024-12-17T22:56:38Z |
publishDate | 2021-10-01 |
publisher | SpringerOpen |
record_format | Article |
series | Computational Visual Media |
affiliations | Yuan Xue: College of Information Sciences and Technology, the Pennsylvania State University; Yuan-Chen Guo: Department of Computer Science and Technology, Tsinghua University; Han Zhang: Google Brain; Tao Xu: Facebook; Song-Hai Zhang: Department of Computer Science and Technology, Tsinghua University; Xiaolei Huang: College of Information Sciences and Technology, the Pennsylvania State University
title | Deep image synthesis from intuitive user input: A review and perspectives |
topic | image synthesis; intuitive user input; deep generative models; synthesized image quality evaluation
url | https://doi.org/10.1007/s41095-021-0234-8 |