Cycle-consistent inverse GAN for text-to-image synthesis
This paper investigates the open research task of text-to-image synthesis: automatically generating or manipulating images from text descriptions. Prevailing methods mainly take the textual description as the conditional input for GAN generation, and need to train different models for the tex...
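The abstract notes that prevailing methods feed the textual description to the GAN generator as a conditional input. Purely as an illustrative sketch of that general conditioning pattern (not the architecture proposed in this paper), the following PyTorch snippet concatenates a sentence embedding with a noise vector before decoding it into an image; the class name `TextConditionalGenerator`, all layer sizes, and the embedding dimensions are assumptions.

```python
# Minimal sketch of text-conditional GAN generation (illustrative only):
# a sentence embedding is concatenated with the noise vector and decoded
# into an image by transposed convolutions.
import torch
import torch.nn as nn

class TextConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, text_dim=256, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # project the concatenated [noise; text] vector to a 4x4 feature map
            nn.ConvTranspose2d(noise_dim + text_dim, 512, 4, 1, 0),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1),           # 8x8
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),           # 16x16
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, img_channels, 4, 2, 1),  # 32x32
            nn.Tanh(),
        )

    def forward(self, noise, text_embedding):
        # condition generation on the text by concatenating it with the noise
        z = torch.cat([noise, text_embedding], dim=1)
        return self.net(z.unsqueeze(-1).unsqueeze(-1))

# usage: generate one 32x32 image conditioned on a dummy sentence embedding
g = TextConditionalGenerator()
fake = g(torch.randn(1, 100), torch.randn(1, 256))
print(fake.shape)  # torch.Size([1, 3, 32, 32])
```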
Main Authors: Wang, Hao; Lin, Guosheng; Hoi, Steven C. H.; Miao, Chunyan
Other Authors: School of Computer Science and Engineering
Format: Conference Paper
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/156034
Similar Items
- Paired cross-modal data augmentation for fine-grained image-to-text retrieval
  by: Wang, Hao, et al.
  Published: (2023)
- A Robust Consistency Model of Crowd Workers in Text Labeling Tasks
  by: Fattoh Alqershi, et al.
  Published: (2020-01-01)
- TextControlGAN: Text-to-Image Synthesis with Controllable Generative Adversarial Networks
  by: Hyeeun Ku, et al.
  Published: (2023-04-01)
- Decomposing generation networks with structure prediction for recipe generation
  by: Wang, Hao, et al.
  Published: (2022)
- Learning structural representations for recipe generation and food retrieval
  by: Wang, Hao, et al.
  Published: (2022)