ManiGAN: Text-guided image manipulation
The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text. To achieve this, we propose a novel generative adversarial network (ManiGA...
Main Authors: Li, B; Qi, X; Lukasiewicz, T; Torr, PHS
Format: Conference item
Language: English
Published: IEEE, 2020
Similar Items
- Lightweight generative adversarial networks for text-guided image manipulation
  by: Li, B, et al.
  Published: (2020)
- Image-to-image translation with text guidance
  by: Li, B, et al.
  Published: (2023)
- Memory-driven text-to-image generation
  by: Li, B, et al.
  Published: (2022)
- TolerantGAN: Text-Guided Image Manipulation Tolerant to Real-World Image
  by: Yuto Watanabe, et al.
  Published: (2024-01-01)
- Controllable text-to-image generation
  by: Li, B, et al.
  Published: (2019)