DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques; however, it continues to face challenges such as mode collapse, training instability, and low-quality output in cases of datasets with high intra-class variation. Further...
| Main Authors: | Taesun Yeom, Chanhoe Gu, Minhyeok Lee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10458911/ |
Similar Items
- GammaGAN: Gamma-Scaled Class Embeddings for Conditional Video Generation
  by: Minjae Kang, et al.
  Published: (2023-09-01)
- Synthetic ECG Signal Generation Using Probabilistic Diffusion Models
  by: Edmond Adib, et al.
  Published: (2023-01-01)
- TextControlGAN: Text-to-Image Synthesis with Controllable Generative Adversarial Networks
  by: Hyeeun Ku, et al.
  Published: (2023-04-01)
- Okkhor-Diffusion: Class Guided Generation of Bangla Isolated Handwritten Characters Using Denoising Diffusion Probabilistic Model (DDPM)
  by: Md. Mubtasim Fuad, et al.
  Published: (2024-01-01)
- Universal Adversarial Training Using Auxiliary Conditional Generative Model-Based Adversarial Attack Generation
  by: Hiskias Dingeto, et al.
  Published: (2023-07-01)