Diverse Image Generation via Self-Conditioned GANs
© 2020 IEEE. We introduce a simple but effective unsupervised method for generating diverse images. We train a class-conditional GAN model without using manually annotated class labels. Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space. Our clustering step automatically discovers diverse modes, and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that our method outperforms several competing methods when addressing mode collapse. Our method also performs well on large-scale datasets such as ImageNet and Places365, improving both diversity and standard metrics (e.g., Fréchet Inception Distance), compared to previous methods.
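The abstract describes conditioning the GAN on labels derived automatically by clustering in the discriminator's feature space. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a PyTorch-style generator/discriminator pair, a hypothetical `discriminator.features()` hook for intermediate activations, scikit-learn's k-means for the clustering step, a hinge adversarial loss, and an arbitrary cluster count.

```python
# Illustrative sketch only (not the authors' code): pseudo-labels come from
# k-means clustering of discriminator features on real images, and both the
# generator and discriminator are then conditioned on those labels.
import torch
from sklearn.cluster import KMeans

NUM_CLUSTERS = 100  # assumed number of discovered modes; a tunable choice


def recompute_pseudo_labels(discriminator, real_loader, device="cuda"):
    """Cluster real images in the discriminator's feature space."""
    feats = []
    with torch.no_grad():
        for images, _ in real_loader:
            # `discriminator.features` is a hypothetical hook returning an
            # intermediate activation used as the clustering feature.
            feats.append(discriminator.features(images.to(device)).cpu())
    feats = torch.cat(feats).flatten(1).numpy()
    kmeans = KMeans(n_clusters=NUM_CLUSTERS, n_init=10).fit(feats)
    return torch.as_tensor(kmeans.labels_)  # one pseudo-label per real image


def train_step(generator, discriminator, images, labels, opt_g, opt_d, z_dim=128):
    """One class-conditional GAN update using cluster labels as the condition."""
    n = images.size(0)
    z = torch.randn(n, z_dim, device=images.device)
    fake = generator(z, labels)

    # Discriminator update (hinge loss shown here as one common choice).
    d_loss = (torch.relu(1.0 - discriminator(images, labels)).mean()
              + torch.relu(1.0 + discriminator(fake.detach(), labels)).mean())
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: fool the conditional discriminator under the same labels.
    g_loss = -discriminator(fake, labels).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In a full training loop one would presumably re-run `recompute_pseudo_labels` periodically so the cluster assignments track the discriminator's evolving features, and sample `labels` for `train_step` from the current assignments.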
Main Authors: | Liu, Steven; Wang, Tongzhou; Bau, David; Zhu, Jun-Yan; Torralba, Antonio |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers (IEEE), 2021 |
Online Access: | https://hdl.handle.net/1721.1/137599 |
author | Liu, Steven Wang, Tongzhou Bau, David Zhu, Jun-Yan Torralba, Antonio |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
description | © 2020 IEEE. We introduce a simple but effective unsupervised method for generating diverse images. We train a class-conditional GAN model without using manually annotated class labels. Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space. Our clustering step automatically discovers diverse modes, and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that our method outperforms several competing methods when addressing mode collapse. Our method also performs well on large-scale datasets such as ImageNet and Places365, improving both diversity and standard metrics (e.g., Fréchet Inception Distance), compared to previous methods. |
format | Article |
id | mit-1721.1/137599 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | Institute of Electrical and Electronics Engineers (IEEE) |
record_format | dspace |
spelling | mit-1721.1/137599 (2023-04-10T14:44:33Z). Diverse Image Generation via Self-Conditioned GANs. Liu, Steven; Wang, Tongzhou; Bau, David; Zhu, Jun-Yan; Torralba, Antonio. Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory. Dates: 2021-11-05T19:32:43Z; 2020; 2021-01-28T14:51:26Z. Article (http://purl.org/eprint/type/ConferencePaper). https://hdl.handle.net/1721.1/137599. Citation: Liu, Steven, Wang, Tongzhou, Bau, David, Zhu, Jun-Yan and Torralba, Antonio. 2020. "Diverse Image Generation via Self-Conditioned GANs." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Language: en. DOI: 10.1109/CVPR42600.2020.01429. License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/). Format: application/pdf. Publisher: Institute of Electrical and Electronics Engineers (IEEE). Source: arXiv. |
title | Diverse Image Generation via Self-Conditioned GANs |
url | https://hdl.handle.net/1721.1/137599 |