Generative Adversarial Networks With Attention Mechanisms at Every Scale
Existing works in image synthesis have shown the effectiveness of applying attention mechanisms for generating natural-looking images. Despite their benefits, however, current works apply such mechanisms at only a single scale of the generative and discriminative networks. Intuitively, broader use of...
Main Authors: | Farkhod Makhmudkhujaev, In Kyu Park |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Image synthesis; generative adversarial networks; attention; multiscale |
Online Access: | https://ieeexplore.ieee.org/document/9650851/ |
_version_ | 1818745349895880704 |
---|---|
author | Farkhod Makhmudkhujaev; In Kyu Park |
author_facet | Farkhod Makhmudkhujaev; In Kyu Park |
author_sort | Farkhod Makhmudkhujaev |
collection | DOAJ |
description | Existing works in image synthesis have shown the effectiveness of applying attention mechanisms for generating natural-looking images. Despite their benefits, however, current works apply such mechanisms at only a single scale of the generative and discriminative networks. Intuitively, broader use of attention should lead to better performance; in practice, memory constraints make even moving a single attention mechanism to a higher scale of the network infeasible. Motivated by the importance of attention in image generation, we tackle this limitation by proposing a generative adversarial network-based framework that readily incorporates attention mechanisms at every scale of its networks. The straightforward structure of the attention mechanism allows it to be plugged in directly in a scale-wise manner and trained jointly with the adversarial networks. As a result, the networks are forced to focus on relevant regions of the feature maps learned at every scale, improving their image representation power. In addition, we exploit the multiscale attention features as a complementary feature set in discriminator training. We demonstrate qualitatively and quantitatively that introducing scale-wise attention mechanisms benefits the competing networks, improving performance over that of current works. |
first_indexed | 2024-12-18T02:58:48Z |
format | Article |
id | doaj.art-4b11224c2ed0445280b124077d7c5161 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-18T02:58:48Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-4b11224c2ed0445280b124077d7c5161; 2022-12-21T21:23:18Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2021-01-01; vol. 9, pp. 168404-168414; doi:10.1109/ACCESS.2021.3135637; IEEE document 9650851; Generative Adversarial Networks With Attention Mechanisms at Every Scale; Farkhod Makhmudkhujaev (https://orcid.org/0000-0003-2594-8327, Department of Information and Communication Engineering, Inha University, Incheon, South Korea); In Kyu Park (https://orcid.org/0000-0003-4774-7841, Department of Information and Communication Engineering, Inha University, Incheon, South Korea); [abstract identical to the description field above]; https://ieeexplore.ieee.org/document/9650851/; Image synthesis; generative adversarial networks; attention; multiscale |
spellingShingle | Farkhod Makhmudkhujaev; In Kyu Park; Generative Adversarial Networks With Attention Mechanisms at Every Scale; IEEE Access; Image synthesis; generative adversarial networks; attention; multiscale |
title | Generative Adversarial Networks With Attention Mechanisms at Every Scale |
title_full | Generative Adversarial Networks With Attention Mechanisms at Every Scale |
title_fullStr | Generative Adversarial Networks With Attention Mechanisms at Every Scale |
title_full_unstemmed | Generative Adversarial Networks With Attention Mechanisms at Every Scale |
title_short | Generative Adversarial Networks With Attention Mechanisms at Every Scale |
title_sort | generative adversarial networks with attention mechanisms at every scale |
topic | Image synthesis; generative adversarial networks; attention; multiscale |
url | https://ieeexplore.ieee.org/document/9650851/ |
work_keys_str_mv | AT farkhodmakhmudkhujaev generativeadversarialnetworkswithattentionmechanismsateveryscale AT inkyupark generativeadversarialnetworkswithattentionmechanismsateveryscale |
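
The abstract in this record describes two concrete design ideas: a lightweight attention module inserted at every scale of both the generator and the discriminator, and pooled multiscale attention features used as a complementary feature set in discriminator training. The sketch below illustrates that design in PyTorch. The module names (`SimpleAttention`, `AttnGenerator`, `AttnDiscriminator`), the 1×1-convolution sigmoid gate, and the pooling scheme are assumptions made for illustration; the record does not include the paper's actual architecture.

```python
# Illustrative sketch only: the gate design and pooling scheme are assumptions,
# not the architecture from the paper this record describes.
import torch
import torch.nn as nn

class SimpleAttention(nn.Module):
    """Lightweight spatial attention gate, cheap enough to place at every scale."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # 1x1 conv -> per-pixel score
            nn.Sigmoid(),                           # squash scores into (0, 1)
        )

    def forward(self, x):
        a = self.attn(x)           # (B, 1, H, W) attention map
        return x * a, a            # reweighted features + map for later reuse

class AttnGenerator(nn.Module):
    """Upsampling generator with an attention gate after every scale."""
    def __init__(self, z_dim=128, base=64):
        super().__init__()
        self.fc = nn.Linear(z_dim, base * 8 * 4 * 4)
        chans = [base * 8, base * 4, base * 2, base]       # 4 -> 8 -> 16 -> 32 -> 64
        self.blocks, self.attns = nn.ModuleList(), nn.ModuleList()
        for c_in, c_out in zip(chans, chans[1:] + [base]):
            self.blocks.append(nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            ))
            self.attns.append(SimpleAttention(c_out))
        self.to_rgb = nn.Conv2d(base, 3, 3, padding=1)

    def forward(self, z):
        x = self.fc(z).view(z.size(0), -1, 4, 4)
        for block, attn in zip(self.blocks, self.attns):
            x = block(x)
            x, _ = attn(x)         # attention applied at every scale
        return torch.tanh(self.to_rgb(x))

class AttnDiscriminator(nn.Module):
    """Downsampling critic that also pools the attention map from every scale
    into a complementary feature vector for the final real/fake decision."""
    def __init__(self, base=64):
        super().__init__()
        chans = [3, base, base * 2, base * 4, base * 8]    # 64 -> 32 -> 16 -> 8 -> 4
        self.blocks, self.attns = nn.ModuleList(), nn.ModuleList()
        for c_in, c_out in zip(chans, chans[1:]):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            self.attns.append(SimpleAttention(c_out))
        # final conv features + one pooled scalar per scale's attention map
        self.head = nn.Linear(base * 8 * 4 * 4 + len(self.blocks), 1)

    def forward(self, img):
        x, pooled = img, []
        for block, attn in zip(self.blocks, self.attns):
            x = block(x)
            x, a = attn(x)
            pooled.append(a.mean(dim=(1, 2, 3)))   # summarize each scale's map
        feats = torch.cat([x.flatten(1), torch.stack(pooled, dim=1)], dim=1)
        return self.head(feats)
```

Usage: `G, D = AttnGenerator(), AttnDiscriminator()`, then `D(G(torch.randn(2, 128)))` yields a `(2, 1)` score tensor. Keeping the gate to a single 1×1 convolution per scale is what makes placing it at every scale affordable, which is the memory concern the abstract raises about heavier attention mechanisms.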