Generative Adversarial Networks With Attention Mechanisms at Every Scale

Bibliographic Details
Main Authors: Farkhod Makhmudkhujaev, In Kyu Park
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9650851/
Description
Summary: Existing work in image synthesis has shown the effectiveness of attention mechanisms for generating natural-looking images. Despite their informativeness, current methods apply such mechanisms at only a single scale of the generative and discriminative networks. Intuitively, using attention at more scales should lead to better performance; in practice, however, memory constraints make even moving a single attention mechanism to a higher scale of the network infeasible. Motivated by the importance of attention in image generation, we tackle this limitation by proposing a generative adversarial network (GAN)-based framework that readily incorporates attention mechanisms at every scale of its networks. A straightforward attention structure can be plugged in directly at each scale and trained jointly with the adversarial networks. As a result, the networks are forced to focus on relevant regions of the feature maps learned at every scale, improving their image representation power. In addition, we exploit multiscale attention features as a complementary feature set in discriminator training. We demonstrate qualitatively and quantitatively that introducing scale-wise attention mechanisms benefits competitive networks, improving performance compared with current works.
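The abstract's core idea, an attention block plugged in at every scale and trained jointly with the adversarial networks, can be illustrated with a short sketch. The PyTorch code below is a hypothetical, simplified illustration (a SAGAN-style self-attention block inserted after each upsampling stage of a toy generator), not the authors' implementation; all module names, channel widths, and the 32x32 output resolution are assumptions made for the example.

```python
# Hypothetical sketch: scale-wise self-attention in a toy GAN generator.
# Names and architecture are illustrative, not the paper's exact design.
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


class Generator(nn.Module):
    """Toy generator with an attention block at every scale (8x8 to 32x32)."""

    def __init__(self, z_dim: int = 128, base: int = 256):
        super().__init__()
        self.project = nn.Linear(z_dim, base * 4 * 4)
        blocks, ch = [], base
        for _ in range(3):  # 4x4 -> 8x8 -> 16x16 -> 32x32
            blocks += [
                nn.Upsample(scale_factor=2),
                nn.Conv2d(ch, ch // 2, 3, padding=1),
                nn.BatchNorm2d(ch // 2),
                nn.ReLU(inplace=True),
                SelfAttention(ch // 2),  # attention plugged in at this scale
            ]
            ch //= 2
        self.blocks = nn.Sequential(*blocks)
        self.to_rgb = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.project(z).view(z.size(0), -1, 4, 4)
        return torch.tanh(self.to_rgb(self.blocks(x)))


if __name__ == "__main__":
    g = Generator()
    imgs = g(torch.randn(2, 128))
    print(imgs.shape)  # torch.Size([2, 3, 32, 32])
```

Under the same assumptions, an analogous `SelfAttention` block could follow each downsampling stage of the discriminator, with its per-scale outputs pooled to form the complementary multiscale feature set the abstract describes.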
ISSN: 2169-3536