Pose-Aware Disentangled Multiscale Transformer for Pose Guided Person Image Generation


Bibliographic Details
Main Authors: Kei Shibasaki, Masaaki Ikehara
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10373854/
Description
Summary: Pose Guided Person Image Generation (PGPIG) is the task of generating an image of a person in a target pose, given an image in a source pose together with the source and target pose information. Many existing PGPIG techniques require extra pose-related data or auxiliary tasks, which limits their applicability. In addition, most methods use CNNs as the feature extractor; because CNNs extract features only from neighboring pixels, they lack long-range dependencies and struggle to preserve global image consistency. This paper introduces a PGPIG network that addresses these challenges by incorporating modules based on Axial Transformers, which have wide receptive fields. The proposed approach disentangles the PGPIG task into two subtasks: "rough pose transformation" and "detailed texture generation." In "rough pose transformation," lower-resolution feature maps are processed by Axial Transformer-based blocks. These blocks employ an Encoder-Decoder structure, which allows the network to exploit the pose information effectively and improves the stability and performance of training. The "detailed texture generation" subtask employs a CNN with Adaptive Instance Normalization. Experimental results show that the proposed method is competitive with existing methods: it achieves the lowest LPIPS on the DeepFashion dataset and the lowest FID on the Market-1501 dataset. Remarkably, despite these results, the proposed network has significantly fewer parameters than existing methods.
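The two building blocks named in the summary can be illustrated in isolation. Below is a minimal, single-head numpy sketch (not the authors' implementation): `axial_attention` applies self-attention along the width axis and then the height axis, which is what gives Axial Transformers their wide receptive field at reduced cost, and `adain` shows Adaptive Instance Normalization, which renormalizes content features to externally supplied per-channel statistics. Function names, the absence of learned projections, and the use of raw features as queries/keys/values are simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over sequences in the last two axes: (..., L, d)
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def axial_attention(x):
    # x: feature map of shape (H, W, d).
    # Full 2D self-attention costs O((H*W)^2); attending along each axis
    # separately costs O(H*W*(H + W)) while still mixing information globally.
    x = x + attention(x, x, x)        # attend along rows (width axis)
    xt = np.swapaxes(x, 0, 1)         # (W, H, d): expose the height axis
    xt = xt + attention(xt, xt, xt)   # attend along columns (height axis)
    return np.swapaxes(xt, 0, 1)      # back to (H, W, d)

def adain(content, style_mean, style_std, eps=1e-5):
    # Adaptive Instance Normalization: whiten content features per channel,
    # then rescale/shift to the target (style) statistics.
    mu = content.mean(axis=(0, 1), keepdims=True)
    sigma = content.std(axis=(0, 1), keepdims=True)
    return style_std * (content - mu) / (sigma + eps) + style_mean
```

Both operations preserve the feature-map shape, so they can be stacked freely inside an encoder-decoder; in the paper's decomposition, axial blocks handle the coarse pose transformation while AdaIN injects texture statistics in the refinement stage.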
ISSN:2169-3536