Style-Content-Aware Adaptive Normalization Based Pose Guided for Person Image Synthesis

Bibliographic Details
Main Authors: Wei Wei, Xia Yang, Xiaodong Duan, Chen Guo
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10164073/
Description
Summary: Most pose-guided person image synthesis methods obtain an accurate target pose but still fail to achieve a reasonable style-texture mapping. In this paper, we propose a new two-stage network that decouples style and content, aiming to improve both the accuracy of pose transfer and the realism of the synthesized person's appearance. First, we propose an Aligned Multi-scale Content Transfer Network (AMSNet) that predicts the target edge map in advance for pose-content transfer, which not only preserves clearer texture content but also alleviates spatial misalignment by transferring pose information ahead of style. Second, we propose a new Style Texture Transfer Network (STNet) that gradually transfers the source style features to the target pose for a reasonable distribution of styles. To obtain appearance texture highly similar to the source style, we use a style-content-aware adaptive normalization method: the source style features are mapped into the same latent space as the aligned content images (target pose and edge map), and the consistency between style texture and content is enhanced through adaptive adjustment of the source style and target pose. Experimental results show that the proposed model synthesizes target images consistent with the source style, achieving superior results both quantitatively and qualitatively.
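To illustrate the style-content-aware adaptive normalization described in the summary, below is a minimal PyTorch-style sketch. It assumes a formulation that combines AdaIN-like global modulation from the source style with SPADE-like spatially varying modulation from the aligned content (target pose and edge map); the class name, arguments, and layer shapes are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn

class StyleContentAdaNorm(nn.Module):
    """Hypothetical sketch: normalize generator features, then modulate them
    with (a) a style code from the source image and (b) an aligned content
    map (target pose + edge), so appearance statistics follow the target layout."""

    def __init__(self, num_features, style_dim, content_channels, hidden=128):
        super().__init__()
        # Parameter-free normalization of the incoming activations.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Global per-channel scale/shift predicted from the source style code.
        self.style_mlp = nn.Linear(style_dim, num_features * 2)
        # Spatial scale/shift predicted from the aligned content map.
        self.content_conv = nn.Sequential(
            nn.Conv2d(content_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True))
        self.gamma_conv = nn.Conv2d(hidden, num_features, 3, padding=1)
        self.beta_conv = nn.Conv2d(hidden, num_features, 3, padding=1)

    def forward(self, x, style_code, content_map):
        # x: (B, C, H, W) features being generated
        # style_code: (B, style_dim) vector encoding the source appearance
        # content_map: (B, content_channels, H, W) aligned target pose + edge map
        h = self.norm(x)
        # Style modulation (per-channel, spatially uniform).
        gamma_s, beta_s = self.style_mlp(style_code).chunk(2, dim=1)
        gamma_s = gamma_s.unsqueeze(-1).unsqueeze(-1)
        beta_s = beta_s.unsqueeze(-1).unsqueeze(-1)
        # Content modulation (per-pixel, spatially varying).
        c = self.content_conv(content_map)
        gamma_c, beta_c = self.gamma_conv(c), self.beta_conv(c)
        # Combine both modulations so source statistics adapt to the target layout.
        return h * (1 + gamma_s + gamma_c) + beta_s + beta_c

# Example usage (shapes assumed for illustration only):
# layer = StyleContentAdaNorm(num_features=256, style_dim=128, content_channels=21)
# out = layer(torch.randn(1, 256, 64, 64), torch.randn(1, 128), torch.randn(1, 21, 64, 64))

In this sketch the spatially uniform style terms carry the source appearance statistics, while the spatially varying content terms keep the modulation aligned with the target pose and edge map, mirroring the style-content consistency adjustment described in the abstract.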
ISSN: 2169-3536