POCE: pose-controllable expression editing

Bibliographic Details
Main Authors: Wu, Rongliang; Yu, Yingchen; Zhan, Fangneng; Zhang, Jiahui; Liao, Shengcai; Lu, Shijian
Other Authors: School of Computer Science and Engineering, Nanyang Technological University
Format: Journal Article
Language: English
Published: 2023
Subjects: Computer and Information Science; Facial Expression Editing; Image Synthesis
Online Access: https://hdl.handle.net/10356/173502
Full description
Facial expression editing has attracted increasing attention with the advance of deep neural networks in recent years. However, most existing methods suffer from compromised editing fidelity and limited usability: they either ignore pose variations, which yields unrealistic edits, or require paired training data for pose control, which is difficult to collect. This paper presents POCE, an innovative pose-controllable expression editing network that generates realistic facial expressions and head poses simultaneously from unpaired training images alone. POCE makes pose-controllable expression editing more accessible and realistic by mapping face images into UV space, where facial expressions and head poses can be disentangled and edited separately. POCE has two novel designs. The first is self-supervised UV completion, which completes UV maps sampled under different head poses, as these often suffer from self-occlusion and missing facial texture. The second is weakly-supervised UV editing, which generates new facial expressions with minimal modification of facial identity; the synthesized expression can be controlled either by an expression label or by transplanting it directly from a reference UV map via feature transfer. Extensive experiments show that POCE learns effectively from unpaired face images, and that the learned model generates realistic, high-fidelity facial expressions under a variety of new poses.
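The description above outlines a pipeline: unwrap a posed face into UV space, complete the self-occluded texture, edit the expression on the completed UV map, and re-render the result under a target head pose. The sketch below illustrates only that data flow; all module names, tensor shapes, and the unwrap/render stubs are hypothetical placeholders and are not taken from the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract above.
# Names, shapes, and the unwrap/render stubs are illustrative only.
import torch
import torch.nn as nn


def unwrap_to_uv(face_img: torch.Tensor) -> torch.Tensor:
    """Stub: map a posed face image into UV texture space (e.g. via a fitted
    3D face model). Self-occluded regions would appear as holes in the UV map."""
    return face_img  # placeholder: same resolution is assumed for the sketch


def render_from_uv(uv_map: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
    """Stub: re-render the (edited) UV texture under a target head pose."""
    return uv_map  # placeholder


class UVCompletionNet(nn.Module):
    """Stands in for self-supervised completion of partially observed UV maps."""
    def __init__(self, ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1),
        )

    def forward(self, uv_partial: torch.Tensor) -> torch.Tensor:
        return self.net(uv_partial)


class UVEditingNet(nn.Module):
    """Stands in for weakly-supervised expression editing in UV space,
    conditioned here on a one-hot expression label."""
    def __init__(self, ch: int = 3, num_expr: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + num_expr, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1),
        )

    def forward(self, uv_map: torch.Tensor, expr_label: torch.Tensor) -> torch.Tensor:
        # Broadcast the expression label over the spatial grid and concatenate.
        b, _, h, w = uv_map.shape
        cond = expr_label.view(b, -1, 1, 1).expand(b, expr_label.shape[1], h, w)
        return self.net(torch.cat([uv_map, cond], dim=1))


if __name__ == "__main__":
    face = torch.rand(1, 3, 128, 128)       # input face image
    expr = torch.zeros(1, 8)                # target expression label (one-hot)
    expr[0, 2] = 1.0                        # index choice is arbitrary here
    pose = torch.tensor([[0.3, 0.0, 0.0]])  # target head pose (yaw, pitch, roll)

    uv = unwrap_to_uv(face)         # pose is factored out by working in UV space
    uv = UVCompletionNet()(uv)      # fill texture missing due to self-occlusion
    uv = UVEditingNet()(uv, expr)   # edit expression; identity should be preserved
    out = render_from_uv(uv, pose)  # re-render under the new head pose
    print(out.shape)                # torch.Size([1, 3, 128, 128])
```

In this reading, head pose is handled by the unwrap/render geometry while the expression is edited on the pose-free texture, which is the disentanglement the description refers to.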
Citation: Wu, R., Yu, Y., Zhan, F., Zhang, J., Liao, S. & Lu, S. (2023). POCE: pose-controllable expression editing. IEEE Transactions on Image Processing, 32, 6210-6222. https://dx.doi.org/10.1109/TIP.2023.3329358
ISSN: 1057-7149
DOI: 10.1109/TIP.2023.3329358
PMID: 37943638
Scopus ID: 2-s2.0-85177086190
Funding: This work was supported by the Ministry of Education (MOE), Singapore, under the Tier-1 Project RG94/20 and the Tier-2 Project MOE-T2EP20220-0003.
Rights: © 2023 IEEE. All rights reserved.