Degradation learning and Skip-Transformer for blind face restoration
Blind restoration of low-quality faces in the real world has advanced rapidly in recent years. The rich and diverse priors encapsulated by pre-trained face GANs have demonstrated their effectiveness in reconstructing high-quality faces from low-quality observations in the real world. However, the mode...
Main Authors: | Ahmed Cheikh Sidiya, Xuan Xu, Ning Xu, Xin Li |
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2023-05-01 |
Series: | Frontiers in Signal Processing |
Subjects: | blind face restoration; degradation learning (DL); Skip-Transformer (ST); hybrid architecture design; face in the wild |
Online Access: | https://www.frontiersin.org/articles/10.3389/frsip.2023.1106465/full |
_version_ | 1797835358772133888 |
author | Ahmed Cheikh Sidiya; Xuan Xu; Ning Xu; Xin Li |
author_facet | Ahmed Cheikh Sidiya; Xuan Xu; Ning Xu; Xin Li |
author_sort | Ahmed Cheikh Sidiya |
collection | DOAJ |
description | Blind restoration of low-quality faces in the real world has advanced rapidly in recent years. The rich and diverse priors encapsulated by pre-trained face GANs have demonstrated their effectiveness in reconstructing high-quality faces from low-quality observations in the real world. However, the modeling of degradation in real-world face images remains poorly understood, limiting the generalization of existing methods. Inspired by the recent success of pre-trained models and transformers, we propose to solve the blind restoration problem by jointly exploiting their power for degradation learning and prior learning, respectively. On the one hand, we train a two-generator architecture for degradation learning to transfer the style of low-quality real-world faces to the high-resolution output of a pre-trained StyleGAN. On the other hand, we present a hybrid architecture, called Skip-Transformer (ST), which combines transformer encoder modules with a pre-trained StyleGAN-based decoder through skip layers. This hybrid design is innovative in that it represents the first attempt to jointly exploit the global attention mechanism of the transformer and pre-trained StyleGAN-based generative facial priors. We have compared our DL-ST model with three of the latest benchmarks for blind image restoration (DFDNet, PSFRGAN, and GFP-GAN). Our experimental results show that our method outperforms all competing methods, both subjectively and objectively (as measured by the Fréchet Inception Distance and NIQE metrics). |
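The Skip-Transformer described in the abstract routes transformer-encoder features into a StyleGAN-based decoder through skip layers. The following NumPy sketch illustrates only that general idea (a self-attention stage whose features are fused into a decoder stage via a skip connection); all names, shapes, and the additive fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over tokens x of shape (n, d)."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def decoder_block_with_skip(decoder_feat, encoder_feat):
    """Fuse a decoder feature with a same-scale encoder (skip) feature.
    A plain sum stands in for the paper's learned skip layers."""
    return decoder_feat + encoder_feat

# Toy pipeline: 16 tokens of dimension 8 from a hypothetical low-quality face.
tokens = np.random.default_rng(1).normal(size=(16, 8))
enc = self_attention(tokens)      # transformer-encoder stage
dec = np.zeros_like(enc)          # placeholder StyleGAN-decoder feature
out = decoder_block_with_skip(dec, enc)
print(out.shape)  # -> (16, 8)
```

In the paper's actual design the decoder is a pre-trained StyleGAN generator and the skip features are injected at matching resolutions; this sketch only shows the data flow of an encoder feature bypassing the bottleneck.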
first_indexed | 2024-04-09T14:51:45Z |
format | Article |
id | doaj.art-2106d2a28ac64d3cbdcdb21efa57ef8b |
institution | Directory Open Access Journal |
issn | 2673-8198 |
language | English |
last_indexed | 2024-04-09T14:51:45Z |
publishDate | 2023-05-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Signal Processing |
spelling | doaj.art-2106d2a28ac64d3cbdcdb21efa57ef8b | 2023-05-02T09:22:01Z | eng | Frontiers Media S.A. | Frontiers in Signal Processing | 2673-8198 | 2023-05-01 | 10.3389/frsip.2023.1106465 | Degradation learning and Skip-Transformer for blind face restoration | Ahmed Cheikh Sidiya (West Virginia University, Lane Department of Computer Science and Electrical Engineering, Morgantown, United States); Xuan Xu (Kwai Inc, Palo Alto, CA, United States); Ning Xu (Kwai Inc, Palo Alto, CA, United States); Xin Li (West Virginia University, Lane Department of Computer Science and Electrical Engineering, Morgantown, United States) | https://www.frontiersin.org/articles/10.3389/frsip.2023.1106465/full | blind face restoration; degradation learning (DL); Skip-Transformer (ST); hybrid architecture design; face in the wild |
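The record cites the Fréchet Inception Distance (FID) as one of the objective metrics. For two Gaussians fitted to feature statistics, FID is the closed-form Fréchet distance |μ₁−μ₂|² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy sketch of that formula (the feature extractor and sample statistics here are toy stand-ins, not the paper's evaluation pipeline):

```python
import numpy as np

def fid_gaussian(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2)."""
    diff = mu1 - mu2
    # Tr((cov1 @ cov2)^(1/2)): cov1 @ cov2 is similar to a symmetric PSD
    # matrix, so its eigenvalues are real and non-negative; the trace of
    # the matrix square root is the sum of their square roots.
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2)
                 - 2.0 * covmean_trace)

# Toy usage: statistics of 512 hypothetical 64-dim feature vectors.
feats = np.random.default_rng(0).normal(size=(512, 64))
mu, cov = feats.mean(axis=0), np.cov(feats, rowvar=False)
print(abs(fid_gaussian(mu, cov, mu, cov)) < 1e-6)  # -> True (identical stats)
```

In practice the means and covariances are computed from Inception-network activations of real and restored images; lower FID indicates restored faces whose feature statistics are closer to the real distribution.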
spellingShingle | Ahmed Cheikh Sidiya; Xuan Xu; Ning Xu; Xin Li | Degradation learning and Skip-Transformer for blind face restoration | Frontiers in Signal Processing | blind face restoration; degradation learning (DL); Skip-Transformer (ST); hybrid architecture design; face in the wild |
title | Degradation learning and Skip-Transformer for blind face restoration |
title_full | Degradation learning and Skip-Transformer for blind face restoration |
title_fullStr | Degradation learning and Skip-Transformer for blind face restoration |
title_full_unstemmed | Degradation learning and Skip-Transformer for blind face restoration |
title_short | Degradation learning and Skip-Transformer for blind face restoration |
title_sort | degradation learning and skip transformer for blind face restoration |
topic | blind face restoration; degradation learning (DL); Skip-Transformer (ST); hybrid architecture design; face in the wild |
url | https://www.frontiersin.org/articles/10.3389/frsip.2023.1106465/full |
work_keys_str_mv | AT ahmedcheikhsidiya degradationlearningandskiptransformerforblindfacerestoration AT xuanxu degradationlearningandskiptransformerforblindfacerestoration AT ningxu degradationlearningandskiptransformerforblindfacerestoration AT xinli degradationlearningandskiptransformerforblindfacerestoration |