Generative Adversarial Network for Image Super-Resolution Combining Texture Loss

Bibliographic Details
Main Authors: Yuning Jiang, Jinhua Li (College of Data Science and Software Engineering, Qingdao University, Qingdao 266071, China)
Format: Article
Language: English
Published: MDPI AG, 2020-03-01
Series: Applied Sciences
ISSN: 2076-3417
DOI: 10.3390/app10051729
Subjects: super-resolution reconstruction; generative adversarial networks; dense convolutional networks; texture loss; WGAN-GP
Online Access: https://www.mdpi.com/2076-3417/10/5/1729
Description:
Objective: Super-resolution reconstruction is an increasingly important area of computer vision. To alleviate the problems that super-resolution models based on generative adversarial networks are difficult to train and produce artifacts in their reconstructions, we propose a novel, improved algorithm.
Methods: This paper presents TSRGAN (Super-Resolution Generative Adversarial Network Combining Texture Loss), a model also built on generative adversarial networks in which both the generator and the discriminator are redefined. First, regarding the network structure, the generator is composed of residual dense blocks without superfluous batch normalization layers, and the Visual Geometry Group (VGG) 19 network is adopted as the basic framework of the discriminator. Second, regarding the loss function, a weighted combination of four terms (texture loss, perceptual loss, adversarial loss, and content loss) serves as the generator's objective. The texture loss encourages local information matching; the perceptual loss is strengthened by computing it on features taken before the activation layers; the adversarial loss is optimized following WGAN-GP (Wasserstein GAN with Gradient Penalty) theory; and the content loss ensures the accuracy of low-frequency information. During optimization, the target image is thus reconstructed from both its high-frequency and low-frequency components.
Results: Experiments show that our method raises the average Peak Signal-to-Noise Ratio of reconstructed images to 27.99 dB and the average Structural Similarity Index to 0.778 without sacrificing much speed, outperforming the comparison algorithms on these objective metrics. Moreover, TSRGAN markedly improves subjective visual quality, such as brightness and texture detail: it generates images with more realistic textures and more accurate brightness, which agree better with human visual evaluation.
Conclusions: Our changes to the network structure reduce the model's computation and stabilize training. In addition, the proposed generator loss provides stronger supervision for restoring realistic textures and achieving brightness consistency. The experimental results demonstrate the effectiveness and superiority of the TSRGAN algorithm.
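To make the four-term generator objective described above concrete, the following is a minimal PyTorch sketch, not the authors' code: the loss weights, the VGG19 layer cut (conv5_4 before its ReLU), the use of L1 distances, the Gram-matrix form of the texture term, and the critic interface are all assumptions chosen for illustration.

```python
# Illustrative sketch of a TSRGAN-style generator objective: content +
# perceptual (pre-activation VGG19 features) + texture (Gram matching) +
# adversarial (WGAN critic score), plus the WGAN-GP gradient penalty used
# when training the critic. Weights and layer indices are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models


class VGGFeatures(nn.Module):
    """VGG19 feature extractor cut off before an activation layer.

    layer_index=35 keeps layers 0..34 of vgg19.features, so the output is
    conv5_4 *before* the ReLU that follows it (an assumed choice)."""

    def __init__(self, layer_index=35):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.slice = nn.Sequential(*list(vgg.features.children())[:layer_index])
        for p in self.slice.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.slice(x)


def gram_matrix(feat):
    """Channel-correlation (Gram) matrix used to match local texture statistics."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def generator_loss(sr, hr, critic, vgg,
                   w_content=1e-2, w_percep=1.0, w_texture=1e-6, w_adv=5e-3):
    """Weighted sum of the four loss terms; the weights are hypothetical."""
    content = nn.functional.l1_loss(sr, hr)                  # low-frequency fidelity
    f_sr, f_hr = vgg(sr), vgg(hr)
    percep = nn.functional.l1_loss(f_sr, f_hr)               # pre-activation features
    texture = nn.functional.l1_loss(gram_matrix(f_sr), gram_matrix(f_hr))
    adv = -critic(sr).mean()                                  # WGAN generator term
    return (w_content * content + w_percep * percep
            + w_texture * texture + w_adv * adv)


def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty on the critic, evaluated on real/fake interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return lambda_gp * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```

Here `critic` stands for the VGG19-based discriminator returning one score per image; in a WGAN-GP setup it is trained with `-(critic(real).mean() - critic(fake).mean()) + gradient_penalty(...)`, while the generator minimizes `generator_loss`.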