Recurrently-Trained Super-Resolution

Bibliographic Details
Main Authors: Saem Park, Nojun Kwak
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
ISSN: 2169-3536
Subjects: Network reinforcement; super resolution; image enhancement; recurrent training
Online Access: https://ieeexplore.ieee.org/document/9343815/
Description: We are motivated by the observation that, for problems where inputs and outputs take the same form, such as image enhancement, a deep neural network can be reinforced by retraining it with a new target substituted for the original one. As an example, we introduce a new learning strategy for super-resolution (SR) that recurrently trains the same simple network. Unlike existing self-trained SR, which uses a single stage of learning but multiple network passes at test time, our method trains the same SR network multiple times with increasingly better targets and requires only a single inference at test time. At each stage of the proposed scheme, a new training target is obtained by applying the most recently trained SR network to the original image and downscaling the resulting SR image to normalize its size. Even though downscaling is involved, we argue that the downscaled SR image is a better target than the previous one. We show mathematically that, under a linear approximation, this process resembles unsharp masking and therefore sharpens the image; unlike unsharp masking, however, the proposed recurrent learning tends to converge to a specific target. By retraining the existing network toward a progressively enhanced target, the proposed method achieves an effect similar to applying SR multiple times without increasing implementation cost or inference time. To verify the superiority of our approach objectively, we propose to use VIQET MOS, a measure of image quality that requires no reference image. To the best of our knowledge, ours is the first work in image enhancement whose objective quality measure was validated by showing agreement with actual users' subjective evaluations. The proposed recurrent learning scheme makes existing SR algorithms more useful by clearly amplifying the effect of SR. Code is available at https://github.com/rtsr82/rtsr.git.
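
The training loop described above can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' implementation (their code is at https://github.com/rtsr82/rtsr.git): train_one_stage, downscale, and all argument names are hypothetical placeholders, and the choice to super-resolve the original images at every stage (rather than the previous targets) is one reading of the abstract.

    import copy
    from typing import Callable, List, Sequence

    def recurrent_training(
        net,                           # the single SR network, reused at every stage
        train_one_stage: Callable,     # ordinary supervised SR training (hypothetical)
        downscale: Callable,           # e.g. bicubic resize by 1/scale (hypothetical)
        lr_inputs: Sequence,           # low-resolution training inputs
        originals: Sequence,           # original target images (stage-0 targets)
        scale: int = 2,
        num_stages: int = 3,
    ):
        # Stage 0 trains toward the original images themselves.
        targets: List = list(originals)
        for stage in range(num_stages):
            # Train the SAME network against the current targets.
            train_one_stage(net, lr_inputs, targets)
            # Regenerate the targets: super-resolve each original image with the
            # freshly trained network, then downscale by the SR factor so the
            # new target has the same size as the old one. Per the abstract,
            # this downscaled SR image is a sharper, better target.
            frozen = copy.deepcopy(net)
            targets = [downscale(frozen(img), scale) for img in originals]
        # Only the final network is deployed: a single forward pass at test time.
        return net

For intuition on the unsharp-masking analogy: classical unsharp masking sharpens an image y as y + λ(y − blur(y)), and the abstract's linear-approximation argument is that one SR-then-downscale round behaves like such a sharpening operator, with the difference that the recurrently regenerated targets tend to converge rather than over-sharpen indefinitely.
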
DOI: 10.1109/ACCESS.2021.3056061
Citation: IEEE Access, vol. 9, pp. 23191-23201, 2021
Author Affiliations: Saem Park (https://orcid.org/0000-0002-9727-4272) and Nojun Kwak (https://orcid.org/0000-0002-1792-0327), Department of Intelligence and Information, Seoul National University, Seoul, Republic of Korea