A Study on the Super Resolution Combining Spatial Attention and Channel Attention


Bibliographic Details
Main Authors: Dongwoo Lee, Kyeongseok Jang, Soo Young Cho, Seunghyun Lee, Kwangchul Son
Format: Article
Language: English
Published: MDPI AG 2023-03-01
Series: Applied Sciences
Subjects: super resolution, channel attention, spatial attention, parallel structure
Online Access: https://www.mdpi.com/2076-3417/13/6/3408
author Dongwoo Lee
Kyeongseok Jang
Soo Young Cho
Seunghyun Lee
Kwangchul Son
collection DOAJ
description Existing CNN-based super resolution methods place little emphasis on high-frequency features, which results in poor performance on contours and textures. To solve this problem, this paper proposes single image super resolution using an attention mechanism that emphasizes high-frequency features, combined with a feature extraction process of differing depths. To emphasize high-frequency features in both the channel and spatial dimensions, the network is composed of CSBlocks, each combining channel attention and spatial attention. An attention block built from 10 CSBlocks is used for high-frequency feature extraction. To extract varied features with different degrees of emphasis from the limited low-resolution features, a parallel structure of branches with different numbers of attention blocks is used. The extracted features are upscaled through sub-pixel convolution to create the super resolution image, and training is performed with an <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><msub><mi>L</mi><mn>1</mn></msub></mrow></semantics></math></inline-formula> loss. Compared to existing deep learning methods, the proposed model shows improved results on several kinds of high-frequency detail, such as small object outlines and line patterns. In PSNR and SSIM, it shows roughly an 11% to 26% improvement over Bicubic interpolation and about a 1% to 2% improvement over VDSR and EDSR.
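The pipeline the abstract describes (channel attention, spatial attention, residual CSBlock, sub-pixel upscaling, L1 loss) can be sketched in NumPy. This is an illustrative simplification under stated assumptions, not the paper's implementation: the real CSBlock uses learned convolutional layers, whereas the gates here are parameter-free sigmoid poolings, and the names `cs_block`, `pixel_shuffle`, and `l1_loss` are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Gate each channel by a sigmoid of its global spatial average (x: (C, H, W))."""
    gate = sigmoid(x.mean(axis=(1, 2)))          # (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Gate each spatial position by a sigmoid of the channel-wise mean."""
    gate = sigmoid(x.mean(axis=0))               # (H, W)
    return x * gate[None, :, :]

def cs_block(x):
    # Stand-in for the paper's CSBlock: channel attention followed by
    # spatial attention, with a residual connection back to the input.
    return x + spatial_attention(channel_attention(x))

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)               # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

def l1_loss(pred, target):
    """Mean absolute error, i.e. the L1 training loss."""
    return np.abs(pred - target).mean()

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))            # 4 channels = 1 output channel * 2^2
up = pixel_shuffle(cs_block(feat), 2)            # (1, 16, 16) upscaled map
```

The `pixel_shuffle` step mirrors sub-pixel convolution's channel-to-space rearrangement: a convolution first expands the channel count by r^2, then this reshape/transpose trades those channels for an r-times larger spatial grid.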
format Article
id doaj.art-3e8abf68a45841c78a4eedc0783f8d31
institution Directory Open Access Journal
issn 2076-3417
language English
publishDate 2023-03-01
publisher MDPI AG
series Applied Sciences
spelling doaj.art-3e8abf68a45841c78a4eedc0783f8d31
DOI: 10.3390/app13063408 (Applied Sciences, vol. 13, no. 6, art. 3408; MDPI AG, 2023-03-01)
Affiliations:
Dongwoo Lee: Department of Plasma Bio Display, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
Kyeongseok Jang: Department of Plasma Bio Display, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
Soo Young Cho: Department of Information Contents, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
Seunghyun Lee: Ingenium College, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
Kwangchul Son: Department of Information Contents, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
title A Study on the Super Resolution Combining Spatial Attention and Channel Attention
topic super resolution
channel attention
spatial attention
parallel structure
url https://www.mdpi.com/2076-3417/13/6/3408