DANS: Deep Attention Network for Single Image Super-Resolution
Recent advances in image super-resolution have explored different attention mechanisms to achieve better quantitative and perceptual results. The critical challenge is to utilize the potential of attention mechanisms to reconstruct high-resolution images from their low-resolution counterparts. This research proposes a novel method that combines inception blocks, non-local sparse attention, and a U-Net architecture. The network incorporates non-local sparse attention on the backbone of a symmetric encoder-decoder U-Net structure, which helps to identify long-range dependencies and exploit contextual information while preserving global context. By incorporating skip connections, the network can leverage features at different scales, enhancing the reconstruction of high-frequency information. Additionally, we introduce inception blocks, allowing the model to capture information at various levels of abstraction and further enhance multi-scale representation learning. Experimental findings show that our approach produces superior quantitative results on metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual information fidelity (VIF), as well as visually appealing high-resolution reconstructions.
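To make the described design concrete, the sketch below shows, in PyTorch, the kind of architecture the abstract outlines: inception-style multi-branch blocks, a symmetric encoder-decoder with a skip connection, non-local attention at the bottleneck, and a pixel-shuffle upsampling head. Every name and hyperparameter here (`InceptionBlock`, `NonLocalAttention`, `DANSSketch`, `base_ch`, `scale`, the network depth) is an assumption made for illustration, and plain dense non-local attention stands in for the paper's non-local *sparse* attention; this is a minimal sketch, not the authors' implementation.

```python
# Illustrative sketch only: module names, channel widths, and depth are assumptions,
# not the published DANS network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 branches, concatenated and fused back to `ch` channels."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch, 1)
        self.b3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.b5 = nn.Conv2d(ch, ch, 5, padding=2)
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        y = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return x + F.relu(self.fuse(y))                    # residual keeps low-frequency content


class NonLocalAttention(nn.Module):
    """Plain (dense) non-local attention; the paper's sparse variant restricts the
    pairwise comparisons, which is omitted here for brevity."""
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv2d(ch, ch // 2, 1)
        self.phi = nn.Conv2d(ch, ch // 2, 1)
        self.g = nn.Conv2d(ch, ch // 2, 1)
        self.out = nn.Conv2d(ch // 2, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        k = self.phi(x).flatten(2)                         # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)           # (B, HW, C/2)
        attn = F.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # long-range pairwise weights
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                             # residual connection


class DANSSketch(nn.Module):
    """U-Net-style encoder/decoder with a skip connection, inception blocks at each
    level, non-local attention at the bottleneck, and a pixel-shuffle upsampler."""
    def __init__(self, base_ch=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, base_ch, 3, padding=1)
        self.enc1 = InceptionBlock(base_ch)
        self.down = nn.Conv2d(base_ch, base_ch, 3, stride=2, padding=1)
        self.enc2 = InceptionBlock(base_ch)
        self.bottleneck = NonLocalAttention(base_ch)
        self.up = nn.ConvTranspose2d(base_ch, base_ch, 4, stride=2, padding=1)
        self.dec1 = InceptionBlock(base_ch)
        self.tail = nn.Sequential(
            nn.Conv2d(base_ch, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),                        # rearranges channels into an HR grid
        )

    def forward(self, lr):
        s1 = self.enc1(self.head(lr))                      # full-resolution features (skip source)
        s2 = self.enc2(self.down(s1))                      # downsampled encoder features
        b = self.bottleneck(s2)                            # global context via non-local attention
        d = self.dec1(self.up(b) + s1)                     # skip connection restores spatial detail
        return self.tail(d)                                # upscale to the target resolution


if __name__ == "__main__":
    sr = DANSSketch()(torch.randn(1, 3, 48, 48))
    print(sr.shape)                                        # torch.Size([1, 3, 192, 192]) for scale=4
```

The skip connection (`self.up(b) + s1`) is what lets decoder features reuse full-resolution encoder features, which is the mechanism the abstract credits for recovering high-frequency information; pixel shuffle is a common sub-pixel upsampling choice for super-resolution heads.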
Main Authors: | Jagrati Talreja, Supavadee Aramvith, Takao Onoye |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Image super-resolution; inception blocks; non-local sparse attention; U-Net |
Online Access: | https://ieeexplore.ieee.org/document/10210219/ |
_version_ | 1827865894395576320 |
---|---|
author | Jagrati Talreja, Supavadee Aramvith, Takao Onoye |
author_facet | Jagrati Talreja, Supavadee Aramvith, Takao Onoye |
author_sort | Jagrati Talreja |
collection | DOAJ |
description | Recent advances in image super-resolution have explored different attention mechanisms to achieve better quantitative and perceptual results. The critical challenge is to utilize the potential of attention mechanisms to reconstruct high-resolution images from their low-resolution counterparts. This research proposes a novel method that combines inception blocks, non-local sparse attention, and a U-Net architecture. The network incorporates non-local sparse attention on the backbone of a symmetric encoder-decoder U-Net structure, which helps to identify long-range dependencies and exploit contextual information while preserving global context. By incorporating skip connections, the network can leverage features at different scales, enhancing the reconstruction of high-frequency information. Additionally, we introduce inception blocks, allowing the model to capture information at various levels of abstraction and further enhance multi-scale representation learning. Experimental findings show that our approach produces superior quantitative results on metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual information fidelity (VIF), as well as visually appealing high-resolution reconstructions. |
first_indexed | 2024-03-12T14:55:45Z |
format | Article |
id | doaj.art-954005ac2a8b42e7acc1545cdc331143 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-12T14:55:45Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-954005ac2a8b42e7acc1545cdc331143; 2023-08-14T23:00:52Z; eng; IEEE; IEEE Access; 2169-3536; 2023-01-01; vol. 11, pp. 84379-84397; doi:10.1109/ACCESS.2023.3302692; IEEE document 10210219; DANS: Deep Attention Network for Single Image Super-Resolution; Jagrati Talreja (https://orcid.org/0009-0009-4652-4196), Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand; Supavadee Aramvith (https://orcid.org/0000-0001-9840-3171), Multimedia Data Analytics and Processing Unit, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand; Takao Onoye (https://orcid.org/0000-0002-1894-2448), Graduate School of Information Science and Technology, Osaka University, Suita, Japan; Recent advances in image super-resolution have explored different attention mechanisms to achieve better quantitative and perceptual results. The critical challenge is to utilize the potential of attention mechanisms to reconstruct high-resolution images from their low-resolution counterparts. This research proposes a novel method that combines inception blocks, non-local sparse attention, and a U-Net architecture. The network incorporates non-local sparse attention on the backbone of a symmetric encoder-decoder U-Net structure, which helps to identify long-range dependencies and exploit contextual information while preserving global context. By incorporating skip connections, the network can leverage features at different scales, enhancing the reconstruction of high-frequency information. Additionally, we introduce inception blocks, allowing the model to capture information at various levels of abstraction and further enhance multi-scale representation learning. Experimental findings show that our approach produces superior quantitative results on metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual information fidelity (VIF), as well as visually appealing high-resolution reconstructions. https://ieeexplore.ieee.org/document/10210219/; Image super-resolution; inception blocks; non-local sparse attention; U-Net |
spellingShingle | Jagrati Talreja; Supavadee Aramvith; Takao Onoye; DANS: Deep Attention Network for Single Image Super-Resolution; IEEE Access; Image super-resolution; inception blocks; non-local sparse attention; U-Net |
title | DANS: Deep Attention Network for Single Image Super-Resolution |
title_full | DANS: Deep Attention Network for Single Image Super-Resolution |
title_fullStr | DANS: Deep Attention Network for Single Image Super-Resolution |
title_full_unstemmed | DANS: Deep Attention Network for Single Image Super-Resolution |
title_short | DANS: Deep Attention Network for Single Image Super-Resolution |
title_sort | dans deep attention network for single image super resolution |
topic | Image super-resolution; inception blocks; non-local sparse attention; U-Net |
url | https://ieeexplore.ieee.org/document/10210219/ |
work_keys_str_mv | AT jagratitalreja dansdeepattentionnetworkforsingleimagesuperresolution AT supavadeearamvith dansdeepattentionnetworkforsingleimagesuperresolution AT takaoonoye dansdeepattentionnetworkforsingleimagesuperresolution |