Recurrent Large Kernel Attention Network for Efficient Single Infrared Image Super-Resolution


Bibliographic Details
Main Authors: Gangping Liu, Shuaijun Zhou, Xiaxu Chen, Wenjie Yue, Jun Ke
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10366265/
Description
Summary: Infrared imaging has broad and important applications. However, infrared detector manufacturing technology limits detector resolution and hence the resolution of infrared images. In this work, we design a Recurrent Large Kernel Attention Neural Network (RLKA-Net) for single infrared image super-resolution (SR) and demonstrate its superior performance. Compared to other SR networks, RLKA-Net is a lightweight network capable of extracting spatial and temporal features from infrared images. To extract spatial features, we use multiple stacked Recurrent Learning Units (RLUs) to expand the network's receptive field, while the large kernel attention mechanism within each RLU produces attention maps at various granularities. To extract temporal features, RLKA-Net uses a recurrent learning strategy to maintain a persistent memory of extracted features, which contributes to more precise reconstruction results. Moreover, RLKA-Net employs an Attention Gate (AG) to reduce the number of parameters and expedite training. We demonstrate the efficacy of the Recurrent Learning Stages (RLS), the Large Kernel Attention Block (LKAB), and the Attention Gate mechanism through ablation studies. We test RLKA-Net on several infrared image datasets. The experimental results demonstrate that RLKA-Net achieves state-of-the-art performance compared to existing SR models. The code and models are available at https://github.com/ZedFm/RLKA-Net.
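The abstract does not give the exact design of the paper's Large Kernel Attention Block; the sketch below shows the standard large kernel attention decomposition this family of models builds on (a depthwise convolution, a depthwise dilated convolution, and a pointwise convolution whose output gates the input elementwise). The class name, channel count, and kernel/dilation sizes here are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Illustrative large kernel attention: a large receptive field is
    approximated by stacking a 5x5 depthwise conv, a 7x7 depthwise conv
    with dilation 3 (effective extent 19x19), and a 1x1 conv. The result
    is used as an attention map that gates the input elementwise."""

    def __init__(self, channels: int):
        super().__init__()
        # Depthwise conv: local spatial context
        self.dw = nn.Conv2d(channels, channels, kernel_size=5,
                            padding=2, groups=channels)
        # Depthwise dilated conv: long-range spatial context
        self.dw_dilated = nn.Conv2d(channels, channels, kernel_size=7,
                                    padding=9, dilation=3, groups=channels)
        # Pointwise conv: mixes channels to form the attention map
        self.pw = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x  # elementwise gating by the attention map
```

All three convolutions preserve spatial size, so the block can be dropped into a feature-extraction stage without changing tensor shapes; the paper's RLUs would additionally carry recurrent state across stages, which this sketch omits.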
ISSN:2169-3536