Summary: | At present, deep residual networks are widely used in image super-resolution and have proved able to achieve good reconstruction results. However, existing super-resolution algorithms based on deep residual networks learn the feature information of different regions indiscriminately and make low use of feature information, which makes it difficult to further improve reconstruction quality. To address these problems, a novel super-resolution reconstruction network based on residual attention and multi-scale feature fusion (RAMF) is proposed in this paper. First, a lightweight multi-scale residual module (LMRM) is proposed for the deep feature extraction stage; it extracts multi-scale features and cross-connects them to enrich the information of different receptive fields. Then, to make fuller use of feature information, a dense feature fusion structure is designed to fuse the output features of all LMRMs. Finally, a residual spatial attention module (RSAM) is proposed to selectively learn and better retain high-frequency feature information, thereby improving the reconstruction result. Experiments and comparisons with current state-of-the-art methods on four benchmark datasets demonstrate that the proposed RAMF achieves better reconstruction with fewer parameters, lower computational complexity, faster processing, and higher objective evaluation metrics. In particular, the peak signal-to-noise ratio measured on the Urban100 dataset increases by 0.13 dB on average, and the reconstructed images show better visual quality and richer texture details.
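The abstract describes RSAM only at a high level: a spatial attention map reweights feature positions so high-frequency regions are emphasised, while a residual connection preserves the rest. The following is a minimal NumPy sketch of that general idea; the channel pooling, the fixed (unlearned) gating, and the tensor shapes are all assumptions for illustration, not the paper's actual layer configuration.

```python
import numpy as np

def residual_spatial_attention(x):
    """Hedged sketch of a residual spatial attention block.

    x: feature map of shape (C, H, W). The exact RSAM layers are not
    given in the abstract; this only illustrates the pattern
    "spatially reweight features, then add a residual connection".
    """
    # Pool along the channel axis to obtain per-pixel statistics,
    # a common way to build a spatial attention map.
    avg_map = x.mean(axis=0, keepdims=True)   # shape (1, H, W)
    max_map = x.max(axis=0, keepdims=True)    # shape (1, H, W)
    # A real module would pass these through a learned convolution;
    # here a fixed sum stands in as a placeholder (assumption).
    attn = 1.0 / (1.0 + np.exp(-(avg_map + max_map)))  # sigmoid -> (0, 1)
    # Reweight spatial positions, then add the identity shortcut so
    # the original feature content is retained alongside the
    # attention-emphasised high-frequency regions.
    return x * attn + x

# Usage: the output keeps the input shape, as a drop-in block must.
features = np.ones((4, 8, 8))
out = residual_spatial_attention(features)
```

Because of the residual shortcut, the block can never suppress a position to zero; attention only modulates how strongly each spatial location is amplified, which matches the stated goal of retaining rather than discarding feature information.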