Anisotropic Weighted Total Variation Feature Fusion Network for Remote Sensing Image Denoising


Bibliographic Details
Main Authors: Huiqing Qi, Shengli Tan, Zhichao Li
Format: Article
Language: English
Published: MDPI AG 2022-12-01
Series: Remote Sensing
Online Access: https://www.mdpi.com/2072-4292/14/24/6300
Description
Summary: Remote sensing images are widely applied in instance segmentation and object recognition; however, they often suffer from noise, which degrades the performance of subsequent applications. Previous image denoising works have produced restored images without preserving detailed texture. To address this issue, we propose a novel model for remote sensing image denoising, called the anisotropic weighted total variation feature fusion network (AWTV<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msup><mi>F</mi><mn>2</mn></msup></semantics></math></inline-formula>Net), consisting of four novel modules (WTV-Net, SOSB, AuEncoder, and FB). AWTV<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msup><mi>F</mi><mn>2</mn></msup></semantics></math></inline-formula>Net combines traditional total variation with a deep neural network, improving the denoising ability of the proposed approach. Our proposed method is evaluated with the PSNR and SSIM metrics on three benchmark datasets (NWPU, PatternNet, UCL), and the experimental results show that AWTV<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msup><mi>F</mi><mn>2</mn></msup></semantics></math></inline-formula>Net achieves PSNR/SSIM values 0.12∼19.39 dB/0.0237∼0.5362 higher than state-of-the-art (SoTA) algorithms on the Gaussian noise removal and mixed noise removal tasks. Meanwhile, our model preserves more detailed texture features. The SSEQ, BLIINDS-II, and BRISQUE values of AWTV<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msup><mi>F</mi><mn>2</mn></msup></semantics></math></inline-formula>Net on the three real-world datasets (AVIRIS Indian Pines, ROSIS University of Pavia, HYDICE Urban) are 3.94∼12.92 higher, 8.33∼27.5 higher, and 2.2∼5.55 lower than those of the compared methods, respectively.
The proposed framework can guide the pre-processing of input images in subsequent remote sensing applications.
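The abstract's two key quantities are the anisotropic (weighted) total variation term and the PSNR metric used for evaluation. The sketch below illustrates both on NumPy arrays; the function names, the per-direction weights `wx`/`wy`, and the default peak value of 255 are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def anisotropic_weighted_tv(u, wx=1.0, wy=1.0):
    # Anisotropic TV: weighted sum of absolute horizontal and vertical
    # finite differences (the scalar weights wx, wy are an assumption;
    # the paper learns spatially varying weights inside WTV-Net).
    dx = np.abs(np.diff(u, axis=1))  # horizontal differences
    dy = np.abs(np.diff(u, axis=0))  # vertical differences
    return wx * dx.sum() + wy * dy.sum()

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB (max_val=255 assumes 8-bit images).
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A smoother denoised image has a smaller TV value, while a higher PSNR (and SSIM) against the clean reference indicates better restoration; the dB gains quoted in the summary are differences in this PSNR value.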
ISSN: 2072-4292