Feature-Guided SAR-to-Optical Image Translation

Bibliographic Details
Main Authors: Jiexin Zhang, Jianjiang Zhou, Xiwen Lu
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: SAR-to-optical image translation; feature extraction; high-frequency noise; generative adversarial networks
Online Access: https://ieeexplore.ieee.org/document/9063491/
collection DOAJ
description The powerful performance of Generative Adversarial Networks (GANs) in image-to-image translation has been well demonstrated in recent years. However, most methods focus on an isolated image translation task. Because of the complex scenes in optical images and the high-frequency speckle noise in SAR images, the quality of the generated images is often unsatisfactory. In this paper, a feature-guided method for SAR-to-optical image translation is proposed to better account for the unique attributes of the images. Specifically, in view of the diversity of structure and texture features, the VGG-19 network is used as the feature extractor in this cross-modal image translation task. To ensure that multilayer features are captured during image generation, feature matching is carried out on different layers. A loss function based on the Discrete Cosine Transform is designed to filter out high-frequency noise. The generated images show better feature preservation and noise reduction, and achieve higher Image Quality Assessment scores than images generated by several well-known methods. The superiority of our algorithm is also demonstrated by applying it to different networks.
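
The description above names two guidance terms: multilayer feature matching against a pretrained VGG-19 and a Discrete Cosine Transform (DCT) based loss that suppresses high-frequency speckle noise. The PyTorch sketch below is not the authors' released code; the chosen VGG-19 layers, the low-frequency cut-off fraction, and the loss weights in the closing comment are illustrative assumptions.

import math

import torch
import torch.nn as nn
import torchvision.models as models


class VGG19FeatureMatchingLoss(nn.Module):
    """L1 distance between VGG-19 activations of the generated and target optical
    images, summed over several layers so that both structure and texture features
    guide the generator. The layer selection is an assumption, not the paper's."""

    def __init__(self, layer_ids=(3, 8, 17, 26)):  # relu1_2, relu2_2, relu3_4, relu4_4
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False          # the extractor is fixed, never trained
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, generated, target):
        # Inputs are assumed to be 3-channel images already ImageNet-normalized.
        loss, x, y = 0.0, generated, target
        for idx, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if idx in self.layer_ids:
                loss = loss + torch.mean(torch.abs(x - y))
            if idx >= max(self.layer_ids):   # no need to run deeper layers
                break
        return loss


def dct_2d(x):
    """Orthonormal 2-D DCT-II of a (B, C, H, W) tensor via matrix multiplication."""
    def dct_matrix(n, device, dtype):
        k = torch.arange(n, device=device, dtype=dtype).unsqueeze(1)
        i = torch.arange(n, device=device, dtype=dtype).unsqueeze(0)
        m = math.sqrt(2.0 / n) * torch.cos(math.pi * (2 * i + 1) * k / (2 * n))
        m[0] = m[0] / math.sqrt(2.0)
        return m

    h, w = x.shape[-2:]
    dh = dct_matrix(h, x.device, x.dtype)
    dw = dct_matrix(w, x.device, x.dtype)
    return dh @ x @ dw.t()


def dct_highfreq_loss(generated, target, low_freq_frac=0.25):
    """Penalise differences only in the high-frequency DCT coefficients; the
    low-frequency block (top-left corner) is masked out. The cut-off fraction
    is an assumed hyperparameter, not taken from the paper."""
    g, t = dct_2d(generated), dct_2d(target)
    h, w = generated.shape[-2:]
    mask = torch.ones(h, w, device=generated.device, dtype=generated.dtype)
    mask[: int(h * low_freq_frac), : int(w * low_freq_frac)] = 0.0
    return torch.mean(torch.abs((g - t) * mask))


# Example of combining the guidance terms with an ordinary GAN objective
# (the weights 10.0 and 1.0 are placeholders, not values from the paper):
# total_g_loss = adversarial_loss + 10.0 * feature_matching_loss + 1.0 * dct_loss
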
issn 2169-3536
Citation: IEEE Access, vol. 8, pp. 70925-70937, 2020. DOI: 10.1109/ACCESS.2020.2987105 (IEEE article number 9063491).
Authors: Jiexin Zhang (ORCID: 0000-0001-6945-8827), Jianjiang Zhou, Xiwen Lu
Affiliation: Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China (all three authors)