Remote Sensing Cross-Modal Retrieval by Deep Image-Voice Hashing

Remote sensing image retrieval aims to find images of interest among immense volumes of remote sensing data, which is an enormous challenge. Using voice directly for human–computer interaction is more convenient and intelligent. In this article, a deep image-voice hashing (DIVH) method is proposed for remote sensing image-voice retrieval. First, the framework is composed of an image and a voice feature-learning subnetwork. Then, hash code learning is leveraged to further improve retrieval efficiency and reduce the memory footprint: it maps the deep features of images and voices into a common Hamming space. Finally, an image-voice pairwise loss is proposed that considers both similarity preservation and the balance of the hash codes. The similarity-preserving term of the loss improves the preservation of similarity from the original data space to the Hamming space, while the balance-controlling term improves the discrimination of the binary codes. This unified cross-modal feature and hash code learning framework significantly reduces the semantic gap between the two modalities. Experiments demonstrate that the proposed DIVH method achieves better retrieval performance than other state-of-the-art remote sensing image-voice retrieval methods.

Bibliographic Details
Main Authors: Yichao Zhang, Xiangtao Zheng, Xiaoqiang Lu
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects: Convolutional neural network (CNN); cross-modal retrieval; deep hashing; hash code
Online Access: https://ieeexplore.ieee.org/document/9928415/
author Yichao Zhang
Xiangtao Zheng
Xiaoqiang Lu
collection DOAJ
description Remote sensing image retrieval aims to find images of interest among immense volumes of remote sensing data, which is an enormous challenge. Using voice directly for human–computer interaction is more convenient and intelligent. In this article, a deep image-voice hashing (DIVH) method is proposed for remote sensing image-voice retrieval. First, the framework is composed of an image and a voice feature-learning subnetwork. Then, hash code learning is leveraged to further improve retrieval efficiency and reduce the memory footprint: it maps the deep features of images and voices into a common Hamming space. Finally, an image-voice pairwise loss is proposed that considers both similarity preservation and the balance of the hash codes. The similarity-preserving term of the loss improves the preservation of similarity from the original data space to the Hamming space, while the balance-controlling term improves the discrimination of the binary codes. This unified cross-modal feature and hash code learning framework significantly reduces the semantic gap between the two modalities. Experiments demonstrate that the proposed DIVH method achieves better retrieval performance than other state-of-the-art remote sensing image-voice retrieval methods.
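The record only summarizes the pairwise loss at a high level. As an illustration, the following is a minimal sketch of a cross-modal pairwise hashing loss with a similarity-preserving term and a balance term, written in PyTorch; the function name, arguments, and weighting below are hypothetical stand-ins and are not taken from the paper.

# Hypothetical sketch only: the paper's exact formulation is not given in this
# record, so the terms below are generic stand-ins for the ideas it describes.
import torch
import torch.nn.functional as F

def image_voice_pairwise_loss(img_codes, voice_codes, similarity, alpha=1.0):
    # img_codes, voice_codes: (batch, n_bits) relaxed hash codes in [-1, 1],
    # e.g. tanh outputs of the image and voice feature-learning subnetworks.
    # similarity: (batch, batch) matrix with 1 for matching image-voice pairs.
    n_bits = img_codes.size(1)

    # Similarity-preserving term: code inner products should agree with the
    # pairwise similarity matrix (a common negative log-likelihood surrogate).
    inner = img_codes @ voice_codes.t() / 2.0
    sim_term = (F.softplus(inner) - similarity * inner).mean()

    # Balance term: push each bit toward being +1 and -1 equally often across
    # the batch, which helps the discrimination of the binary codes.
    bal_term = (img_codes.mean(dim=0).pow(2).sum()
                + voice_codes.mean(dim=0).pow(2).sum()) / n_bits

    return sim_term + alpha * bal_term

In such a scheme, binary codes for retrieval would be obtained by taking the sign of the relaxed outputs, and cross-modal matches ranked by Hamming distance in the shared code space.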
first_indexed 2024-04-11T07:07:50Z
format Article
id doaj.art-74f8638329764c55bfdb1909b27a7fbe
institution Directory Open Access Journal
issn 2151-1535
language English
last_indexed 2024-04-11T07:07:50Z
publishDate 2022-01-01
publisher IEEE
record_format Article
series IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
doi 10.1109/JSTARS.2022.3216333
volume 15
pages 9327-9338
orcid Xiangtao Zheng: https://orcid.org/0000-0002-8398-6324
orcid Xiaoqiang Lu: https://orcid.org/0000-0002-7037-5188
affiliation Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, China (all three authors)
title Remote Sensing Cross-Modal Retrieval by Deep Image-Voice Hashing
topic Convolutional neural network (CNN)
cross-modal retrieval
deep hashing
hash code
url https://ieeexplore.ieee.org/document/9928415/