Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval
Cross-modal hashing methods have received wide attention in cross-modal retrieval owing to their advantages in computational efficiency and storage cost. However, most existing deep cross-modal hashing methods cannot employ both intra-modal and inter-modal similarities to guide the learning of hash...
Main Authors: | Guoyou Li, Qingjun Peng, Dexu Zou, Jinyue Yang, Zhenqiu Shu |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A. 2023-04-01 |
Series: | Frontiers in Physics |
Subjects: | cross-modal fusion; similarity semantic preserving; quantization loss; deep hashing; intra-modal similarity; inter-modal similarity |
Online Access: | https://www.frontiersin.org/articles/10.3389/fphy.2023.1194573/full |
_version_ | 1797837864509112320 |
---|---|
author | Guoyou Li Qingjun Peng Dexu Zou Jinyue Yang Zhenqiu Shu |
author_facet | Guoyou Li Qingjun Peng Dexu Zou Jinyue Yang Zhenqiu Shu |
author_sort | Guoyou Li |
collection | DOAJ |
description | Cross-modal hashing methods have received wide attention in cross-modal retrieval owing to their advantages in computational efficiency and storage cost. However, most existing deep cross-modal hashing methods cannot employ both intra-modal and inter-modal similarities to guide the learning of hash codes, and they also ignore the quantization loss of hash codes. To solve these problems, we propose a fine-grained similarity semantic preserving deep hashing (FSSPDH) method for cross-modal retrieval. Firstly, the proposed method learns different hash codes for different modalities to preserve the intrinsic property of each modality. Secondly, a fine-grained similarity matrix is constructed using labels and data features, which maintains both inter-modal and intra-modal similarities. In addition, a quantization loss is used to learn hash codes and thus effectively reduce the information loss incurred during the quantization procedure. Extensive experiments on three public datasets demonstrate the advantages of the proposed FSSPDH method. |
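The two ingredients named in the abstract (a fine-grained similarity matrix built from labels and data features, and a quantization loss between continuous and binary codes) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the blending weight `alpha`, and the specific choice of cosine similarity are our own assumptions.

```python
import numpy as np

def fine_grained_similarity(labels, feats_img, feats_txt, alpha=0.5):
    """Blend label-level and feature-level similarities (hypothetical sketch).

    labels:    (n, c) multi-hot label matrix
    feats_img: (n, d1) image-modality features
    feats_txt: (n, d2) text-modality features
    """
    # Coarse semantic similarity: 1 if two samples share at least one label.
    sem = (labels @ labels.T > 0).astype(float)

    # Feature-level cosine similarity within each modality.
    def cosine(f):
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        return f @ f.T

    # Average the two intra-modal affinities, then refine the label
    # similarity with them to obtain a fine-grained matrix.
    feat = 0.5 * (cosine(feats_img) + cosine(feats_txt))
    return alpha * sem + (1 - alpha) * feat

def quantization_loss(H):
    """Penalize the gap between continuous codes H and their binarization."""
    B = np.sign(H)                 # binary codes in {-1, 0, +1}
    return np.mean((H - B) ** 2)   # small when H is already near-binary
```

In a full method, the fine-grained matrix would supervise the pairwise agreement of the learned codes, while the quantization term keeps the network outputs close to ±1 so that the final `sign` step discards little information.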
first_indexed | 2024-04-09T15:32:37Z |
format | Article |
id | doaj.art-bbd3bd0b45204ec596ff0a7a15fb3f88 |
institution | Directory Open Access Journal |
issn | 2296-424X |
language | English |
last_indexed | 2024-04-09T15:32:37Z |
publishDate | 2023-04-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Physics |
spelling | doaj.art-bbd3bd0b45204ec596ff0a7a15fb3f882023-04-28T05:41:47ZengFrontiers Media S.A.Frontiers in Physics2296-424X2023-04-011110.3389/fphy.2023.11945731194573Fine-grained similarity semantic preserving deep hashing for cross-modal retrievalGuoyou Li0Qingjun Peng1Dexu Zou2Jinyue Yang3Zhenqiu Shu4Yunnan Power Grid Corporation, Kunming, ChinaElectric Power Research Institute, Yunnan Power Grid Corporation, Kunming, ChinaElectric Power Research Institute, Yunnan Power Grid Corporation, Kunming, ChinaElectric Power Research Institute, Yunnan Power Grid Corporation, Kunming, ChinaFaculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, ChinaCross-modal hashing methods have received wide attention in cross-modal retrieval owing to their advantages in computational efficiency and storage cost. However, most existing deep cross-modal hashing methods cannot employ both intra-modal and inter-modal similarities to guide the learning of hash codes, and they also ignore the quantization loss of hash codes. To solve these problems, we propose a fine-grained similarity semantic preserving deep hashing (FSSPDH) method for cross-modal retrieval. Firstly, the proposed method learns different hash codes for different modalities to preserve the intrinsic property of each modality. Secondly, a fine-grained similarity matrix is constructed using labels and data features, which maintains both inter-modal and intra-modal similarities. In addition, a quantization loss is used to learn hash codes and thus effectively reduce the information loss incurred during the quantization procedure. Extensive experiments on three public datasets demonstrate the advantages of the proposed FSSPDH method.https://www.frontiersin.org/articles/10.3389/fphy.2023.1194573/fullcross-modal fusionsimilarity semantic preservingquantization lossdeep hashingintra-modal similarityinter-modal similarity |
spellingShingle | Guoyou Li Qingjun Peng Dexu Zou Jinyue Yang Zhenqiu Shu Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval Frontiers in Physics cross-modal fusion similarity semantic preserving quantization loss deep hashing intra-modal similarity inter-modal similarity |
title | Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval |
title_full | Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval |
title_fullStr | Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval |
title_full_unstemmed | Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval |
title_short | Fine-grained similarity semantic preserving deep hashing for cross-modal retrieval |
title_sort | fine grained similarity semantic preserving deep hashing for cross modal retrieval |
topic | cross-modal fusion similarity semantic preserving quantization loss deep hashing intra-modal similarity inter-modal similarity |
url | https://www.frontiersin.org/articles/10.3389/fphy.2023.1194573/full |
work_keys_str_mv | AT guoyouli finegrainedsimilaritysemanticpreservingdeephashingforcrossmodalretrieval AT qingjunpeng finegrainedsimilaritysemanticpreservingdeephashingforcrossmodalretrieval AT dexuzou finegrainedsimilaritysemanticpreservingdeephashingforcrossmodalretrieval AT jinyueyang finegrainedsimilaritysemanticpreservingdeephashingforcrossmodalretrieval AT zhenqiushu finegrainedsimilaritysemanticpreservingdeephashingforcrossmodalretrieval |