Deep Multi-Semantic Fusion-Based Cross-Modal Hashing
Owing to its low storage and search costs, cross-modal hashing has received much research interest in the big data era. With the application of deep learning, cross-modal representation capabilities have improved markedly. However, existing deep hashing methods cannot perform multi-label semantic learning and cross-modal similarity learning simultaneously: potential semantic correlations among multimedia data are not fully mined from multi-category labels, which in turn degrades how well the cross-modal hash codes preserve the original similarities. To this end, this paper proposes deep multi-semantic fusion-based cross-modal hashing (DMSFH), which uses two deep neural networks to extract cross-modal features and a multi-label semantic fusion method to improve cross-modal consistent semantic discrimination learning. Moreover, graph regularization is combined with inter-modal and intra-modal pairwise losses to preserve nearest-neighbor relationships between data in the Hamming subspace. Thus, DMSFH not only retains semantic similarity between multi-modal data, but also integrates multi-label information into modality learning. Extensive experiments on two commonly used benchmark datasets show that DMSFH is competitive with state-of-the-art methods.
Main Authors: | Xinghui Zhu, Liewu Cai, Zhuoyang Zou, Lei Zhu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-01-01 |
Series: | Mathematics |
Subjects: | cross-modal hashing; semantic label information; multi-label semantic fusion; graph regularization; deep neural network |
Online Access: | https://www.mdpi.com/2227-7390/10/3/430 |
_version_ | 1827660034980446208 |
author | Xinghui Zhu, Liewu Cai, Zhuoyang Zou, Lei Zhu |
author_sort | Xinghui Zhu |
collection | DOAJ |
description | Owing to its low storage and search costs, cross-modal hashing has received much research interest in the big data era. With the application of deep learning, cross-modal representation capabilities have improved markedly. However, existing deep hashing methods cannot perform multi-label semantic learning and cross-modal similarity learning simultaneously: potential semantic correlations among multimedia data are not fully mined from multi-category labels, which in turn degrades how well the cross-modal hash codes preserve the original similarities. To this end, this paper proposes deep multi-semantic fusion-based cross-modal hashing (DMSFH), which uses two deep neural networks to extract cross-modal features and a multi-label semantic fusion method to improve cross-modal consistent semantic discrimination learning. Moreover, graph regularization is combined with inter-modal and intra-modal pairwise losses to preserve nearest-neighbor relationships between data in the Hamming subspace. Thus, DMSFH not only retains semantic similarity between multi-modal data, but also integrates multi-label information into modality learning. Extensive experiments on two commonly used benchmark datasets show that DMSFH is competitive with state-of-the-art methods. |
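The description names the ingredients of the method: two modality-specific networks, a label-derived similarity matrix, inter-/intra-modal pairwise losses, and a graph regularizer that preserves nearest-neighbor structure in the Hamming subspace. As a rough, hypothetical sketch of how such pieces typically fit together (the shapes, variable names, and the likelihood-style pairwise loss are assumptions for illustration, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 16                                   # samples, hash-code length
F = rng.standard_normal((n, k))                # image-network outputs (real-valued)
G = rng.standard_normal((n, k))                # text-network outputs
S = (rng.random((n, n)) > 0.5).astype(float)   # label-based pairwise similarity matrix

def pairwise_loss(U, V, S):
    """Negative log-likelihood of pairwise similarities between two modalities."""
    theta = 0.5 * U @ V.T                      # pairwise inner products
    return float(np.sum(np.log1p(np.exp(theta)) - S * theta))

def graph_regularizer(U, S):
    """Laplacian smoothness: similar items are pushed toward nearby codes."""
    D = np.diag(S.sum(axis=1))
    L = D - S                                  # graph Laplacian of the similarity graph
    return float(np.trace(U.T @ L @ U))

# inter-modal pairwise loss plus graph regularization on each modality
loss = pairwise_loss(F, G, S) + 0.1 * (graph_regularizer(F, S) + graph_regularizer(G, S))

B = np.sign(F + G)                             # joint binary codes in Hamming space
d = int((B[0] != B[1]).sum())                  # Hamming distance = cheap XOR-style search
```

The last two lines illustrate why hashing keeps storage and search cheap: each item collapses to k bits, and ranking reduces to counting differing bits.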
first_indexed | 2024-03-09T23:32:34Z |
format | Article |
id | doaj.art-d316c86e8a6c45b181b122f6d5126374 |
institution | Directory Open Access Journal |
issn | 2227-7390 |
language | English |
last_indexed | 2024-03-09T23:32:34Z |
publishDate | 2022-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Mathematics |
spelling | doaj.art-d316c86e8a6c45b181b122f6d5126374 |
doi | 10.3390/math10030430 |
citation | Mathematics, Vol. 10, Iss. 3, Art. 430 (2022) |
author_affiliations | Xinghui Zhu, Liewu Cai, Zhuoyang Zou, Lei Zhu: College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China |
title | Deep Multi-Semantic Fusion-Based Cross-Modal Hashing |
topic | cross-modal hashing; semantic label information; multi-label semantic fusion; graph regularization; deep neural network |
url | https://www.mdpi.com/2227-7390/10/3/430 |