Alleviating the inconsistency of multimodal data in cross-modal retrieval
With the explosive growth of multimodal Internet data, cross-modal hashing retrieval has become crucial for semantically searching instances across different modalities. However, existing cross-modal retrieval methods rely on assumptions of perfect consistency between modalities and between modaliti...
Main Authors: Li, Tieying; Yang, Xiaochun; Ke, Yiping; Wang, Bin; Liu, Yinan; Xu, Jiaxing
Other Authors: College of Computing and Data Science
Format: Conference Paper
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/180605
Similar Items
- Learning a cross-modal hashing network for multimedia search
  by: Tan, Yap Peng, et al. Published: (2018)
- Is a high tone pointy? Speakers of different languages match Mandarin Chinese tones to visual shapes differently
  by: Shang, Nan, et al. Published: (2018)
- When does maluma/takete fail? Two key failures and a meta-analysis suggest that phonology and phonotactics matter
  by: Styles, Suzy J., et al. Published: (2019)
- Implicit Association Test (IAT) studies investigating pitch-shape audiovisual cross-modal associations across language groups
  by: Shang, Nan, et al. Published: (2023)
- Learning language to symbol and language to vision mapping for visual grounding
  by: He, Su, et al. Published: (2022)