CMBF: Cross-Modal-Based Fusion Recommendation Algorithm

A recommendation system is often used to recommend items that may be of interest to users. One of the main challenges is that the scarcity of actual interaction data between users and items restricts the performance of recommendation systems. To address this problem, multi-modal technologies have been used to expand the available information. However, existing multi-modal recommendation algorithms extract features from each modality separately and simply concatenate them to predict the recommendation results. This fusion method cannot fully mine the relevance among multi-modal features and loses the relationships between different modalities, which degrades the prediction results. In this paper, we propose a Cross-Modal-Based Fusion Recommendation Algorithm (CMBF) that captures both single-modal and cross-modal features. Our algorithm uses a novel cross-modal fusion method to fuse the multi-modal features thoroughly and learn the cross information between different modalities. We evaluate our algorithm on two datasets, MovieLens and Amazon. Experiments show that our method achieves the best performance compared with other recommendation algorithms. We also design an ablation study to verify that our cross-modal fusion method improves the prediction results.
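To make the fusion idea concrete, the following is a minimal, generic sketch of cross-modal attention fusion in PyTorch: self-attention over each modality (single-modal features), cross-attention in both directions (cross-modal features), and a small prediction head over the fused representation. This illustrates the general technique named in the abstract and keywords only; it is not the authors' exact CMBF architecture, and the module name CrossModalFusion, the feature dimensions, the pooling, and the prediction head are assumptions for illustration.

    # Sketch only (not the published CMBF model): two modality feature sequences
    # are fused with self-attention plus bidirectional cross-attention, and the
    # fused vector is mapped to an interaction score.
    import torch
    import torch.nn as nn

    class CrossModalFusion(nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            # self-attention captures single-modal features
            self.self_a = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.self_b = nn.MultiheadAttention(dim, heads, batch_first=True)
            # cross-attention lets each modality attend to the other
            self.cross_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

        def forward(self, feat_a, feat_b):
            # feat_a: (batch, len_a, dim), e.g. visual features
            # feat_b: (batch, len_b, dim), e.g. text features
            a, _ = self.self_a(feat_a, feat_a, feat_a)
            b, _ = self.self_b(feat_b, feat_b, feat_b)
            a2b, _ = self.cross_ab(a, b, b)   # modality A queries modality B
            b2a, _ = self.cross_ba(b, a, a)   # modality B queries modality A
            fused = torch.cat([a2b.mean(dim=1), b2a.mean(dim=1)], dim=-1)
            return torch.sigmoid(self.head(fused)).squeeze(-1)  # predicted score per sample

    # usage: scores = CrossModalFusion()(torch.randn(8, 10, 64), torch.randn(8, 20, 64))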

Bibliographic Details
Main Authors: Xi Chen, Yangsiyi Lu, Yuehai Wang, Jianyi Yang
Format: Article
Language: English
Published: MDPI AG 2021-08-01
Series: Sensors
Subjects: recommendation systems; multi-modal algorithm; cross-modal fusion; attention mechanism
Online Access: https://www.mdpi.com/1424-8220/21/16/5275
_version_ 1797522190625669120
author Xi Chen
Yangsiyi Lu
Yuehai Wang
Jianyi Yang
author_facet Xi Chen
Yangsiyi Lu
Yuehai Wang
Jianyi Yang
author_sort Xi Chen
collection DOAJ
description A recommendation system is often used to recommend items that may be of interest to users. One of the main challenges is that the scarcity of actual interaction data between users and items restricts the performance of recommendation systems. To address this problem, multi-modal technologies have been used to expand the available information. However, existing multi-modal recommendation algorithms extract features from each modality separately and simply concatenate them to predict the recommendation results. This fusion method cannot fully mine the relevance among multi-modal features and loses the relationships between different modalities, which degrades the prediction results. In this paper, we propose a Cross-Modal-Based Fusion Recommendation Algorithm (CMBF) that captures both single-modal and cross-modal features. Our algorithm uses a novel cross-modal fusion method to fuse the multi-modal features thoroughly and learn the cross information between different modalities. We evaluate our algorithm on two datasets, MovieLens and Amazon. Experiments show that our method achieves the best performance compared with other recommendation algorithms. We also design an ablation study to verify that our cross-modal fusion method improves the prediction results.
first_indexed 2024-03-10T08:24:58Z
format Article
id doaj.art-ff03e4349e6a4aa8af02ad5d9849bcd5
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-10T08:24:58Z
publishDate 2021-08-01
publisher MDPI AG
record_format Article
series Sensors
spelling doaj.art-ff03e4349e6a4aa8af02ad5d9849bcd5 | 2023-11-22T09:36:52Z | eng | MDPI AG | Sensors | 1424-8220 | 2021-08-01 | vol. 21, no. 16, art. 5275 | doi: 10.3390/s21165275
CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
Xi Chen, Yangsiyi Lu, Yuehai Wang, Jianyi Yang (College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310063, China)
A recommendation system is often used to recommend items that may be of interest to users. One of the main challenges is that the scarcity of actual interaction data between users and items restricts the performance of recommendation systems. To address this problem, multi-modal technologies have been used to expand the available information. However, existing multi-modal recommendation algorithms extract features from each modality separately and simply concatenate them to predict the recommendation results. This fusion method cannot fully mine the relevance among multi-modal features and loses the relationships between different modalities, which degrades the prediction results. In this paper, we propose a Cross-Modal-Based Fusion Recommendation Algorithm (CMBF) that captures both single-modal and cross-modal features. Our algorithm uses a novel cross-modal fusion method to fuse the multi-modal features thoroughly and learn the cross information between different modalities. We evaluate our algorithm on two datasets, MovieLens and Amazon. Experiments show that our method achieves the best performance compared with other recommendation algorithms. We also design an ablation study to verify that our cross-modal fusion method improves the prediction results.
https://www.mdpi.com/1424-8220/21/16/5275
Keywords: recommendation systems; multi-modal algorithm; cross-modal fusion; attention mechanism
spellingShingle Xi Chen
Yangsiyi Lu
Yuehai Wang
Jianyi Yang
CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
Sensors
recommendation systems
multi-modal algorithm
cross-modal fusion
attention mechanism
title CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
title_full CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
title_fullStr CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
title_full_unstemmed CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
title_short CMBF: Cross-Modal-Based Fusion Recommendation Algorithm
title_sort cmbf cross modal based fusion recommendation algorithm
topic recommendation systems
multi-modal algorithm
cross-modal fusion
attention mechanism
url https://www.mdpi.com/1424-8220/21/16/5275
work_keys_str_mv AT xichen cmbfcrossmodalbasedfusionrecommendationalgorithm
AT yangsiyilu cmbfcrossmodalbasedfusionrecommendationalgorithm
AT yuehaiwang cmbfcrossmodalbasedfusionrecommendationalgorithm
AT jianyiyang cmbfcrossmodalbasedfusionrecommendationalgorithm