KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation
Abstract User preference information plays an important role in knowledge graph-based recommender systems, and it is reflected in users having different preferences for each entity–relation pair in the knowledge graph. Existing approaches have not modeled this fine-grained user preference well, which limits the performance of recommender systems. In this paper, we propose a deep knowledge preference-aware reinforcement learning network (KPRLN) for recommendation, which builds paths between a user's historical interaction items in the knowledge graph, learns each user's preference features for entity–relation pairs, and generates a weighted knowledge graph with fine-grained preference features. First, we propose a hierarchical propagation path construction method to address the problems of pendant entities and long-path exploration in the knowledge graph. The method expands outward to form clusters centered on items and uses them to represent the starting and target states in reinforcement learning. As the clusters iterate, we can better learn pendant-entity preferences and explore longer paths. In addition, we design an attention graph convolutional network, which focuses on the more influential entity–relation pairs, to aggregate higher-order user and item representations that contain fine-grained preference features. Finally, extensive experiments on two real-world datasets demonstrate that our method outperforms state-of-the-art baselines.
Main Authors: | Di Wu, Mingjing Tang, Shu Zhang, Ao You, Wei Gao |
---|---|
Format: | Article |
Language: | English |
Published: | Springer, 2023-05-01 |
Series: | Complex & Intelligent Systems |
Subjects: | Knowledge graph, Recommender system, Deep reinforcement learning, Graph neural network |
Online Access: | https://doi.org/10.1007/s40747-023-01083-7 |
_version_ | 1797647103793561600 |
---|---|
author | Di Wu; Mingjing Tang; Shu Zhang; Ao You; Wei Gao |
author_facet | Di Wu; Mingjing Tang; Shu Zhang; Ao You; Wei Gao |
author_sort | Di Wu |
collection | DOAJ |
description | Abstract User preference information plays an important role in knowledge graph-based recommender systems, and it is reflected in users having different preferences for each entity–relation pair in the knowledge graph. Existing approaches have not modeled this fine-grained user preference well, which limits the performance of recommender systems. In this paper, we propose a deep knowledge preference-aware reinforcement learning network (KPRLN) for recommendation, which builds paths between a user's historical interaction items in the knowledge graph, learns each user's preference features for entity–relation pairs, and generates a weighted knowledge graph with fine-grained preference features. First, we propose a hierarchical propagation path construction method to address the problems of pendant entities and long-path exploration in the knowledge graph. The method expands outward to form clusters centered on items and uses them to represent the starting and target states in reinforcement learning. As the clusters iterate, we can better learn pendant-entity preferences and explore longer paths. In addition, we design an attention graph convolutional network, which focuses on the more influential entity–relation pairs, to aggregate higher-order user and item representations that contain fine-grained preference features. Finally, extensive experiments on two real-world datasets demonstrate that our method outperforms state-of-the-art baselines. |
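The attention-weighted aggregation step described in the abstract can be pictured with a minimal sketch. The PyTorch snippet below is an illustrative assumption, not the authors' implementation: the module name `AttentionAggregator`, the concatenation-based scoring function, and all tensor shapes are hypothetical. It only shows the general idea of scoring each entity–relation neighbor pair conditioned on the user, softmax-normalizing the scores, and fusing the weighted neighborhood into the item representation.

```python
# Minimal, hypothetical sketch of user-conditioned attention aggregation over
# entity-relation neighbor pairs (not the authors' KPRLN code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAggregator(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Scores how relevant a (relation, neighbor-entity) pair is to the user.
        self.score = nn.Linear(3 * dim, 1)
        # Fuses the item's own embedding with its aggregated neighborhood.
        self.transform = nn.Linear(2 * dim, dim)

    def forward(self, user, item, rel, nbr):
        # user, item: (batch, dim); rel, nbr: (batch, n_neighbors, dim)
        u = user.unsqueeze(1).expand_as(rel)
        # User-conditioned attention logits over entity-relation pairs.
        logits = self.score(torch.cat([u, rel, nbr], dim=-1)).squeeze(-1)
        alpha = F.softmax(logits, dim=1)                       # (batch, n_neighbors)
        nbr_agg = torch.sum(alpha.unsqueeze(-1) * nbr, dim=1)  # weighted neighbor sum
        return torch.relu(self.transform(torch.cat([item, nbr_agg], dim=-1)))
```

In a full model, a layer like this would be stacked to collect multi-hop (higher-order) neighborhood information, with the attention weights playing the role of the fine-grained user preferences on the weighted knowledge graph.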
first_indexed | 2024-03-11T15:12:31Z |
format | Article |
id | doaj.art-290a12c6fa4e414e8668c233756b1adb |
institution | Directory Open Access Journal |
issn | 2199-4536 2198-6053 |
language | English |
last_indexed | 2024-03-11T15:12:31Z |
publishDate | 2023-05-01 |
publisher | Springer |
record_format | Article |
series | Complex & Intelligent Systems |
spelling | Record ID: doaj.art-290a12c6fa4e414e8668c233756b1adb; Updated: 2023-10-29T12:41:09Z; Language: eng; Publisher: Springer; Series: Complex & Intelligent Systems; ISSN: 2199-4536, 2198-6053; Publish date: 2023-05-01; Volume 9, Issue 6, Pages 6645–6659; DOI: 10.1007/s40747-023-01083-7; Title: KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation; Authors: Di Wu (School of Information Science and Technology, Yunnan Normal University), Mingjing Tang (Key Laboratory of Educational Informatization for Nationalities Ministry of Education, Yunnan Normal University), Shu Zhang (School of Information Science and Technology, Yunnan Normal University), Ao You (School of Information Science and Technology, Yunnan Normal University), Wei Gao (School of Information Science and Technology, Yunnan Normal University); Abstract: as given in the description field above; Online access: https://doi.org/10.1007/s40747-023-01083-7; Subjects: Knowledge graph, Recommender system, Deep reinforcement learning, Graph neural network |
spellingShingle | Di Wu; Mingjing Tang; Shu Zhang; Ao You; Wei Gao; KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation; Complex & Intelligent Systems; Knowledge graph; Recommender system; Deep reinforcement learning; Graph neural network |
title | KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation |
title_full | KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation |
title_fullStr | KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation |
title_full_unstemmed | KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation |
title_short | KPRLN: deep knowledge preference-aware reinforcement learning network for recommendation |
title_sort | kprln deep knowledge preference aware reinforcement learning network for recommendation |
topic | Knowledge graph; Recommender system; Deep reinforcement learning; Graph neural network |
url | https://doi.org/10.1007/s40747-023-01083-7 |
work_keys_str_mv | AT diwu kprlndeepknowledgepreferenceawarereinforcementlearningnetworkforrecommendation AT mingjingtang kprlndeepknowledgepreferenceawarereinforcementlearningnetworkforrecommendation AT shuzhang kprlndeepknowledgepreferenceawarereinforcementlearningnetworkforrecommendation AT aoyou kprlndeepknowledgepreferenceawarereinforcementlearningnetworkforrecommendation AT weigao kprlndeepknowledgepreferenceawarereinforcementlearningnetworkforrecommendation |