DDRCN: Deep Deterministic Policy Gradient Recommendation Framework Fused with Deep Cross Networks

As an essential branch of artificial intelligence, recommendation systems have gradually become part of people’s daily lives: they actively recommend goods or services of potential interest to users based on their preferences. Many recommendation methods have been proposed in both industry and academia. However, previous methods have two main limitations: (1) most do not consider the cross-correlations within the data, and (2) many treat recommendation as a one-off act, ignoring the continuous nature of the recommendation process. To overcome these limitations, we propose a recommendation framework based on deep reinforcement learning, known as DDRCN: a deep deterministic policy gradient recommendation framework incorporating deep cross networks. We use a deep network and a cross network to fit the cross-relationships in the data and obtain a representation of the user interaction data. An actor-critic network simulates the continuous interaction behavior of users through a greedy strategy, and a deep deterministic policy gradient network is used to train the recommendation model. Finally, experiments on two publicly available datasets show that the proposed framework outperforms the baseline approaches in both the recall and ranking phases of recommendation.
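To make the representation module described in the abstract concrete, the sketch below shows a standard deep & cross encoder in PyTorch: a stack of cross layers implementing x_{l+1} = x_0 (w_l^T x_l) + b_l + x_l for explicit feature crossing, alongside a deep MLP tower, concatenated into a single user-state embedding. This is a minimal illustration of the general technique under assumed settings; the class names, layer sizes, and the exact cross-layer variant are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of a deep & cross representation module.
# Assumes the standard DCN cross layer x_{l+1} = x_0 * (w^T x_l) + b + x_l;
# class and parameter names are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CrossLayer(nn.Module):
    """One cross layer: explicit feature crossing of x_l with the input x_0."""
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x0: torch.Tensor, xl: torch.Tensor) -> torch.Tensor:
        xlw = (xl * self.w).sum(dim=1, keepdim=True)   # w^T x_l, shape (batch, 1)
        return x0 * xlw + self.b + xl                  # x_0 (w^T x_l) + b + x_l

class DeepCrossEncoder(nn.Module):
    """Deep tower (MLP) + cross tower, concatenated into one state embedding."""
    def __init__(self, in_dim: int, hidden: int = 128, n_cross: int = 2, out_dim: int = 64):
        super().__init__()
        self.cross = nn.ModuleList([CrossLayer(in_dim) for _ in range(n_cross)])
        self.deep = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(in_dim + hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xc = x
        for layer in self.cross:
            xc = layer(x, xc)          # explicit feature crosses
        xd = self.deep(x)              # implicit (deep) interactions
        return self.head(torch.cat([xc, xd], dim=1))

if __name__ == "__main__":
    enc = DeepCrossEncoder(in_dim=32)
    state = enc(torch.randn(8, 32))    # 8 users' interaction features -> 8 x 64 state embeddings
    print(state.shape)
```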

Bibliographic Details
Main Authors: Tianhan Gao, Shen Gao, Jun Xu, Qihui Zhao
Format: Article
Language: English
Published: MDPI AG, 2023-02-01
Series: Applied Sciences
ISSN: 2076-3417
Subjects: recommendation system; deep deterministic policy gradient; deep cross network; reinforcement learning
DOI: 10.3390/app13042555
Online Access: https://www.mdpi.com/2076-3417/13/4/2555
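The actor-critic training with a deep deterministic policy gradient mentioned in the abstract can likewise be sketched as a single DDPG update step: the actor maps a user-state embedding (e.g., the encoder output above) to a continuous action vector, the critic scores state-action pairs, and target networks are soft-updated. Exploration (the greedy strategy) and the replay buffer are omitted for brevity, and all network sizes, hyperparameters, and the batch format are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal DDPG sketch for a recommendation setting: actor maps a user-state
# embedding to a continuous action (a "virtual item" vector), critic scores
# state-action pairs, and target networks are updated softly.
# All sizes and hyperparameters are illustrative, not the paper's settings.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),
        )
    def forward(self, s): return self.net(s)

class Critic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, s, a): return self.net(torch.cat([s, a], dim=1))

def ddpg_step(actor, critic, actor_t, critic_t, opt_a, opt_c, batch, gamma=0.99, tau=0.005):
    s, a, r, s2 = batch                               # (state, action, reward, next state)
    with torch.no_grad():
        q_target = r + gamma * critic_t(s2, actor_t(s2))
    critic_loss = F.mse_loss(critic(s, a), q_target)  # fit Q to the bootstrapped target
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()          # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for target, online in ((actor_t, actor), (critic_t, critic)):  # soft target updates
        for pt, p in zip(target.parameters(), online.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)

if __name__ == "__main__":
    S, A = 64, 16
    actor, critic = Actor(S, A), Critic(S, A)
    actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
    batch = (torch.randn(32, S), torch.randn(32, A), torch.randn(32, 1), torch.randn(32, S))
    ddpg_step(actor, critic, actor_t, critic_t, opt_a, opt_c, batch)
```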