Disentangled Sequential Variational Autoencoder for Collaborative Filtering


Bibliographic Details
Main Authors: WU Mei-lin, HUANG Jia-jin, QIN Jin
Format: Article
Language: Chinese (zho)
Published: Editorial office of Computer Science 2022-12-01
Series: Jisuanji kexue
Subjects: variational autoencoder | deep learning | sequence modeling | disentangled learning | collaborative filtering
Online Access:https://www.jsjkx.com/fileup/1002-137X/PDF/1002-137X-2022-49-12-163.pdf
author WU Mei-lin, HUANG Jia-jin, QIN Jin
collection DOAJ
description Recommendation models typically use a user's historical behaviors to obtain preference representations for recommendation. Most methods for learning user representations entangle different preference factors, whereas disentangled learning can be used to decompose user behavior characteristics. This paper proposes DSVAECF, a variational-autoencoder-based framework that disentangles static and dynamic factors from a user's historical behaviors. First, the model's two encoders use a multi-layer perceptron and a recurrent neural network, respectively, to model the user's behavior history and obtain static and dynamic preference representations. Then, the concatenated static and dynamic preference representations are treated as the disentangled representation and fed into the decoder, which captures the user's decisions and reconstructs the user's behavior. In the training phase, DSVAECF learns model parameters by maximizing the mutual information between the reconstructed and actual user behaviors, while also minimizing the difference between the disentangled representations and their prior distributions to retain the model's generative ability. Experimental results on Amazon and MovieLens datasets show that, compared with the baselines, DSVAECF significantly improves normalized discounted cumulative gain, recall, and precision, achieving better recommendation performance.
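The two-branch architecture described in the abstract (an MLP encoder for static preferences, an RNN encoder for dynamic preferences, concatenated codes fed to a shared decoder, trained with a reconstruction term plus KL terms against a prior) can be sketched as follows. This is a minimal NumPy illustration of a single forward pass under assumed choices, not the authors' implementation: the layer sizes, the plain tanh RNN, the standard Gaussian prior, and the multinomial reconstruction likelihood are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # Xavier-style initialisation for one linear layer.
    w = rng.normal(0, np.sqrt(2.0 / (in_dim + out_dim)), (in_dim, out_dim))
    return w, np.zeros(out_dim)

n_items, seq_len, h_dim, z_dim = 50, 10, 16, 8

# Static encoder: an MLP over the bag-of-items history vector.
W1, b1 = dense(n_items, h_dim)
W_mu_s, b_mu_s = dense(h_dim, z_dim)
W_lv_s, b_lv_s = dense(h_dim, z_dim)

# Dynamic encoder: a plain RNN rolled over the item sequence.
W_xh, _ = dense(n_items, h_dim)
W_hh, b_h = dense(h_dim, h_dim)
W_mu_d, b_mu_d = dense(h_dim, z_dim)
W_lv_d, b_lv_d = dense(h_dim, z_dim)

# Decoder: maps the concatenated [static; dynamic] code to item scores.
W_dec, b_dec = dense(2 * z_dim, n_items)

def forward(history_bow, history_seq):
    # Static branch: MLP -> Gaussian parameters for the static code.
    h_s = np.tanh(history_bow @ W1 + b1)
    mu_s, lv_s = h_s @ W_mu_s + b_mu_s, h_s @ W_lv_s + b_lv_s

    # Dynamic branch: RNN over one-hot item inputs -> dynamic code.
    h = np.zeros(h_dim)
    for item in history_seq:
        x = np.eye(n_items)[item]
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    mu_d, lv_d = h @ W_mu_d + b_mu_d, h @ W_lv_d + b_lv_d

    # Reparameterisation trick, then concatenate the two codes.
    z_s = mu_s + np.exp(0.5 * lv_s) * rng.normal(size=z_dim)
    z_d = mu_d + np.exp(0.5 * lv_d) * rng.normal(size=z_dim)
    z = np.concatenate([z_s, z_d])

    # Decoder gives log-probabilities over items (log-softmax).
    logits = z @ W_dec + b_dec
    m = logits.max()
    log_probs = logits - m - np.log(np.exp(logits - m).sum())

    # Loss: multinomial reconstruction term plus KL of each code to N(0, I).
    recon = -(history_bow * log_probs).sum()
    kl = lambda mu, lv: 0.5 * np.sum(np.exp(lv) + mu**2 - 1 - lv)
    loss = recon + kl(mu_s, lv_s) + kl(mu_d, lv_d)
    return z, loss

seq = rng.integers(0, n_items, seq_len)          # a toy interaction sequence
bow = np.bincount(seq, minlength=n_items).astype(float)
z, loss = forward(bow, seq)
```

In a real system the loss would be minimised by gradient descent over mini-batches of users; the point here is only the structure: two encoders, one concatenated disentangled code, one decoder, and an ELBO-style objective.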
first_indexed 2024-04-09T17:34:09Z
format Article
id doaj.art-ed60f63c7d4346188bf7860b01505092
institution Directory Open Access Journal
issn 1002-137X
language zho
last_indexed 2024-04-09T17:34:09Z
publishDate 2022-12-01
publisher Editorial office of Computer Science
record_format Article
series Jisuanji kexue
volume 49
issue 12
pages 163-169
doi 10.11896/jsjkx.211200080
affiliations 1 School of Computer Science and Technology, Guizhou University, Guiyang 550025, China; 2 International WIC Institute, Beijing University of Technology, Beijing 100000, China
title Disentangled Sequential Variational Autoencoder for Collaborative Filtering
topic variational autoencoder|deep learning|sequence modeling|disentangled learning|collaborative filtering
url https://www.jsjkx.com/fileup/1002-137X/PDF/1002-137X-2022-49-12-163.pdf