Empowering Few-Shot Recommender Systems With Large Language Models-Enhanced Representations

Bibliographic Details
Main Author: Zhoumeng Wang
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects: Large language models; recommender systems; ChatGPT; representations
Online Access: https://ieeexplore.ieee.org/document/10440582/
author Zhoumeng Wang
collection DOAJ
description Recommender systems utilizing explicit feedback have witnessed significant advancements and widespread applications over the past years. However, generating recommendations in few-shot scenarios remains a persistent challenge. Recently, large language models (LLMs) have emerged as a promising solution for addressing natural language processing (NLP) tasks, thereby offering novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems. To bridge recommender systems and LLMs, we devise a prompting template that generates user and item representations based on explicit feedback. Subsequently, we integrate these LLM-processed representations into various recommendation models to evaluate their significance across diverse recommendation tasks. Our ablation experiments and case study analysis collectively demonstrate the effectiveness of LLMs in processing explicit feedback, highlighting that LLMs equipped with generative and logical reasoning capabilities can effectively serve as a component of recommender systems to enhance their performance in few-shot scenarios. Furthermore, the broad adaptability of LLMs augments the generalization potential of recommender models, despite certain inherent constraints. We anticipate that our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs’ involvement in recommender systems and contribute to the advancement of the explicit feedback-based recommender systems field.
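The description outlines the paper's pipeline: build a natural-language prompt from a user's explicit feedback, have an LLM turn it into a representation, and plug that representation into a downstream recommendation model. The following is a minimal, hypothetical sketch of that general idea only; it is not the authors' actual prompting template or models, and the hash-based `embed_text` is a runnable stand-in for a real LLM embedding call.

```python
# Hypothetical sketch of the described pipeline (not the paper's actual
# template or models): explicit feedback -> prompt -> embedding -> scoring.

def build_user_prompt(ratings):
    """ratings: list of (item_title, stars, review_text) explicit feedback."""
    lines = ["Summarize this user's preferences from their explicit feedback:"]
    for title, stars, review in ratings:
        lines.append(f'- rated "{title}" {stars}/5: {review}')
    return "\n".join(lines)

def embed_text(text, dim=8):
    # Stand-in for an LLM embedding call; a real system would query an
    # LLM or embedding API here. This toy bag-of-words hash embedding
    # only exists to make the sketch runnable end to end.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def score(user_vec, item_vec):
    # Dot-product relevance between the LLM-derived representations.
    return sum(u * i for u, i in zip(user_vec, item_vec))

prompt = build_user_prompt([
    ("The Matrix", 5, "loved the sci-fi action"),
    ("Titanic", 2, "too slow for me"),
])
user_vec = embed_text(prompt)
items = {t: embed_text(t) for t in ["sci-fi action thriller", "slow romance drama"]}
ranked = sorted(items, key=lambda t: score(user_vec, items[t]), reverse=True)
```

In the paper's few-shot setting, the point of this design is that the LLM can produce a usable user/item representation from only a handful of explicit ratings, where collaborative-filtering embeddings would be undertrained.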
first_indexed 2024-03-07T19:11:22Z
format Article
id doaj.art-7a27d6d1e30b4eeeaf7281503d30869c
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-03-07T19:11:22Z
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-7a27d6d1e30b4eeeaf7281503d30869c
doi 10.1109/ACCESS.2024.3368027
volume 12
pages 29144-29153
article_number 10440582
author_orcid https://orcid.org/0009-0000-3547-3172
author_affiliation Marketing Programme, The Chinese University of Hong Kong Business School, Hong Kong, China
title Empowering Few-Shot Recommender Systems With Large Language Models-Enhanced Representations
topic Large language models
recommender systems
ChatGPT
representations
url https://ieeexplore.ieee.org/document/10440582/