Abstract
Personalized recommendation aims to address the information-overload problem by finding items of interest for users among massive amounts of information. The research paradigm of personalized recommendation has evolved from deep neural networks to pre-trained language models (PLMs) such as BERT and, more recently, to large language models (LLMs). However, locating a target item among massive data remains difficult: the search is not only time-consuming but often yields low accuracy. In this paper, we propose Personalized Recommendation with Clustering via Prompt-tuning (PRCP), in which a candidate item set is constructed and a prompt-tuning model with a designed verbalizer performs the recommendation. Specifically, target users are first selected by similarity calculation, and items are then clustered by the preferences of similar users to form a candidate item set. A prompt-tuning model is then introduced to predict the masked label for candidate items, and three strategies are designed to expand the label word space for verbalizer optimization. Extensive experiments on three datasets validate the effectiveness of the proposed method against state-of-the-art baselines, including LLMs.
