How LLMs are changing recommendation systems
We saw in previous chapters how LLMs can be customized in three main ways: pre-training, fine-tuning, and prompting. According to the paper “Recommender Systems in the Era of Large Language Models (LLMs)” by Wenqi Fan et al., these are also the ways you can tailor an LLM into a recommender system.
Pre-training. Pre-training LLMs for recommender systems is an important step: it lets the model acquire extensive world knowledge and user preferences, and adapt to different recommendation tasks in zero-shot or few-shot settings.
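To make the zero-shot setting concrete, the sketch below shows one way a recommendation task could be phrased as plain text for a pretrained LLM. The function name, prompt wording, and item lists are all illustrative assumptions, not taken from the paper or any specific library.

```python
# Hypothetical sketch: casting a recommendation task as a text prompt
# that a pretrained LLM could answer zero-shot (no fine-tuning).
# All names and wording here are illustrative assumptions.

def build_recommendation_prompt(user_history, candidates):
    """Format a user's interaction history and candidate items as a
    natural-language question for an LLM."""
    history = ", ".join(user_history)
    options = "\n".join(
        f"{i + 1}. {item}" for i, item in enumerate(candidates)
    )
    return (
        f"A user has recently watched: {history}.\n"
        "Which of the following movies should we recommend next?\n"
        f"{options}\n"
        "Answer with the number of the best option."
    )

prompt = build_recommendation_prompt(
    ["The Matrix", "Inception"],
    ["Blade Runner 2049", "Notting Hill", "Frozen"],
)
print(prompt)
```

In practice this string would be sent to the LLM, whose pretrained knowledge of the items is what makes a zero-shot answer possible; adding a few worked examples to the prompt would turn it into the few-shot variant.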
Note
An example of a recommendation system LLM is P5, introduced by Shijie Geng et al. in their paper “Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)”.
P5 is a unified text-to-text paradigm for building recommender systems with LLMs. Its name stands for Pretrain, Personalized Prompt & Predict Paradigm, and...