Self-supervised learning (SSL)
Self-supervised learning (SSL) is not a new concept. Like RL, it gained widespread attention once it was combined with deep learning, owing to its effectiveness in learning data representations. Examples of such models are Word2vec for language modeling (Mikolov et al., 2013) and Meta’s RoBERTa models, trained with SSL, which achieved state-of-the-art performance on several language modeling tasks. The idea behind SSL is to define an objective for the machine learning model that doesn’t rely on pre-labeling or manual annotation of data points – for example, predicting the positions of objects or people in a video at each time step from the previous time steps, or masking parts of images or sequence data and training the model to reconstruct the masked sections. A minimal sketch of this masked-prediction objective is shown below. One widely used application of such models is in RL, where they learn representations of images and text that are then reused in other contexts, for example, in supervised modeling...
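To make the masking idea concrete, the following sketch implements a toy masked-prediction objective in PyTorch. It is illustrative only: the vocabulary size, the tiny GRU encoder (`TinyEncoder`), the 15% masking rate, and the random token sequences standing in for real data are all hypothetical choices for the example, not part of any specific model discussed above.

```python
# Minimal sketch of a masked-prediction SSL objective (illustrative only).
import torch
import torch.nn as nn

vocab_size, seq_len, d_model = 100, 16, 32
mask_id = 0                      # reserve token id 0 as the [MASK] symbol

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        h, _ = self.mixer(self.embed(tokens))
        return self.head(h)      # logits over the vocabulary at each position

model = TinyEncoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    # "Unlabeled" data: random token sequences stand in for real text or pixels.
    tokens = torch.randint(1, vocab_size, (8, seq_len))
    # Self-supervision: mask ~15% of positions and ask the model to refill them.
    mask = torch.rand(tokens.shape) < 0.15
    if mask.sum() == 0:
        continue
    corrupted = tokens.masked_fill(mask, mask_id)
    logits = model(corrupted)
    # The loss is computed only on masked positions; the targets come from
    # the data itself, so no manual annotation is needed.
    loss = loss_fn(logits[mask], tokens[mask])
    optim.zero_grad()
    loss.backward()
    optim.step()
```

After training with such an objective, the encoder part (here, `embed` plus `mixer`) can be kept and its representations reused in a downstream task, which is the reuse pattern described at the end of the paragraph above.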