Multimodal learning
In a broad sense, multimodal learning is machine learning that uses more than one modality. A modality is the type of data fed into a model; typical modalities include textual, visual (images and video), and auditory (sound, voice, and music) data.
A good example of such a model is contrastive language-image pretraining (CLIP), which represents textual and visual data in the same embedding space. This shared representation supports many applications. For example, we can compute vector representations of the images and text, both obtained from the same dataset, and train a classifier on top of them, as sketched after the figure below. In Figure 17.1, you can see a multimodal approach to predicting phone prices from features such as the camera, RAM, and battery, together with an image of the device.
Figure 17.1 – Multimodal price prediction (image courtesy of https://link.springer.com/article/10.1007...
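To make the fusion idea concrete, here is a minimal sketch using the Hugging Face transformers library and scikit-learn. The file names, spec values, and prices are hypothetical placeholders, not data from the figure; the point is only to show CLIP image embeddings being concatenated with tabular features before fitting a model:

import numpy as np
from PIL import Image
from sklearn.linear_model import Ridge
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path):
    # Encode one image into CLIP's shared text-image embedding space
    inputs = processor(images=Image.open(path), return_tensors="pt")
    return model.get_image_features(**inputs).detach().numpy().squeeze()

# Hypothetical dataset: tabular specs (camera MP, RAM GB, battery mAh),
# one product photo per phone, and the price we want to predict
specs = np.array([[12, 4, 3000],
                  [48, 8, 4500],
                  [108, 12, 5000]], dtype=float)
photos = ["phone_a.jpg", "phone_b.jpg", "phone_c.jpg"]
prices = np.array([199.0, 549.0, 999.0])

# Fuse the modalities by concatenating each image embedding with the
# tabular features, then fit a simple regressor on the joint vector
image_features = np.stack([embed_image(p) for p in photos])
X = np.hstack([specs, image_features])
regressor = Ridge().fit(X, prices)

Because CLIP's model.get_text_features produces embeddings in the same space, a textual description of the device could be embedded and concatenated in exactly the same way.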