Recommendations based on visual search
We have seen a basic example of semantic similarity search, but there are other ways of generating recommendations from the item under consideration. One of them is to look at the item's appearance. Using pre-trained models available through PyTorch, we can extract embeddings from images and associate them with the item's Hash or JSON representation in the database. The sample Python excerpt we'll be looking at uses the Img2Vec wrapper library, which can be installed as follows:
pip install img2vec_pytorch
This script opens an image file and produces an embedding of 1,024 numbers using the densenet model; a minimal sketch of the extraction step follows the figure. Let's prototype a simple application with three items in the database – a glass, a spoon, and a cup:
[Three training images: a glass, a spoon, and a cup]
Figure 6.1 – Training images for VSS
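Before moving on to the full snippet, here is a minimal sketch of the extraction step, assuming the image of the glass is stored locally as glass.jpg (the file name is a placeholder, not part of the original example). In the complete application, the resulting vector would be saved alongside the item's Hash or JSON representation.

from img2vec_pytorch import Img2Vec
from PIL import Image

# The densenet backbone produces a 1,024-dimensional embedding
img2vec = Img2Vec(cuda=False, model='densenet')

# glass.jpg is a placeholder name for one of the training images
image = Image.open('glass.jpg').convert('RGB')
embedding = img2vec.get_vec(image)

print(len(embedding))  # 1024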
This Python snippet of...