First, we'll load the segmented and aligned images from the input directory specified by the --input-dir flag. During training, we'll apply preprocessing to each image: random transformations that augment the dataset, effectively creating more varied images to train on.
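The random-transformation idea can be sketched as follows. This is a minimal, illustrative augmentation function, not the book's exact preprocessing code; the specific transforms (horizontal flip, small horizontal shift) and their parameters are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Return a randomly transformed copy of an H x W x C image array."""
    out = image
    if rng.random() < 0.5:            # random horizontal flip
        out = out[:, ::-1, :]
    shift = int(rng.integers(-2, 3))  # small random horizontal shift (pixels)
    out = np.roll(out, shift, axis=1)
    return out

# Toy 2 x 4 image with 3 channels; augmentation preserves the shape.
img = np.arange(2 * 4 * 3).reshape(2, 4, 3)
aug = augment(img)
print(aug.shape)  # same shape as the input: (2, 4, 3)
```

Because each call draws fresh random parameters, repeatedly augmenting the same source image yields many distinct training samples.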
These images will be fed into the pre-trained model in batches of 128. The model returns a 128-dimensional embedding for each image, so each batch yields a 128 x 128 matrix (batch size x embedding dimension).
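The shapes involved can be checked with a small sketch. Here the pre-trained network is replaced by a random stand-in, and the 160 x 160 input size is an assumption for illustration; only the batch-size and embedding-dimension arithmetic matters.

```python
import numpy as np

rng = np.random.default_rng(42)

# A batch of 128 aligned face images (image size assumed for illustration).
batch = rng.random((128, 160, 160, 3))

def embed(images: np.ndarray) -> np.ndarray:
    """Stand-in for the pre-trained model: maps each image in the batch
    to a 128-dimensional embedding (random values, for shape checking only)."""
    return rng.random((images.shape[0], 128))

embeddings = embed(batch)
print(embeddings.shape)  # (128, 128): batch_size x embedding_dim
```

The 128 x 128 result is a coincidence of the chosen batch size matching the embedding dimension; a batch of 64 images would produce a 64 x 128 matrix.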
Once these embeddings are computed, we'll use them as feature inputs to a scikit-learn SVM classifier and train it to distinguish each identity.
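The training and pickling step can be sketched like this. The toy embeddings, the two-identity setup, and the linear kernel are assumptions for illustration; the real pipeline feeds in the embeddings produced by the pre-trained model.

```python
import pickle
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins: 20 embeddings each for two identities, drawn from
# well-separated distributions (illustrative, not real face embeddings).
X = np.vstack([rng.normal(0, 1, (20, 128)), rng.normal(3, 1, (20, 128))])
y = np.array([0] * 20 + [1] * 20)  # identity labels

clf = SVC(kernel="linear", probability=True)
clf.fit(X, y)

# Dump the trained classifier to disk, as the --classifier-path step does.
with open("classifier.pkl", "wb") as f:
    pickle.dump(clf, f)
```

At inference time, the pickled classifier is loaded back with pickle.load and applied to the embedding of a new face image.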
The following command starts the process and trains the classifier. The trained classifier will be dumped as a pickle file at the path given by the --classifier-path argument:
docker run -v $PWD:/facerecognition \
-e PYTHONPATH=$PYTHONPATH:/facerecognition \
-it hellorahulk/facerecognition...