Fine-tuning a network using the Keras API
Perhaps the greatest advantage of transfer learning is its ability to harness the knowledge encoded in pre-trained networks. By simply replacing the deeper, task-specific layers of one of these networks, we can obtain remarkable performance on new, unrelated datasets, even when our data is scarce. Why? Because the information in the bottom layers is virtually universal: it encodes basic forms and shapes that apply to almost any computer vision problem.
In this recipe, we'll fine-tune a pre-trained VGG16 network on a tiny dataset, achieving an otherwise unlikely high accuracy score.
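The core idea can be sketched in a few lines of Keras code: load VGG16 without its top (classifier) layers, freeze the convolutional base so its universal features are preserved, and attach a fresh classification head for our dataset. This is only a minimal sketch of the pattern, not the full recipe; the `NUM_CLASSES` constant and the head architecture here are illustrative assumptions.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 17  # assumption: one class per flower category in the dataset

# Pre-trained convolutional base; include_top=False drops the dense head.
base = VGG16(weights='imagenet', include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False  # freeze the universal bottom-layer features

# Attach a new, trainable classification head on top of the frozen base.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Only the new head's weights are updated during training; later, some of the frozen layers can be unfrozen and trained at a low learning rate to fine-tune further.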
Getting ready
We will need Pillow for this recipe. We can install it as follows:
$> pip install Pillow
We'll be using a dataset known as the 17 Category Flower Dataset, which is available here: http://www.robots.ox.ac.uk/~vgg/data/flowers/17. A version of it that's been organized into subfolders per class can be found...