In practice, we often do not have enough data to train a CNN from scratch with random initialization. Transfer learning is an ML technique that repurposes a model trained on one set of data for another task. Naturally, it works only if the learning from the first task carries over to the task of interest. When successful, it can lead to better performance and faster training while requiring less labeled data than training a neural network from scratch on the target task.
Transfer learning – faster training with less data
How to build on a pre-trained CNN
The transfer learning approach to CNNs relies on pre-training on a very large dataset such as ImageNet. The goal is that the convolutional filters extract a feature representation...
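To illustrate this approach, the following is a minimal sketch, assuming a Keras/TensorFlow setup: it loads a VGG16 convolutional base pre-trained on ImageNet, freezes its filters, and adds a new classification head for the target task. The number of classes, input shape, and training data are placeholders, not values from this section.

```python
# Minimal transfer learning sketch with Keras/TensorFlow (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 10              # hypothetical number of target classes
INPUT_SHAPE = (224, 224, 3)   # placeholder input shape

# Load the convolutional base pre-trained on ImageNet, without its dense classifier.
base = VGG16(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)

# Freeze the pre-trained filters so only the new head is trained.
base.trainable = False

# Add a new classification head for the target task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# With hypothetical arrays train_images and train_labels:
# model.fit(train_images, train_labels, epochs=5, validation_split=0.1)
```

Freezing the base means only the new dense layers are updated during training; once they converge, the top convolutional layers can optionally be unfrozen and fine-tuned at a low learning rate to adapt the pre-trained features to the target data.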