State-of-the-Art Models - Transfer Learning
Humans do not learn every task they want to achieve from scratch; they usually build on previous knowledge in order to learn new tasks much faster.
When training neural networks, some tasks are extremely expensive for an individual to train from scratch. For example, distinguishing between two or more similar objects may require hundreds of thousands of training images and days of computation to achieve good performance. Once a network has been trained on such an expensive task, its learned knowledge is saved in its weights, and other models can reuse those weights as a starting point for similar tasks.
Transfer learning does just that – it transfers the knowledge of a pretrained model to your model, so you can take advantage of that knowledge.
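As a minimal sketch of this idea (not tied to any particular library), the example below uses a frozen layer of fixed weights as a stand-in for a pretrained feature extractor, and trains only a small new classification head on top of it. The weight shapes, class count, and synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained base: a frozen hidden layer. In real transfer
# learning these weights would come from a model trained on a large dataset.
W_frozen = rng.standard_normal((20, 8))

def extract_features(x):
    """Frozen base: its weights are reused, never updated."""
    return np.tanh(x @ W_frozen)

# New task: a 5-class classifier; only the head weights below are trained.
n, classes = 200, 5
X = rng.standard_normal((n, 20))          # synthetic inputs
y = rng.integers(0, classes, n)           # synthetic labels
W_head = np.zeros((8, classes))           # the only trainable parameters

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

feats = extract_features(X)               # computed once, since the base is frozen
losses = []
for _ in range(200):
    p = softmax(feats @ W_head)
    losses.append(-np.log(p[np.arange(n), y]).mean())
    p[np.arange(n), y] -= 1               # gradient of cross-entropy w.r.t. logits
    W_head -= 0.5 * (feats.T @ p) / n     # update the head only

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the base is frozen, its features can be computed once and the cheap head is all that needs fitting, which is exactly the cost saving transfer learning offers.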
So, for example, if you want to make a classifier that is capable of identifying five objects but that task seems...