The objective of fine-tuning is to improve the accuracy of the model so that it discriminates between classes more reliably. It searches for better values of the weights between layers, slightly tweaking the features learned during pre-training so that the boundaries between classes become more precise.
A small labelled dataset is used for fine-tuning; the labels let the model associate the patterns and features it has already learned with specific classes. Backpropagation is the usual method for this stage and helps the model generalize better.
Once we have identified some reasonable feature detectors, backpropagation only needs to perform a local search around them.
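To make this concrete, below is a minimal sketch of fine-tuning with backpropagation in PyTorch. The network layout, layer sizes, learning rate, and random placeholder data are illustrative assumptions rather than details from the text; the point is only that a small labelled set and a small learning rate let backpropagation nudge the pre-trained features instead of relearning them from scratch.

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained feature extractor plus a fresh classifier head;
# the layer sizes here are illustrative assumptions.
feature_extractor = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
classifier = nn.Linear(64, 10)
model = nn.Sequential(feature_extractor, classifier)

# A small labelled dataset (random placeholders stand in for real data).
inputs = torch.randn(100, 784)
labels = torch.randint(0, 10, (100,))

# Fine-tuning: backpropagation with a small learning rate, so the
# pre-trained features are only slightly tweaked (a local search).
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()   # backpropagation computes gradients layer by layer
    optimizer.step()  # weights are nudged toward sharper class boundaries
```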
Fine-tuning can also be applied generatively: a stochastic bottom-up pass is performed and the top-down (generative) weights are adjusted so that each layer can reconstruct the layer below. Once the top is reached, a few sampling iterations are run at the top layer. To fine-tune further, a stochastic top-down pass is performed and the bottom-up (recognition) weights are adjusted.
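The following is a minimal wake-sleep-style sketch of that alternation, written in NumPy for a toy two-layer network with separate recognition (bottom-up) and generative (top-down) weight matrices. The layer sizes, learning rate, and single random training case are assumptions made for illustration, and the sampling at the top level is omitted; the sketch only shows how the bottom-up pass updates the top-down weights and the top-down pass updates the bottom-up weights.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy two-layer net with separate bottom-up and top-down weights;
# the sizes are illustrative assumptions.
n_visible, n_hidden = 6, 4
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # bottom-up (recognition)
W_gen = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # top-down (generative)
lr = 0.05

v_data = rng.integers(0, 2, size=n_visible).astype(float)  # one training case

# Stochastic bottom-up pass: sample hidden states with the recognition
# weights, then adjust the top-down weights so they reconstruct the
# visible layer from those samples.
h_sample = (sigmoid(W_rec @ v_data) > rng.random(n_hidden)).astype(float)
v_recon = sigmoid(W_gen @ h_sample)
W_gen += lr * np.outer(v_data - v_recon, h_sample)

# Stochastic top-down pass: generate a visible vector with the top-down
# weights, then adjust the bottom-up weights so they recover the hidden
# states that produced it.
v_dream = (sigmoid(W_gen @ h_sample) > rng.random(n_visible)).astype(float)
h_recog = sigmoid(W_rec @ v_dream)
W_rec += lr * np.outer(h_sample - h_recog, v_dream)
```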
...