The bigger the training dataset available for the target task, the lower the risk of overfitting if we retrain more of the network. Therefore, in such cases, it is common to unfreeze the last layers of the feature extractor: the bigger the target dataset, the more layers can safely be fine-tuned. This allows the network to extract features that are more relevant to the new task, and thus to learn it better.
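As a minimal sketch of this idea, assuming a TensorFlow/Keras workflow (the choice of MobileNetV2, input shape, and the number of unfrozen layers are purely illustrative), unfreezing only the last layers of a pretrained feature extractor might look like this:

```python
import tensorflow as tf

# Illustrative pretrained feature extractor (ImageNet weights, no classification head).
base_model = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

# Unfreeze only the last layers; the larger the target dataset,
# the more layers we can afford to fine-tune.
base_model.trainable = True
num_unfrozen = 20  # assumption: tune this to the size of the target dataset
for layer in base_model.layers[:-num_unfrozen]:
    layer.trainable = False
```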
The model has already been through a first training phase on a similar dataset and is therefore probably close to convergence. For this reason, it is common practice to use a smaller learning rate during the fine-tuning phase, so that the pretrained weights are only gently adjusted.
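Continuing the sketch above (the task head, class count, and exact learning rate are illustrative assumptions), the fine-tuning phase would typically recompile the model with a learning rate one or two orders of magnitude smaller than the one used for training from scratch:

```python
# Illustrative task-specific head on top of the partially unfrozen feature extractor.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # assumption: 10 target classes
])

# Small learning rate for fine-tuning, so the pretrained weights
# near convergence are not disrupted by large updates.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])
```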