Finally, what if the target task is so specific that training samples are barely available and using pretrained weights does not make much sense? First, it is worth reconsidering whether a deep model should be applied or repurposed at all: training such a model on a small dataset would lead to overfitting, and the features a deep pretrained extractor returns may be too specialized to transfer to the new task. However, we can still benefit from transfer learning if we keep in mind that the first layers of CNNs react to low-level features. Instead of only removing the final prediction layers of a pretrained model, we can also remove some of the last convolutional blocks, which are too task-specific. A shallow classifier can then be added on top of the remaining layers, and the new model can finally be fine-tuned.
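The following is a minimal Keras sketch of this idea, not a definitive recipe: it assumes VGG16 pretrained on ImageNet as the source model, `block3_pool` as the (arbitrary) truncation point, and a hypothetical five-class target task with placeholder data names.

```python
import tensorflow as tf

# Load a pretrained feature extractor without its dense prediction head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(128, 128, 3))

# Truncate further: keep only the earlier convolutional blocks, whose
# low-level features (edges, textures) transfer across tasks.
# "block3_pool" closes VGG16's third convolutional block; the best cut
# point depends on how different the target task is.
truncated = tf.keras.Model(inputs=base.input,
                           outputs=base.get_layer("block3_pool").output)

# Add a shallow classifier on top of the remaining layers.
num_classes = 5  # hypothetical number of target classes
model = tf.keras.Sequential([
    truncated,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# Fine-tune the whole model on the small target dataset, typically with
# a low learning rate so the pretrained low-level filters are preserved.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # placeholder data
```

Because the truncated extractor now ends at an intermediate block, the classifier head sees generic low-level features rather than source-task-specific ones, which is exactly what a very specific target task with little data calls for.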