Fine-tuning the learning configurations
While performing transfer learning, we may want to update the strategy for how weights are initialized, which gradients are updated, which activation functions are used, and so on. For that purpose, we fine-tune the configuration. In this recipe, we will fine-tune the configuration for transfer learning.
How to do it...
- Use FineTuneConfiguration() to specify the modifications to be applied to the model configuration:
FineTuneConfiguration fineTuneConf = new FineTuneConfiguration.Builder()
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
.updater(new Nesterovs(5e-5))
.activation(Activation.RELU6)
.biasInit(0.001)
.dropOut(0.85)
.gradientNormalization(GradientNormalization.RenormalizeL2PerLayer)
.build();
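A FineTuneConfiguration is not used on its own: it is passed to DL4J's TransferLearning.Builder, which applies the overrides to an imported pretrained model. The sketch below shows one way to wire this up; the pretrainedModel argument and the layer index 4 used as the feature-extractor cut-off are assumptions for illustration, not values from this recipe.

```java
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.GradientNormalization;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Nesterovs;

public class FineTuneExample {

    // pretrainedModel is assumed to be loaded elsewhere, for example from
    // the DL4J model zoo or via ModelSerializer.restoreMultiLayerNetwork(...).
    static MultiLayerNetwork applyFineTuning(MultiLayerNetwork pretrainedModel) {
        FineTuneConfiguration fineTuneConf = new FineTuneConfiguration.Builder()
            .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
            .updater(new Nesterovs(5e-5))
            .activation(Activation.RELU6)
            .biasInit(0.001)
            .dropOut(0.85)
            .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer)
            .build();

        // Apply the overrides to the pretrained model. Layers up to and
        // including index 4 (an assumed cut-off) are frozen, so only the
        // layers above them are trained during fine-tuning.
        return new TransferLearning.Builder(pretrainedModel)
            .fineTuneConfiguration(fineTuneConf)
            .setFeatureExtractor(4)
            .build();
    }
}
```

The returned network keeps the pretrained weights; the fine-tune configuration only overrides the listed hyperparameters for the layers that remain trainable.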