Applying dropout to the first fully connected layer
In this recipe, we apply dropout to the output of the first fully connected layer to reduce the chance of overfitting. Dropout randomly removes (masks) a fraction of the neurons during training, which discourages the network from relying too heavily on any single neuron.
Getting ready
Dropout is connected to the output of a layer, so the model's initial structure must already be set up and loaded. In this recipe, we assume that a fully connected layer, layer_fc1, has been defined; dropout is applied to its output, as sketched below.
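A minimal sketch of such a setup, assuming a 784-dimensional flattened input (for example, MNIST images) and 128 hidden units; the variable names and shapes here are illustrative assumptions, not fixed by the recipe:
library(tensorflow)
# Illustrative input placeholder and layer parameters (784 and 128 are assumed sizes)
x <- tf$placeholder(tf$float32, shape(NULL, 784L))
weights_fc1 <- tf$Variable(tf$truncated_normal(shape(784L, 128L), stddev = 0.05))
biases_fc1 <- tf$Variable(tf$constant(0.05, shape = shape(128L)))
# The fully connected layer whose output will receive dropout
layer_fc1 <- tf$nn$relu(tf$matmul(x, weights_fc1) + biases_fc1)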
How to do it...
- Create a placeholder for the keep probability so that it can be fed as an input at run time:
keep_prob <- tf$placeholder(tf$float32)
- Use TensorFlow's dropout function to handle the scaling and masking of neuron outputs:
layer_fc1_drop <- tf$nn$dropout(layer_fc1, keep_prob)
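The resulting tensor, layer_fc1_drop, simply replaces layer_fc1 as the input to whatever comes next in the graph. As a sketch, assuming a hypothetical second layer mapping the 128 hidden units to 10 output classes (names and shapes are assumptions):
# Hypothetical next layer consuming the dropout output
weights_fc2 <- tf$Variable(tf$truncated_normal(shape(128L, 10L), stddev = 0.05))
biases_fc2 <- tf$Variable(tf$constant(0.05, shape = shape(10L)))
layer_fc2 <- tf$matmul(layer_fc1_drop, weights_fc2) + biases_fc2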
How it works...
In steps 1 and 2, the output neurons are randomly dropped (masked) according to the input probability. With tf$nn$dropout, keep_prob is the probability that each neuron's output is retained; dropped outputs are set to zero, and the retained ones are scaled up by 1/keep_prob so that the expected activation remains unchanged. Dropout is generally enabled during training (a common choice is keep_prob = 0.5) and turned off during testing by feeding a keep probability of 1.
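Because keep_prob is a placeholder, the same graph serves both phases; only the fed value changes. A sketch under the assumption that train_step, accuracy, and the data variables (x_batch, y_batch, x_test, y_test, and a y_true placeholder) have been defined elsewhere:
# Training: keep each neuron with probability 0.5 (assumed rate)
sess$run(train_step,
         feed_dict = dict(x = x_batch, y_true = y_batch, keep_prob = 0.5))
# Testing: feed 1.0 so every neuron is kept and dropout is effectively off
sess$run(accuracy,
         feed_dict = dict(x = x_test, y_true = y_test, keep_prob = 1.0))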