Weights and biases are integral to the optimization of any deep neural network, and here we define a couple of functions to automate their initialization. It is good practice to initialize weights with a small amount of noise to break symmetry and prevent zero gradients. Additionally, a small positive initial bias helps avoid inactive neurons, which suits ReLU-activated units.
Using functions to initialize weights and biases
Getting ready
Weights and biases are model coefficients that need to be initialized before the model is compiled. This step requires the shape parameter to be determined from the input dataset.
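A minimal sketch of such helper functions, assuming TensorFlow is the framework in use; the names weight_variable and bias_variable, as well as the example layer sizes, are illustrative rather than taken from the recipe itself:

import tensorflow as tf

def weight_variable(shape):
    # Small Gaussian noise breaks symmetry between neurons and
    # keeps the initial gradients from being zero everywhere.
    initial = tf.random.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # A small positive constant keeps ReLU units active at the start,
    # reducing the chance of dead neurons.
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Hypothetical usage: a fully connected layer mapping 784 inputs to 256 hidden units,
# where the shape arguments would follow from the input dataset.
W_h = weight_variable([784, 256])
b_h = bias_variable([256])

The shape passed to each helper is the only piece of information that depends on the data, which is why it must be worked out before the model is built.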