Introduction
In the previous chapter, we covered regularization techniques for neural networks. Regularization is an important technique for combatting overfitting to the training data, helping the model perform well on new, unseen examples. One of the regularization techniques we covered was L1 and L2 weight regularization, in which a penalty on the weights is added to the loss function. The other regularization technique we learned about was dropout regularization, in which some units of a layer are randomly removed from the model fitting process at each iteration. Both regularization techniques are designed to prevent individual weights or units from influencing the model too strongly, which helps it generalize to unseen data.
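To make the first technique concrete, here is a minimal sketch of L1 and L2 weight regularization, assuming a Keras/TensorFlow workflow; the layer sizes, input shape, and penalty strengths are illustrative assumptions, not values from the text:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1, l2

model = Sequential([
    # L1 penalty: adds lambda * sum(|w|) over this layer's weights to the loss
    Dense(64, activation='relu', kernel_regularizer=l1(0.01),
          input_shape=(20,)),  # input_shape=(20,) is an assumed example
    # L2 penalty: adds lambda * sum(w^2) over this layer's weights instead
    Dense(64, activation='relu', kernel_regularizer=l2(0.01)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

Because the penalty terms grow with the magnitude of the weights, the optimizer is discouraged from letting any individual weight become very large.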
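Dropout can be sketched in the same assumed Keras setup; the dropout rate of 0.5 and the layer sizes are again illustrative assumptions:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    # Each unit of the previous layer is zeroed with probability 0.5
    # at each training step; dropout is inactive at inference time
    Dropout(0.5),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

Since a different random subset of units is removed on every iteration, no single unit can dominate the model's predictions.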
In this chapter, we will learn about evaluation techniques beyond accuracy. For any data scientist, the first step after building a model is to evaluate it, and the easiest way to do so is through its accuracy. However,...