This chapter began by showing you how to build a neural network from scratch, and we demonstrated that network in a web application created entirely with R code. We then delved into how the neural network actually works, showing how to code forward propagation, cost functions, and backpropagation. Finally, we looked at how the parameters of our from-scratch network carry over to modern deep learning libraries by examining the mx.model.FeedForward.create function from the mxnet deep learning library.
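As a reminder of that last point, here is a minimal sketch of how mx.model.FeedForward.create is typically called. The layer sizes, hyperparameter values, and the train.x/train.y objects are illustrative assumptions, not values from the chapter:

```r
library(mxnet)

## Define a one-hidden-layer network symbolically
data    <- mx.symbol.Variable("data")
fc1     <- mx.symbol.FullyConnected(data, num_hidden = 64)
act1    <- mx.symbol.Activation(fc1, act_type = "relu")
fc2     <- mx.symbol.FullyConnected(act1, num_hidden = 10)
softmax <- mx.symbol.SoftmaxOutput(fc2)

## Train it; train.x and train.y are assumed to hold the
## predictors and labels, respectively
model <- mx.model.FeedForward.create(
  symbol        = softmax,
  X             = train.x,
  y             = train.y,
  ctx           = mx.cpu(),
  num.round     = 10,      # number of training epochs
  learning.rate = 0.05,
  momentum      = 0.9,
  eval.metric   = mx.metric.accuracy
)
```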
We then covered overfitting, demonstrating several approaches to preventing it: common penalties, namely the L1 penalty and the L2 penalty; ensembles of simpler models; and dropout, in which variables and/or cases are randomly dropped during training to add noise to the model. We also examined the role of penalties in regression problems and neural networks. In the next chapter, we will move into deep learning and...
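To make the penalty idea concrete, the following sketch contrasts the L1 and L2 penalties in a regression setting using the glmnet package; the simulated data are purely illustrative and are not taken from the chapter:

```r
library(glmnet)

set.seed(42)
x <- matrix(rnorm(100 * 20), nrow = 100)  # 100 cases, 20 predictors
y <- x[, 1] - 2 * x[, 2] + rnorm(100)     # only two predictors matter

fit.l1 <- glmnet(x, y, alpha = 1)  # alpha = 1: L1 (lasso) penalty
fit.l2 <- glmnet(x, y, alpha = 0)  # alpha = 0: L2 (ridge) penalty

## The L1 penalty shrinks many coefficients exactly to zero,
## while the L2 penalty shrinks them smoothly toward zero
coef(fit.l1, s = 0.1)
coef(fit.l2, s = 0.1)
```

Comparing the two coefficient vectors at the same penalty strength shows why the L1 penalty is often used for variable selection, while the L2 penalty is preferred when all predictors are expected to contribute a little.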