Multilayer perceptrons are the simplest form of neural network. They are feedforward, lacking the feedback loops of recurrent neural networks, and all of their hidden layers are dense, fully connected layers, unlike convolutional neural networks, which feature convolutional and pooling layers. Given their simplicity, there are fewer options to adjust. In this chapter, we therefore focused on tuning the number of nodes in the hidden layer and on adding additional layers, since hidden layers are the main element that separates neural network models, and by extension all deep learning methods, from other machine learning algorithms. Working through the code in this chapter, you have learned how to process data so that it is ready for modeling, how to select the optimal number of nodes and layers, and how to train and evaluate a model using the mxnet library for R.
In the next chapter, you will...