Summary
In this chapter, we began with an introduction to artificial neural networks, covering how they are structured and the processes by which they learn to complete a particular task. Starting with a supervised learning example, we built an artificial neural network classifier to identify objects within the CIFAR-10 dataset. We then progressed to the autoencoder architecture and learned how to use these networks to prepare a dataset for use in an unsupervised learning problem. Finally, we extended this investigation of autoencoders by looking at convolutional neural networks and the benefits these additional layers can provide. This chapter has prepared us well for the final installment on dimensionality reduction, in which we will use and visualize the encoded data with t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE provides an extremely effective method of visualizing high-dimensional data, even after applying dimensionality reduction techniques.