We began the chapter with the evolutionary history of DFNs and deep learning. We learned about the layered architecture of DFNs and the various aspects involved in training them, such as loss functions, gradient descent, backpropagation, optimizers, and regularization. Then we coded our way through our first DFN with both TensorFlow and Keras. Starting with the open source Fashion MNIST dataset, we learned the step-by-step process of building a network, from handling the data to training our model.
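As a quick recap, a minimal sketch of that Keras workflow might look like the following; the layer sizes and hyperparameters here are illustrative choices, not necessarily the exact ones used earlier in the chapter:

```python
import tensorflow as tf

# Load the open source Fashion MNIST dataset and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A simple layered DFN: flatten the 28x28 images, pass them through a
# hidden layer, and end with a 10-way softmax over the clothing classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),  # regularization, one of the aspects above
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Pick a loss function and an optimizer, then train via backpropagation.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```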
In the next chapter, we shall explore the architectures of Boltzmann machines and autoencoders.