Most programmers and data scientists struggle with mathematics, having either overlooked or forgotten core mathematical concepts. This book helps you understand the math behind how various neural networks work so that you can go on to build better deep learning (DL) models.
You'll begin by learning about the core mathematical and modern computational techniques used to design and implement DL algorithms. The book covers essential topics, such as linear algebra, eigenvalues and eigenvectors, singular value decomposition (SVD), and gradient-based algorithms, to help you understand how deep neural networks are trained. Later chapters cover important neural network architectures, such as linear neural networks, multilayer perceptrons, and radial basis function networks, with an emphasis on how each model works. As you advance, you'll delve into the math behind normalization, multilayer DL, forward propagation, optimization, and backpropagation to understand what it takes to build full-fledged DL models. Finally, you'll explore convolutional neural network (CNN), recurrent neural network (RNN), and generative adversarial network (GAN) models and their implementation.
By the end of this book, you'll have built a strong foundation in the mathematical concepts behind neural networks and DL, which will help you confidently research and build custom DL models.