Hands-On Mathematics for Deep Learning
Build a solid mathematical foundation for training efficient deep neural networks

Product type: Paperback
Published: June 2020
Publisher: Packt
ISBN-13: 9781838647292
Length: 364 pages
Edition: 1st Edition
Author: Jay Dawani
Table of Contents

Preface
Section 1: Essential Mathematics for Deep Learning
  Chapter 1: Linear Algebra
  Chapter 2: Vector Calculus
  Chapter 3: Probability and Statistics
  Chapter 4: Optimization
  Chapter 5: Graph Theory
Section 2: Essential Neural Networks
  Chapter 6: Linear Neural Networks
  Chapter 7: Feedforward Neural Networks
  Chapter 8: Regularization
  Chapter 9: Convolutional Neural Networks
  Chapter 10: Recurrent Neural Networks
Section 3: Advanced Deep Learning Concepts Simplified
  Chapter 11: Attention Mechanisms
  Chapter 12: Generative Models
  Chapter 13: Transfer and Meta Learning
  Chapter 14: Geometric Deep Learning
Other Books You May Enjoy

What this book covers

Chapter 1, Linear Algebra, will give you an understanding of the inner workings of linear algebra, which is essential for understanding how deep neural networks work. In particular, you will learn about multi-dimensional linear equations, how matrices are multiplied together, and various methods of decomposing/factorizing matrices. These concepts will be critical for developing an intuition for how forward propagation works in neural networks.
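
As a minimal illustration of the kind of operations this chapter covers (my own NumPy sketch, not code from the book), the following multiplies two matrices and factorizes one with the singular value decomposition:

import numpy as np

# Illustrative matrices of my own choosing: multiply a 2x3 matrix by a 3x2 matrix.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
C = A @ B          # matrix product, shape (2, 2)

# Decompose A with the singular value decomposition: A = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
reconstructed = U @ np.diag(S) @ Vt
print(C)
print(np.allclose(A, reconstructed))   # True: the factorization reproduces A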

Chapter 2, Vector Calculus, will cover all the main concepts of calculus, where you will start by learning the fundamentals of single variable calculus and build toward an understanding of multi-variable and ultimately vector calculus. The concepts of this chapter will help you better understand the math that underlies the training process of neural networks, particularly how backpropagation works.
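
As a hedged sketch of the chain rule that backpropagation relies on (an example of my own, not the book's), this compares an analytic derivative with a finite-difference approximation:

import numpy as np

def f(x):
    # Composite function f(x) = sin(x**2); by the chain rule f'(x) = 2*x*cos(x**2).
    return np.sin(x ** 2)

def analytic_grad(x):
    return 2 * x * np.cos(x ** 2)

x = 1.3
eps = 1e-6
numeric_grad = (f(x + eps) - f(x - eps)) / (2 * eps)   # central difference
print(analytic_grad(x), numeric_grad)                  # the two values agree closely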

Chapter 3, Probability and Statistics, will teach you the essentials of both probability and statistics and how they are related to each other. In particular, the focus will be on understanding different types of distributions, the importance of the central limit theorem, and how estimations are made. This chapter is critical to developing an understanding of what exactly it is that neural networks are learning.
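
A small sketch of the central limit theorem (an illustrative experiment of my own, not from the book): means of many samples drawn from a skewed distribution are approximately normally distributed around the true mean:

import numpy as np

rng = np.random.default_rng(0)

# Draw 10,000 sample means, each over 50 draws from a (skewed) exponential distribution.
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

# By the central limit theorem the sample means cluster near the true mean (1.0)
# with standard deviation roughly sigma / sqrt(n) = 1 / sqrt(50), about 0.14.
print(sample_means.mean(), sample_means.std())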

Chapter 4, Optimization, will explain what optimization is and cover several methods used in practice, such as least squares, gradient descent, Newton's method, and genetic algorithms. The methods covered in this chapter are essential to understanding how neural networks learn during their training phase.
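
As one hedged example of the methods named above, here is plain gradient descent on a simple quadratic (the objective, starting point, and step size are my own choices):

import numpy as np

def loss(w):
    # Simple convex objective: L(w) = (w - 3)^2, minimized at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0           # initial guess
lr = 0.1          # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)    # move against the gradient
print(w, loss(w))        # w converges to ~3.0, loss to ~0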

Chapter 5, Graph Theory, will teach you about graph theory, which is used to model relationships between objects and will also help you understand the different types of neural network architectures. Later in the book, the concepts from this chapter will be very useful for understanding how graph neural networks work.
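
A minimal sketch (a toy graph of my own) of how a graph can be represented as an adjacency matrix, the representation graph neural networks typically operate on:

import numpy as np

# Undirected graph on 4 nodes with edges (0,1), (1,2), (2,3), (0,3).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0   # symmetric because the graph is undirected

degrees = A.sum(axis=1)        # node degrees
print(A)
print(degrees)                 # every node in this toy graph has degree 2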

Chapter 6, Linear Neural Networks, will cover the most basic type of neural network and teach you how a model learns to find linear relationships from data through regression. You will also learn that this type of model has limitations, which is where the need for neural networks arises.
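
As a hedged sketch of the regression idea described above (synthetic data of my own making), a least-squares solve recovers the parameters of a line from noisy samples:

import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 2x + 1 plus a little noise.
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=100)

# Design matrix with a bias column; solve the least-squares problem X w = y.
X = np.column_stack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)   # approximately [2.0, 1.0]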

Chapter 7, Feedforward Neural Networks, will show you how all the concepts covered in the previous chapters are brought together to form modern-day neural networks, including coverage of how they are structured, how and what they learn, and what makes them so powerful.
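
A minimal, hand-rolled forward pass of a two-layer network (the random, untrained weights are placeholders of my own, not from the book), tying the earlier linear algebra to a feedforward model:

import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(0.0, z)

# Tiny network: 3 inputs -> 4 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])
h = relu(W1 @ x + b1)      # hidden layer: affine map followed by a nonlinearity
y = W2 @ h + b2            # output layer (for example, class logits)
print(y)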

Chapter 8, Regularization, will show you the various methods of regularization, such as dropout and norm penalties, that are used extensively in practice to help our models generalize to test data so that they perform well once deployed.
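
Two of the techniques mentioned above, sketched with toy values of my own: an L2 (weight decay) penalty added to a loss, and an inverted-dropout mask applied to activations:

import numpy as np

rng = np.random.default_rng(3)

# L2 norm penalty: add lambda * ||w||^2 to the data loss.
w = rng.normal(size=10)
data_loss = 0.42                       # placeholder value for the unregularized loss
lam = 1e-2
total_loss = data_loss + lam * np.sum(w ** 2)

# Inverted dropout: zero each activation with probability p and rescale the rest.
h = rng.normal(size=8)
p = 0.5
mask = (rng.random(8) > p) / (1.0 - p)
h_dropped = h * mask
print(total_loss, h_dropped)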

Chapter 9, Convolutional Neural Networks, will explain CNNs, which are a variant of feedforward neural networks and are particularly effective for tasks related to computer vision, as well as time series analysis.
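
A hedged sketch of the core CNN operation, a single-channel 2D cross-correlation, written naively for clarity rather than speed (the input and kernel are my own toy values):

import numpy as np

def conv2d(image, kernel):
    # Valid cross-correlation of a 2D image with a 2D kernel (no padding, stride 1).
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, -1.0]])    # crude horizontal edge detector
print(conv2d(image, edge_kernel))        # output has shape (5, 4)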

Chapter 10, Recurrent Neural Networks, will explain RNNs, which are another variant of feedforward neural networks that have recurrent connections, which give them the ability to learn relationships in sequences such as those in time series and language.
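
A minimal sketch (random, untrained weights of my own) of the recurrence at the heart of a vanilla RNN: the same parameters are reused at every time step, with the hidden state carrying information forward through the sequence:

import numpy as np

rng = np.random.default_rng(4)

hidden, inputs = 5, 3
Wh = rng.normal(size=(hidden, hidden)) * 0.1   # hidden-to-hidden weights
Wx = rng.normal(size=(hidden, inputs)) * 0.1   # input-to-hidden weights
b = np.zeros(hidden)

h = np.zeros(hidden)                            # initial hidden state
sequence = rng.normal(size=(7, inputs))         # 7 time steps of 3 features each
for x_t in sequence:
    h = np.tanh(Wh @ h + Wx @ x_t + b)          # same parameters at every step
print(h)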

Chapter 11, Attention Mechanisms, will introduce a relatively recent breakthrough in deep learning known as attention, which has led to the creation of transformer models that have achieved phenomenal results in tasks related to natural language processing.
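
As a hedged sketch of the scaled dot-product attention used in transformers (the toy shapes and random inputs are my own choices):

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(5)
seq_len, d_k = 4, 8
Q = rng.normal(size=(seq_len, d_k))   # queries
K = rng.normal(size=(seq_len, d_k))   # keys
V = rng.normal(size=(seq_len, d_k))   # values

# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
weights = softmax(Q @ K.T / np.sqrt(d_k))
output = weights @ V
print(weights.sum(axis=1))   # each row of attention weights sums to 1
print(output.shape)          # (4, 8)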

Chapter 12, Generative Models, is where the focus shifts from neural networks that learn to predict classes from data to models that learn to synthesize data. You will learn about various models, such as autoencoders, GANs, and flow-based networks.
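
A hedged sketch of the simplest model family mentioned above, an autoencoder that compresses an input to a low-dimensional code and reconstructs it (the weights here are random and untrained, purely illustrative):

import numpy as np

rng = np.random.default_rng(6)

d_in, d_code = 16, 4
W_enc = rng.normal(size=(d_code, d_in)) * 0.1   # encoder weights
W_dec = rng.normal(size=(d_in, d_code)) * 0.1   # decoder weights

x = rng.normal(size=d_in)
code = np.tanh(W_enc @ x)            # compressed latent representation
x_hat = W_dec @ code                 # reconstruction of the input
reconstruction_error = np.mean((x - x_hat) ** 2)
print(code.shape, reconstruction_error)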

Chapter 13, Transfer and Meta Learning, will teach you about two separate but related concepts: transfer learning, whose goal is to transfer what one model has learned to another so it can work on a similar task, and meta learning, whose goal is to create networks that can use existing knowledge to learn new tasks, or learn how to learn.

Chapter 14, Geometric Deep Learning, will explain another relatively new concept in deep learning, which extends the power of deep neural networks from the Euclidean domain to the non-Euclidean domain.
