Summary
In this chapter, we learned about two linear algebraic methods for reducing the dimensionality of data: principal component analysis and linear discriminant analysis. The focus was on PCA, an unsupervised method for reducing the feature space of high-dimensional data, and on why this reduction matters for solving business problems. We studied the mathematics behind the algorithm in detail and saw how it fits into ML models. We also covered a couple of important applications of PCA, along with the corresponding Python code.
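As a quick refresher on the core idea, a minimal PCA can be sketched with NumPy alone: center the data, eigendecompose the covariance matrix, and project onto the top-k eigenvectors. This is an illustrative sketch, not the chapter's own code; the function name `pca_reduce` is ours.

```python
import numpy as np

def pca_reduce(X, k):
    # Center the data so each feature has zero mean
    Xc = X - X.mean(axis=0)
    # Covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    # eigh handles symmetric matrices like the covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort components by descending eigenvalue (explained variance)
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:k]]
    # Project the centered data onto the top-k principal components
    return Xc @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, 5 features
Z = pca_reduce(X, 2)
print(Z.shape)                  # (100, 2)
```

In practice, libraries such as scikit-learn offer an equivalent, more robust implementation via `sklearn.decomposition.PCA`.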
In the next chapter, we will learn about an optimization method called Gradient Descent, which is arguably the most common (and popular) algorithm for optimizing neural networks. It is an iterative algorithm that minimizes a given cost function: as the name suggests, at each step it uses the gradient (derivative) of the function to move toward a minimum.
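As a small preview of the idea, the update rule can be sketched in a few lines: repeatedly step opposite to the gradient, scaled by a learning rate. This is a minimal sketch under our own naming (`gradient_descent`, `lr`), not the next chapter's code.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly move against the gradient to reduce the function value
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

The same rule, applied to a cost function over many parameters, is what drives the training of neural networks.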