A Handbook of Mathematical Models with Python
Elevate your machine learning projects with NetworkX, PuLP, and linalg

Product type: Paperback
Published: August 2023
Publisher: Packt
ISBN-13: 9781804616703
Length: 144 pages
Edition: 1st
Author: Ranja Sarkar
Table of Contents

Preface
Part 1: Mathematical Modeling
Chapter 1: Introduction to Mathematical Modeling
Chapter 2: Machine Learning vis-à-vis Mathematical Modeling
Part 2: Mathematical Tools
Chapter 3: Principal Component Analysis
Chapter 4: Gradient Descent
Chapter 5: Support Vector Machine
Chapter 6: Graph Theory
Chapter 7: Kalman Filter
Chapter 8: Markov Chain
Part 3: Mathematical Optimization
Chapter 9: Exploring Optimization Techniques
Chapter 10: Optimization Techniques for Machine Learning
Index
Other Books You May Enjoy

Linear algebra for PCA

PCA is an unsupervised method used to reduce the number of features of a high-dimensional dataset. An unlabeled dataset is broken into its constituent parts by matrix factorization (or decomposition), and those parts are then ranked according to the variance each one explains. The projected data, which is representative of the original data, becomes the input for training ML models.
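To make the factorization-and-ranking step concrete, here is a minimal sketch using NumPy's linalg module on a small synthetic data matrix (the data and variable names are illustrative, not from the book): the covariance matrix of the centered data is eigendecomposed, and the eigenpairs are sorted by descending eigenvalue, that is, by the variance each component explains.

# Illustrative sketch: factorize a synthetic data matrix and rank
# the resulting components by variance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 samples, 5 features

X_centered = X - X.mean(axis=0)          # center each feature at zero mean
cov = np.cov(X_centered, rowvar=False)   # 5 x 5 covariance matrix

# Eigendecomposition of the covariance matrix: each eigenvalue is the
# variance along its eigenvector (a candidate principal component).
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order; reorder the eigenpairs
# by descending variance, as PCA requires.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("variance captured by each component:", eigvals)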

PCA is defined as the orthogonal projection of data onto a lower-dimensional linear space called the principal subspace. The projection is obtained by finding new axes (or basis vectors) that preserve the maximum variance of the projected data; these new axes are known as principal components. PCA preserves information by ordering the components by the variance of the projections: the highest variance lies along the first axis, the second highest along the second, and so forth. The working principle of the linear transformation called PCA is shown in Figure 3.2. It compresses the feature space by identifying a subspace that captures...
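Continuing the sketch above, projecting the centered data onto the top-k variance-ranked eigenvectors yields the reduced representation described here; the choice k = 2 is an arbitrary illustration, not a value from the book.

# Continuing the sketch above: eigvecs holds the variance-ranked
# eigenvectors and X_centered the zero-mean data.
k = 2                            # illustrative subspace dimension
W = eigvecs[:, :k]               # basis of the principal subspace (5 x 2)
X_projected = X_centered @ W     # orthogonal projection; shape (100, 2)

# Column variances come out in descending order: the first principal
# component captures the most variance, the second the next most.
print(X_projected.var(axis=0))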
