Each feature in our data corresponds to a dimension in our problem space. Reducing the number of features to simplify the problem space is called dimensionality reduction. It can be done in one of the following two ways:
Feature selection: Selecting a set of features that are important in the context of the problem we are trying to solve
Feature aggregation: Combining two or more features to reduce dimensions using one of the following algorithms:
Principal component analysis (PCA): A linear unsupervised ML algorithm
Linear discriminant analysis (LDA): A linear supervised ML algorithm
Kernel principal component analysis (KPCA): A nonlinear algorithm
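As a minimal sketch of both approaches, the following example uses scikit-learn on the Iris dataset (an assumption for illustration; any labeled tabular dataset would do). Feature selection keeps a subset of the original columns, whereas the three aggregation algorithms each project the data into a new, lower-dimensional space:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes

# Feature selection: keep the 2 original features most related to y
X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)

# Feature aggregation: combine features into new dimensions
X_pca = PCA(n_components=2).fit_transform(X)        # linear, unsupervised (ignores y)
X_lda = (LinearDiscriminantAnalysis(n_components=2)
         .fit_transform(X, y))                      # linear, supervised (uses y)
X_kpca = (KernelPCA(n_components=2, kernel="rbf")
          .fit_transform(X))                        # nonlinear via the RBF kernel

print(X.shape, X_sel.shape, X_pca.shape, X_lda.shape, X_kpca.shape)
# → (150, 4) (150, 2) (150, 2) (150, 2) (150, 2)
```

Note the practical difference: selected features keep their original meaning (e.g., petal length), while aggregated components are combinations of features and are generally harder to interpret.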
Let's look at one of the most popular dimensionality reduction algorithms, PCA, in more detail.