Feature selection
When we use features to predict a target, some features matter more than others. For example, if we are predicting whether someone will default on a loan, their credit score will be a much better predictor of default than their height or weight. While we can feed a large number of features into ML models, it's usually better to trim that number with feature selection methods. ML algorithms cost computational power and time to run, and the fewer, more informative features we supply, the faster and often more accurate our models will be. With feature selection, we screen our inputs for the most promising candidates: features that have some relationship to the target variable and therefore carry predictive signal. Variables that aren't going to help predict the target can then be discarded.
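As a concrete illustration, here is a minimal sketch of one common filter-style approach using scikit-learn's SelectKBest. The tiny DataFrame and its column names (credit_score, height_cm, weight_kg, default) are hypothetical stand-ins for the loan example above, not data from this book:

```python
# A minimal sketch of filter-based feature selection, assuming
# pandas and scikit-learn are installed. The data below is a
# made-up toy example echoing the loan-default scenario.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

df = pd.DataFrame({
    "credit_score": [720, 580, 690, 610, 750, 540],
    "height_cm":    [170, 165, 180, 175, 160, 185],
    "weight_kg":    [70, 80, 75, 85, 60, 90],
    "default":      [0, 1, 0, 1, 0, 1],
})

X = df.drop(columns="default")
y = df["default"]

# Score each feature by its ANOVA F-statistic against the target,
# then keep only the k highest-scoring columns.
selector = SelectKBest(score_func=f_classif, k=2)
selector.fit(X, y)

scores = pd.Series(selector.scores_, index=X.columns)
print(scores.sort_values(ascending=False))
print("selected:", list(X.columns[selector.get_support()]))
```

On this toy data, credit_score scores far higher than height_cm or weight_kg, so it survives the screen while the weakly related features can be dropped before modeling.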
The curse of dimensionality
Feature selection is related to a concept called "the curse of dimensionality...