In real life, most features have different ranges, magnitudes, and units; for example, age typically falls between 0 and 100, while salary can run from thousands to millions. From a data analyst's or data scientist's point of view, how can we compare features that sit on such different scales? High-magnitude features will weigh more heavily in machine learning models than low-magnitude ones. Thankfully, feature scaling (also known as feature normalization) can solve this issue.
Feature scaling brings all the features to the same order of magnitude. It is not compulsory for every algorithm, but some algorithms clearly need scaled data, particularly those that rely on Euclidean distance, such as K-nearest neighbors and the K-means clustering algorithm, as the following sketch shows.
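To see why distance-based algorithms care about scale, here is a minimal sketch (the ages and salaries below are invented for illustration) showing how an unscaled, high-magnitude feature dominates the Euclidean distance:

```python
import numpy as np

# Two people described by (age in years, salary in dollars);
# the values are made up for illustration.
a = np.array([25.0, 50_000.0])
b = np.array([60.0, 52_000.0])

# Unscaled Euclidean distance: the salary gap (2,000) dwarfs the
# age gap (35), so salary alone decides which points look "close".
print(np.linalg.norm(a - b))  # ~2000.3

# After z-score scaling each column of a toy dataset, both features
# contribute on comparable scales.
data = np.array([[25.0, 50_000.0],
                 [60.0, 52_000.0],
                 [40.0, 90_000.0]])
scaled = (data - data.mean(axis=0)) / data.std(axis=0)
print(np.linalg.norm(scaled[0] - scaled[1]))
```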
Methods for feature scaling
Now, let's look at the various methods we can use for feature scaling:
- Standard Scaling or Z-Score Normalization: This method computes the scaled value of a feature by subtracting the feature's mean and dividing by its standard deviation, z = (x − μ) / σ, so that the scaled feature has a mean of 0 and a standard deviation of 1 (see the sketch after this item).
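As a minimal sketch of this method, assuming a toy feature matrix with invented age and salary values, scikit-learn's StandardScaler applies exactly this transformation:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrix: columns are age and salary
# (values invented for illustration).
X = np.array([[25.0, 50_000.0],
              [40.0, 90_000.0],
              [60.0, 52_000.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Each column now has mean ~0 and standard deviation ~1.
print(X_scaled.mean(axis=0))  # ~[0, 0]
print(X_scaled.std(axis=0))   # ~[1, 1]
```

In practice, you would fit the scaler on the training set only and reuse it to transform validation and test data, so that no information from the held-out splits leaks into the scaling parameters.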