Why use min-max scaling instead of standardisation?

Screen Link: https://app.dataquest.io/m/236/feature-selection/5/removing-low-variance-features

Here we are introduced to min-max scaling. Why do we use min-max scaling, (x - min) / (max - min), which gives values between 0 and 1, rather than standardisation (z-score), (x - mu) / sigma? These give different results, since standardisation can produce values greater than 1 (or less than 0).

Why use one and not the other?


Hi @danieldominey,

Both methods serve different purposes.

With min-max normalization, if the data is right-skewed it stays right-skewed; the transformation only rescales the values into the [0, 1] range.

Standard (z-score) normalization, on the other hand, rescales the data to have mean 0 and standard deviation 1. Note that it does not make the data normally distributed: like min-max scaling, it is a linear transformation, so the shape of the distribution (including any skew) is preserved.
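To make this concrete, here is a minimal sketch (not from the lesson; the sample values are made up) showing both transformations on a right-skewed array. Min-max lands in [0, 1], z-scores get mean 0 and std 1, and the skew of the data is identical under both, since each is just a linear rescaling:

```python
import numpy as np

# A small right-skewed sample: mostly small values plus one large outlier.
x = np.array([1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 20.0])

# Min-max scaling: (x - min) / (max - min) -> values in [0, 1].
minmax = (x - x.min()) / (x.max() - x.min())

# Standardization (z-score): (x - mean) / std -> mean 0, std 1.
z = (x - x.mean()) / x.std()

def skew(a):
    """Sample skewness: third standardized moment."""
    return ((a - a.mean()) ** 3).mean() / a.std() ** 3

print(minmax.min(), minmax.max())               # bounds of min-max output
print(round(z.mean(), 10), round(z.std(), 10))  # mean/std of z-scores
# Skew is unchanged by either linear transform:
print(round(skew(x), 6), round(skew(minmax), 6), round(skew(z), 6))
```

Running this shows all three skew values agree, which is the point: neither method "fixes" a skewed distribution, they only change its location and scale.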

Some machine learning algorithms work best when features are centred and on comparable scales (and a few assume approximately normally distributed inputs), so in those cases you would apply z-normalization.

And if you just want to scale values down to a common range, you can apply min-max normalization. For example, if you have two columns, one with values in the thousands and another in the millions, min-max scaling puts both on the same [0, 1] scale.