Standardization vs. Normalization

Zaid Alissa Almaliki
2 min read · Jul 27, 2018
(Image: Feature Scaling, via StackExchange)

Standardization

Standardization (or Z-score normalization) is the process of rescaling the features so that they have the properties of a standard normal distribution with

μ = 0 and σ = 1

where μ is the mean and σ is the standard deviation. The standard score (also called the z-score) of a sample x is calculated as follows:

z = (x − μ) / σ
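As a quick sanity check, here is a minimal NumPy sketch of the formula above; the toy array is made up purely for illustration:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # toy data, for illustration only

mu = x.mean()      # mean of the feature
sigma = x.std()    # standard deviation of the feature

z = (x - mu) / sigma      # the z-score formula from above
print(z.mean(), z.std())  # approximately 0.0 and 1.0
```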

Normalization

Normalization, often also simply called min-max scaling, shrinks the range of the data so that it is fixed between 0 and 1 (or −1 to 1 if there are negative values). It works better in cases where standardization does not: if the distribution is not Gaussian, or the standard deviation is very small, the min-max scaler is the better choice.

Normalization is typically done via the following equation:

X_norm = (X − X_min) / (X_max − X_min)

(Min-max scaling equation via Chris Albon.)
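The same formula in a minimal NumPy sketch, again on a made-up toy array:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # toy data, for illustration only

# min-max scaling: map the smallest value to 0 and the largest to 1
x_scaled = (x - x.min()) / (x.max() - x.min())
print(x_scaled)  # [0.   0.25 0.5  0.75 1.  ]
```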

Use Cases

Some examples of algorithms where feature scaling is important are:

  • K-nearest neighbors with a Euclidean distance measure, if you want all features to contribute equally.
  • Logistic regression, SVM, perceptrons, neural networks.
  • K-means.
  • Linear discriminant analysis, principal component analysis, kernel principal component analysis.

Graphical-model-based classifiers, such as Fisher LDA or Naive Bayes, as well as decision trees and tree-based ensemble methods such as Random Forest, are invariant to feature scaling, but it can still be a good idea to rescale the data. The sketch below makes the K-nearest-neighbors case concrete.
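This is a hedged sketch, on a synthetic scikit-learn dataset invented for this example rather than any dataset from the article, of how one unscaled feature can dominate the Euclidean distances that K-nearest neighbors relies on:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data, made up for this illustration
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X[:, 0] *= 1000  # blow up one feature's scale so it dominates the distances

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

unscaled = KNeighborsClassifier().fit(X_train, y_train)
scaled = make_pipeline(StandardScaler(), KNeighborsClassifier()).fit(X_train, y_train)

# The scaled pipeline usually scores noticeably higher
print("without scaling:", unscaled.score(X_test, y_test))
print("with scaling:   ", scaled.score(X_test, y_test))
```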

Drawbacks

Min-max normalization is sensitive to outliers, so if there are outliers in the data set it is a bad choice. Standardization, unlike normalization, creates new data that is not bounded to a fixed range.
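A tiny sketch of the outlier problem, on made-up numbers: one extreme value squashes everything else into a narrow band near zero.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])  # 1000.0 is an outlier

x_minmax = (x - x.min()) / (x.max() - x.min())
print(x_minmax)  # the first four values all land below ~0.004
```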

Here is a simple example in Python showing how standardization works on the Sonar dataset.
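This is a hedged reconstruction of that example. It assumes the UCI "sonar.all-data" file (60 numeric sonar readings plus a label column) is still reachable at the URL below; adjust the path if it has moved.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Assumed location of the UCI Sonar dataset; adjust if it has moved
url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "undocumented/connectionist-bench/sonar/sonar.all-data")
df = pd.read_csv(url, header=None)

X = df.iloc[:, :-1]  # the 60 numeric sonar readings
y = df.iloc[:, -1]   # 'M' (mine) / 'R' (rock) labels

X_std = StandardScaler().fit_transform(X)

# After standardization every column has mean ~0 and standard deviation ~1
print(X_std.mean(axis=0).round(2))
print(X_std.std(axis=0).round(2))
```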

Conclusion

Standardization and normalization were created to achieve a similar target: building features that have similar ranges to each other, which is widely useful in data analysis for getting some clue out of the raw data. In statistics, standardization means subtracting the mean and then dividing by the standard deviation. In linear algebra, normalization means dividing a vector by its length; min-max normalization, as used here, transforms the data into a range between 0 and 1.


Zaid Alissa Almaliki

Founder, Principal Data Engineer, and Cloud Architect Consultant at DataAkkadian.