Aniruddha Bhandari — April 3, 2020

Introduction to Feature Scaling

I was recently working with a dataset from an ML course that had multiple features spanning varying degrees of magnitude, range, and units. This is a significant obstacle, as a few machine learning algorithms are highly sensitive to these differences in scale.

I’m sure most of you have faced this issue in your projects or your learning journey. For example, one feature is entirely in kilograms while another is in grams, a third is in liters, and so on. How can we use these features when they vary so vastly in what they represent?

This is where I turned to the concept of feature scaling. It’s a crucial part of the data preprocessing stage but I’ve seen a lot of beginners overlook it (to the detriment of their machine learning model).


Here’s the curious thing about feature scaling – it significantly improves the performance of some machine learning algorithms and makes no difference at all for others. What could be the reason behind this quirk?

Also, what’s the difference between normalization and standardization? These are two of the most commonly used feature scaling techniques in machine learning but a level of ambiguity exists in their understanding. When should you use which technique?

I will answer these questions and more in this article on feature scaling. We will also implement feature scaling in Python to give you a practical understanding of how it works for different machine learning algorithms.

Note: I assume that you are familiar with Python and core machine learning algorithms. If you’re new to these, I recommend brushing up on both before diving in.

 

Table of Contents

  1. Why Should we Use Feature Scaling?
  2. What is Normalization?
  3. What is Standardization?
  4. The Big Question – Normalize or Standardize?
  5. Implementing Feature Scaling in Python
    • Normalization using Sklearn
    • Standardization using Sklearn
  6. Applying Feature Scaling to Machine Learning Algorithms
    • K-Nearest Neighbours (KNN)
    • Support Vector Regressor
    • Decision Tree

 

Why Should we Use Feature Scaling?

The first question we need to address – why do we need to scale the variables in our dataset? Some machine learning algorithms are sensitive to feature scaling while others are virtually invariant to it. Let me explain that in more detail.

 

Gradient Descent Based Algorithms

Machine learning algorithms like linear regression, logistic regression, and neural networks that use gradient descent as an optimization technique require the data to be scaled. Take a look at the formula for gradient descent below:

θ_j := θ_j − α · (1/m) · Σ ( h_θ(x^(i)) − y^(i) ) · x_j^(i)

The presence of feature value X in the formula will affect the step size of the gradient descent. The difference in ranges of features will cause different step sizes for each feature. To ensure that the gradient descent moves smoothly towards the minima and that the steps for gradient descent are updated at the same rate for all the features, we scale the data before feeding it to the model.

Having features on a similar scale can help the gradient descent converge more quickly towards the minima.

 

Distance-Based Algorithms

Distance algorithms like KNN, K-means, and SVM are most affected by the range of features. This is because behind the scenes they are using distances between data points to determine their similarity.

For example, let’s say we have data containing the high school CGPA scores of students (ranging from 0 to 5) and their future incomes (in thousands of rupees):

Feature scaling: Unscaled Knn example

Since both the features have different scales, there is a chance that higher weightage is given to the feature with the higher magnitude. This will impact the performance of the machine learning algorithm, and obviously, we do not want our algorithm to be biased towards one feature.

Therefore, we scale our data before employing a distance-based algorithm so that all the features contribute equally to the result.

Feature scaling: Scaled Knn example

The effect of scaling is conspicuous when we compare the Euclidean distance between data points for students A and B, and between B and C, before and after scaling as shown below:

  • Distance AB before scaling => Euclidean distance
  • Distance BC before scaling => Euclidean distance
  • Distance AB after scaling => Euclidean distance
  • Distance BC after scaling => Euclidean distance

Scaling has brought both the features into the picture and the distances are now more comparable than they were before we applied scaling.

 

Tree-Based Algorithms

Tree-based algorithms, on the other hand, are fairly insensitive to the scale of the features. Think about it: a decision tree splits a node based only on a single feature, the one that most increases the homogeneity of the node. This split on a feature is not influenced by the other features.

So, there is virtually no effect of the remaining features on the split. This is what makes them invariant to the scale of the features!

 

What is Normalization?

Normalization is a scaling technique in which values are shifted and rescaled so that they end up ranging between 0 and 1. It is also known as Min-Max scaling.

Here’s the formula for normalization:

X' = (X − Xmin) / (Xmax − Xmin)

Here, Xmax and Xmin are the maximum and the minimum values of the feature respectively.

  • When the value of X is the minimum value in the column, the numerator will be 0, and hence X’ is 0
  • On the other hand, when the value of X is the maximum value in the column, the numerator is equal to the denominator and thus the value of X’ is 1
  • If the value of X is between the minimum and the maximum value, then the value of X’ is between 0 and 1
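
For example, if a feature takes the values 10, 20, and 40, then Xmin = 10 and Xmax = 40, and the normalized values become 0, (20 − 10)/(40 − 10) ≈ 0.33, and 1 respectively.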

 

What is Standardization?

Standardization is another scaling technique where the values are centered around the mean with a unit standard deviation. This means that the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation.

Here’s the formula for standardization:

X' = (X − μ) / σ

Here, μ is the mean of the feature values and σ is the standard deviation of the feature values. Note that in this case, the values are not restricted to a particular range.
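
For example, if a feature has a mean of 20 and a standard deviation of 5, a value of 30 standardizes to (30 − 20)/5 = 2, while a value of 15 standardizes to −1.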

Now, the big question in your mind must be when should we use normalization and when should we use standardization? Let’s find out!

 

The Big Question – Normalize or Standardize?

Normalization vs. standardization is an eternal question among machine learning newcomers. Let me elaborate on the answer in this section.

  • Normalization is good to use when you know that the distribution of your data does not follow a Gaussian distribution. This can be useful in algorithms that do not assume any distribution of the data like K-Nearest Neighbors and Neural Networks.
  • Standardization, on the other hand, can be helpful in cases where the data follows a Gaussian distribution. However, this does not have to be necessarily true. Also, unlike normalization, standardization does not have a bounding range, so even if you have outliers in your data, standardization will not squash them into a narrow fixed interval.

However, at the end of the day, the choice between normalization and standardization will depend on your problem and the machine learning algorithm you are using. There is no hard and fast rule to tell you when to normalize or standardize your data. You can always start by fitting your model to raw, normalized, and standardized data and comparing the performance to see which works best.

It is a good practice to fit the scaler on the training data and then use it to transform the testing data. This would avoid any data leakage during the model testing process. Also, the scaling of target values is generally not required.
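
For instance, here is a minimal sketch of that fit-on-train, transform-on-test pattern (the X_train and X_test names are placeholders for your own split, which we create in the next section):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn the mean and std from the training data only
X_test_scaled = scaler.transform(X_test)        # reuse those same parameters on the test data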

 

Implementing Feature Scaling in Python

Now comes the fun part – putting what we have learned into practice. I will be applying feature scaling to a few machine learning algorithms on the Big Mart dataset, which I’ve taken from the DataHack platform.

I will skip the preprocessing steps since they are outside the scope of this tutorial. But you can find them neatly explained in this article. Those steps will enable you to reach the top 20 percentile on the hackathon leaderboard, so that’s worth checking out!

So, let’s first split our data into training and testing sets:
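
A minimal sketch of that step is shown below; the file name is hypothetical and the column names follow the Big Mart dataset, so adjust them to your own preprocessing:

import pandas as pd
from sklearn.model_selection import train_test_split

# assumes the preprocessed Big Mart data is loaded in a DataFrame `df`
# with `Item_Outlet_Sales` as the target column
df = pd.read_csv('train_cleaned.csv')   # hypothetical file name

X = df.drop('Item_Outlet_Sales', axis=1)
y = df['Item_Outlet_Sales']

# hold out 20% of the rows for testing; the split ratio and seed are arbitrary choices
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=27)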

Before moving to the feature scaling part, let’s glance at the details of our data using the describe() method of our DataFrame:

Feature scaling: Original data

We can see that there is a huge difference in the range of values present in our numerical features: Item_Visibility, Item_Weight, Item_MRP, and Outlet_Establishment_Year. Let’s try and fix that using feature scaling!

Note: You will notice negative values in the Item_Visibility feature because I have taken log-transformation to deal with the skewness in the feature.

 

Normalization using sklearn

To normalize your data, you need to import MinMaxScaler from the sklearn.preprocessing module and apply it to our dataset. So, let’s do that!
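
A minimal sketch of that step, assuming the X_train and X_test DataFrames from the split above (this mirrors the approach described in the article but is not the author’s exact gist):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# fit the scaler on the training data only, then apply the same transformation to the test data
scaler = MinMaxScaler()
X_train_norm = pd.DataFrame(scaler.fit_transform(X_train), columns=X_train.columns)
X_test_norm = pd.DataFrame(scaler.transform(X_test), columns=X_test.columns)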

Let’s see how normalization has affected our dataset:

Feature scaling: Normalized data

All the features now have a minimum value of 0 and a maximum value of 1. Perfect!


 

Next, let’s try to standardize our data.

 

Standardization using sklearn

To standardize your data, you need to import StandardScaler from the sklearn.preprocessing module and apply it to our dataset. Here’s how you can do it:
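
A minimal sketch of that step, again assuming the split from above; the column list matches the numerical features named earlier, and the code is illustrative rather than the author’s exact gist:

from sklearn.preprocessing import StandardScaler

# only the numerical columns are standardized; the One-Hot Encoded features are left untouched
num_cols = ['Item_Weight', 'Item_Visibility', 'Item_MRP', 'Outlet_Establishment_Year']

scaler = StandardScaler()

X_train_stand = X_train.copy()
X_test_stand = X_test.copy()

# fit on the training data, reuse the learned mean/std on the test data
X_train_stand[num_cols] = scaler.fit_transform(X_train_stand[num_cols])
X_test_stand[num_cols] = scaler.transform(X_test_stand[num_cols])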

You would have noticed that I only applied standardization to my numerical columns and not to the One-Hot Encoded features. Standardizing the One-Hot Encoded features would mean assigning a distribution to categorical features. You don’t want to do that!

But why did I not do the same while normalizing the data? Because the One-Hot Encoded features are already in the range of 0 to 1, normalization would not affect their values.

Right, let’s have a look at how standardization has transformed our data:

Feature scaling: Standardized data

The numerical features are now centered on the mean with a unit standard deviation. Awesome!

 

Comparing unscaled, normalized and standardized data

It is always great to visualize your data to understand the distribution present. We can see the comparison between our unscaled and scaled data using boxplots.

You can learn more about data visualization here.
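
A minimal sketch of how such a comparison could be plotted with seaborn, assuming the scaled copies created above (illustrative, not the author’s exact code):

import matplotlib.pyplot as plt
import seaborn as sns

num_cols = ['Item_Weight', 'Item_Visibility', 'Item_MRP', 'Outlet_Establishment_Year']

# one boxplot panel per version of the data
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
sns.boxplot(data=X_train[num_cols], ax=axes[0]).set_title('Original')
sns.boxplot(data=X_train_norm[num_cols], ax=axes[1]).set_title('Normalized')
sns.boxplot(data=X_train_stand[num_cols], ax=axes[2]).set_title('Standardized')
plt.tight_layout()
plt.show()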

Feature scaling: Normalization vs Standardization

 

You can notice how scaling the features brings everything into perspective. The features are now more comparable and will have a similar effect on the learning models.

 

Applying Feature Scaling to Machine Learning Algorithms

It’s now time to train some machine learning algorithms on our data to compare the effects of different scaling techniques on the performance of the algorithm. I want to see the effect of scaling on three algorithms in particular: K-Nearest Neighbours, Support Vector Regressor, and Decision Tree.

 

K-Nearest Neighbours

As we saw before, KNN is a distance-based algorithm that is affected by the range of the features. Let’s see how it performs on our data, before and after scaling:
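
A minimal sketch of that comparison, assuming the scaled copies created earlier; the number of neighbours is an arbitrary choice, not necessarily the author’s setting:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

def knn_rmse(X_tr, X_te):
    # train on one version of the data and report RMSE on the matching test set
    knn = KNeighborsRegressor(n_neighbors=7)
    knn.fit(X_tr, y_train)
    preds = knn.predict(X_te)
    return np.sqrt(mean_squared_error(y_test, preds))

print('Raw:         ', knn_rmse(X_train, X_test))
print('Normalized:  ', knn_rmse(X_train_norm, X_test_norm))
print('Standardized:', knn_rmse(X_train_stand, X_test_stand))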

Feature scaling: K-Nearest Neighbors

You can see that scaling the features has brought down the RMSE score of our KNN model. Specifically, the normalized data performs a tad bit better than the standardized data.

Note: I am measuring the RMSE here because that is the evaluation metric for this competition.

 

Support Vector Regressor

SVR is another distance-based algorithm. So let’s check out whether it works better with normalization or standardization:
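
A minimal sketch of the same comparison with a support vector regressor, using sklearn’s default RBF kernel (illustrative, not the author’s exact gist):

import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def svr_rmse(X_tr, X_te):
    svr = SVR(kernel='rbf')   # RBF is sklearn's default kernel
    svr.fit(X_tr, y_train)
    preds = svr.predict(X_te)
    return np.sqrt(mean_squared_error(y_test, preds))

print('Raw:         ', svr_rmse(X_train, X_test))
print('Normalized:  ', svr_rmse(X_train_norm, X_test_norm))
print('Standardized:', svr_rmse(X_train_stand, X_test_stand))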

Feature scaling: Support Vector Regressor

We can see that scaling the features does bring down the RMSE score. And the standardized data has performed better than the normalized data. Why do you think that’s the case?

The sklearn documentation states that SVM with an RBF kernel assumes that all the features are centered around zero and that the variance is of the same order. This is because a feature with a variance much greater than that of the others can prevent the estimator from learning from all the features.

 

Decision Tree

We already know that a Decision tree is invariant to feature scaling. But I wanted to show a practical example of how it performs on the data:
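
A minimal sketch of the same comparison with a decision tree regressor; the depth and random seed are arbitrary choices for reproducibility, not necessarily the author’s settings:

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

def tree_rmse(X_tr, X_te):
    tree = DecisionTreeRegressor(max_depth=5, random_state=27)
    tree.fit(X_tr, y_train)
    preds = tree.predict(X_te)
    return np.sqrt(mean_squared_error(y_test, preds))

print('Raw:         ', tree_rmse(X_train, X_test))
print('Normalized:  ', tree_rmse(X_train_norm, X_test_norm))
print('Standardized:', tree_rmse(X_train_stand, X_test_stand))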

Feature scaling: Decision Tree

You can see that the RMSE score has not moved an inch on scaling the features. So rest assured when you are using tree-based algorithms on your data!

 

End Notes

This tutorial covered the relevance of using feature scaling on your data and how normalization and standardization have varying effects on the working of machine learning algorithms.

Keep in mind that there is no correct answer to when to use normalization over standardization and vice-versa. It all depends on your data and the algorithm you are using.

As a next step, I encourage you to try out feature scaling with other algorithms and figure out what works best – normalization or standardization? I recommend you use the BigMart Sales data for that purpose to maintain the continuity with this article. And don’t forget to share your insights in the comments section below!

About the Author

Aniruddha Bhandari

I am on a journey to becoming a data scientist. I love to unravel trends in data, visualize it and predict the future with ML algorithms! But the most satisfying part of this journey is sharing my learnings, from the challenges that I face, with the community to make the world a better place!


16 thoughts on "Feature Scaling for Machine Learning: Understanding the Difference Between Normalization vs. Standardization"

Ali says: April 12, 2020 at 11:18 pm
Excellent article! Thank you very much for sharing. I have one question. In the post you say: "It is a good practice to fit the scaler on the training data and then use it to transform the testing data.", but I didn't see that in the code you posted. Am I wrong? How would one "fit the scaler on the training data and then use it to transform the testing data"? Thanks a lot again.

Aniruddha Bhandari says: April 13, 2020 at 11:44 am
Hi Ali, you fit the scaler on the training data so that it can calculate the necessary parameters, like the mean and standard deviation for standardization, and store them for later use via the fit() method. Later you use the transform() function to apply the same transformation to both the train and test datasets. I have used this approach for both normalization and standardization in the article, in the gists "NormalizationVsStandarization_2.py" and "NormalizationVsStandarization_3.py" respectively. I hope this cleared your doubt. Thanks.

sahil kamboj says: April 28, 2020 at 4:25 pm
Good article! Thank you very much for sharing. I have one question. What is the difference between sklearn.preprocessing.MinMaxScaler and sklearn.preprocessing.Normalizer? When should one use MinMaxScaler and when Normalizer?

Aniruddha Bhandari says: April 29, 2020 at 2:15 pm
Hi, I hope MinMaxScaler is already clear from the article. Normalizer is also a normalization technique; the only difference is the way it computes the normalized values. By default, it calculates the l2 norm of the row values, i.e. each element of a row is normalized by the square root of the sum of squared values of all elements in that row. As mentioned in the documentation, it is useful in text classification, where the dot product of two TF-IDF vectors gives the cosine similarity between the different sentences/documents in the dataset. Other than that, as I mentioned in the article, there is no sure way to know which scaling technique should be used when. The best way is to create multiple scaled copies of the data, try them out, and see which one gives the best result. Hope this helps.
Subhash Kumar Nadar says: May 23, 2020 at 11:25 am
Excellent article! Easy to understand and good coverage. One question: I see that there is a scale() function as well in sklearn, and its short description suggests it is similar to StandardScaler, i.e. scaling to unit variance. I could not find more than this explanation. Can you suggest which one to use in which scenario? Thanks in advance!

Golla kedarkumar says: May 24, 2020 at 9:56 am
Hi Aniruddha, if we use the same scaler for train and testing, does it affect the testing data? In standardization we need to use the mean of the data, so if we take the mean of the train data and use it to scale the test data, won't it influence the test data?

Aniruddha Bhandari says: May 24, 2020 at 12:11 pm
Scaling your test data according to the train data makes sure that the test data is on the same scale as the training data on which our model was trained. This way our model will be able to apply what it learned from the training dataset to the testing dataset, which is exactly what we want! If instead we scale the test data differently, then our model might not be able to discern that difference, thereby giving us incorrect outputs, and we would never know how well our model is performing. I hope this helps!

Aniruddha Bhandari says: May 24, 2020 at 12:41 pm
Hi Subhash, I notice two differences between the two. First, the scale() function allows you to standardize your data along any axis, which means you could even standardize your data row-wise as opposed to feature-wise, which is what happens in StandardScaler(). Second, the scale() function has no fit and transform methods, so you cannot apply the same scaling to your test dataset. I would suggest using StandardScaler(), as I have never used scale() personally. I hope this helps!
Inas says: May 27, 2020 at 6:42 am
Excellent article, thank you for sharing.

soumadip roy says: July 04, 2020 at 12:59 am
This is an excellent write-up. Thanks for this.

HARSHVARDHAN BHATT says: July 05, 2020 at 3:03 am
Those graphs really help in putting things in perspective... thanks!

Arnob says: July 05, 2020 at 4:13 pm
Hey bro! Great article. It covered a lot of topics that were unclear to me before. I have a basic question: how can I check my data after normalization? You have mentioned using pd.describe() in the "Normalization using sklearn" section, but when I use it I get an error: "module 'pandas' has no attribute 'describe'". Can you tell me how to check my data after normalization? Thank you for your time.

deeps says: July 15, 2020 at 7:59 pm
Excellent article!
Kunal says: July 31, 2020 at 1:19 pm
Thanks for the great article!

Zineb says: August 17, 2020 at 3:29 pm
Thanks Bhandari. Easy to understand and very helpful.

Aniruddha Bhandari says: August 22, 2020 at 8:06 pm
Hi Arnob, glad you liked the article. The command you are looking for is df.describe(), not pd.describe(). Try using that; it should work.
