Welcome back! In the previous article, we learned about Gini impurity which we use to decide the purity of nodes. There is one more algorithm that we can use to decide the best split in decision trees and that algorithm is **Chi-square.**


Chi-square measures the statistical significance of the difference between a child node and its parent node. It is the sum of squared standardized differences between the observed and expected frequencies of the target variable for each node, and is calculated using this formula-
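The formula image is not reproduced here; reconstructing it from the worked numbers later in the article, the per-class value and the split value are:

```latex
\chi^2_{\text{class}} = \sqrt{\frac{(\text{Actual} - \text{Expected})^2}{\text{Expected}}},
\qquad
\chi^2_{\text{split}} = \sum_{\text{child nodes}} \; \sum_{\text{classes}} \chi^2_{\text{class}}
```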

Let’s see how we can calculate the expected values. If you recall, this is what the split on “Performance in class” looks like-

We’ve seen this before. There are a total of 20 students, and of those, 10 play cricket and 10 do not, so the percentage of students who play cricket is 50%. Now consider the “Above average” node: it contains 14 students. Since 50% of the students in the parent node play cricket, the expected number of students who play cricket in this node is 7, while the actual value is 8. So now we have both the expected and the actual values.

The expected value was 7 and the actual turned out to be 8, so we can see that the expected value is calculated from the class distribution of the parent node. Similarly, the expected number of students who do not play cricket is also 7, because the percentage of students who do not play cricket in the parent node is 50% as well, while the actual value here turns out to be 6.

Similarly, for the below-average node, the expected counts for both play and not play will be 3, since the node contains only 6 students. The actual values are 2 students who play cricket and 4 who do not.
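As a quick sketch (the variable and function names here are my own, not from the article), the expected counts follow directly from the parent node's class distribution:

```python
# Parent node: 20 students in total, 10 play cricket and 10 do not -> 50% play.
parent_play_ratio = 10 / 20

def expected_counts(node_size, play_ratio=parent_play_ratio):
    """Expected class counts for a child node = node size x parent class ratio."""
    expected_play = node_size * play_ratio
    expected_not_play = node_size * (1 - play_ratio)
    return expected_play, expected_not_play

print(expected_counts(14))  # Above-average node (14 students): (7.0, 7.0)
print(expected_counts(6))   # Below-average node (6 students): (3.0, 3.0)
```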

Now we can calculate the chi-square for each child node using the formula mentioned above. Can you guess what the chi-square value will be if the actual and expected values are the same? It’s actually pretty simple: it will be **zero**, because the difference between identical actual and expected values is zero.

Now, if both values are the same, we can infer that the distribution of the child node is the same as that of the parent node, and hence the split is not improving the purity of the nodes. On the other hand, if the chi-square value is high, the distribution of the child nodes is changing with respect to the parent node, and we are moving toward purer nodes. Hence we can say that:

The higher the chi-square value, the greater the purity of the nodes after a split.

Let’s look at some of the properties of chi-square before understanding how it actually works-

- Chi-square, just like Gini impurity, works only with categorical target variables, so we cannot use it for continuous targets.
- The higher the chi-square value, the more the sub-nodes differ from the parent node, and hence the greater the homogeneity within them.

These are some of the properties of chi-square and you must consider these properties before choosing the right algorithm for deciding the split.

Let’s now understand the steps to calculate chi-square for a split-

- First, we need to calculate the expected values for each class.
- Then we calculate the chi-square for individual nodes using the formula we’ve seen before-

Here, actual is the observed count for the node, and expected is the value we calculated in the previous step.

- Finally, we calculate the chi-square for the split as the sum of the chi-square values of each child node of that split.
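The three steps above can be sketched in Python (the function and variable names are mine; the per-class formula √((Actual − Expected)²/Expected) matches the worked numbers in this article):

```python
from math import sqrt

def chi_square_split(nodes, parent_ratios):
    """Chi-square of a split: sum of per-class chi-square values over all
    child nodes. `nodes` maps node name -> actual class counts;
    `parent_ratios` gives the parent node's class distribution."""
    total = 0.0
    for actual_counts in nodes.values():
        node_size = sum(actual_counts)
        for actual, ratio in zip(actual_counts, parent_ratios):
            expected = node_size * ratio                        # step 1: expected count
            total += sqrt((actual - expected) ** 2 / expected)  # step 2: per-class value
    return total                                                # step 3: sum over all nodes

# Split on "Performance in class": (play, not play) counts per child node.
split = {"above_average": (8, 6), "below_average": (2, 4)}
print(round(chi_square_split(split, (0.5, 0.5)), 2))  # ~1.91, reported below as 1.9
```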

Don’t worry, I’m going to show you an example that will make this extremely clear. Let’s again compare the two splits: one on “Performance in class” and the other on the “Class” the students belong to, and compare their chi-square statistics. This is the split on “Performance in class”, with the actual and expected values we calculated earlier-

Let’s now create a table to calculate the chi-square values for this split on “performance in class”:

The actual values for the above-average node are shown in the table: 8 above-average students play cricket, whereas 6 do not. The expected values will be 7 for both, as discussed before. Similarly, we calculate the values for the below-average node: 2 below-average students play cricket and 4 do not, and since there are only 6 below-average students in total, the expected values will be 3 for both.

Next, we will calculate the deviation of actual values from the expected values for both classes:

So 8 – 7 gives us 1, and 6 – 7 gives –1. We do the same for the below-average node as well. Finally, we calculate the chi-square values using the formula:

We have the deviations (actual – expected) stored in the table. We square each deviation, divide it by the expected value, and take the square root. Let’s do that!

So when you plug in the values, the chi-square comes out to be **0.38** for each class of the above-average node and **0.58** for each class of the below-average node.

Finally, the chi-square for the split on “Performance in class” will be the sum of all these chi-square values:

which as you can see here comes out to be **1.9** and this is the chi-square value for the split on “performance in class”.

Alright! Similarly, we will calculate the chi-square value for the split on the “Class” variable.

We will again create a table:

I suggest you calculate these values on your own and check your results against this table; it will help you understand how the chi-square calculation works. Finally, we calculate the chi-square value for the split on “Class”, which is the sum of all the chi-square values:

which comes out to be around **5.36**. So what should we do next? We compare the two chi-square values and see which one is higher-

And the population will split on the “Class” variable, as we can see here. Remember that we got the same result using the Gini index: the variable “Class” was chosen for the split there as well.

We saw how the Chi-Square algorithm is used for splitting decision trees. Like Gini impurity, it works with categorical targets. So we’ve covered two different splitting algorithms so far, and the results of both have been quite similar.


If you have any queries let me know in the comment section!
