This article was published as a part of the Data Science Blogathon.
One of the most widely used metrics for measuring model performance is predictive error. Predictive error can be decomposed into noise, bias, and variance. This article measures the bias and variance of a given model and observes how they behave across models such as Linear Regression, Decision Tree, Bagging, and Random Forest for various sample sizes.
1. Understanding Bias and Variance
2. Algorithms such as Linear Regression, Decision Tree, Bagging with Decision Tree, Random Forest, and Ridge Regression
Bias: The difference between the prediction of the true model and the prediction of the average model (the average of models built on n samples drawn from the population).
True Model: The model built on the population data.
Average Model: The average of the predictions obtained from the models built on the various samples drawn from the population.
Variance: The spread of the predictions of the individual sample models around the average model.
Noise: The irreducible error that no model can eliminate.
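To make these definitions concrete, here is a small sketch with made-up numbers (the predictions below are purely illustrative, not from any real model):

```python
import numpy as np

# Hypothetical predictions of the population (true) model at 4 test points
pop_pred = np.array([2.0, 4.0, 6.0, 8.0])

# Predictions of 5 models, each built on a different sample, at the same points
sample_preds = np.array([
    [2.1, 3.8, 6.2, 7.9],
    [1.9, 4.1, 5.8, 8.2],
    [2.0, 4.2, 6.1, 7.8],
    [2.2, 3.9, 5.9, 8.1],
    [1.8, 4.0, 6.0, 8.0],
])

# The average model: mean prediction across the sample models
mean_model = sample_preds.mean(axis=0)

# Bias: gap between the population model and the average model
bias = np.mean(np.abs(pop_pred - mean_model))

# Variance: spread of the sample models around the average model
variance = np.mean((sample_preds - mean_model) ** 2)
```

In this toy example the sample models scatter symmetrically around the true predictions, so the bias is essentially zero while the variance is small but nonzero.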



Algorithm           Bias   Variance
Linear Regression   High   Low
Decision Tree       Low    High
Bagging             Low    High (but lower than Decision Tree)
Random Forest       Low    High (but lower than Decision Tree and Bagging)
Practically, it is very difficult and expensive to obtain population data. Without the population data, the exact bias and variance of a model cannot be computed. However, changes in bias and variance can be inferred from the behavior of a model's train and test errors.
So, to perform this experiment, we will treat a large dataset as the population. Based on this assumption, we will proceed to calculate the bias and variance of the various models on this dataset.
For this example, I am considering a random dataset (not selected by any particular criteria).
You can download the dataset from here.
This is a dataset of Physicochemical Properties of Protein Tertiary Structure, taken from CASP 5-9. There are 45730 decoys with sizes varying from 0 to 21 Angstroms.
Attributes: RMSD – Size of the residue.
F1 – Total surface area.
F2 – Nonpolar exposed area.
F3 – Fractional area of exposed nonpolar residue.
F4 – Fractional area of an exposed nonpolar part of the residue.
F5 – Molecular mass weighted exposed area.
F6 – Average deviation from the standard exposed area of residue.
F7 – Euclidean distance.
F8 – Secondary structure penalty.
F9 – Spatial Distribution constraints (N, K Value).
This dataset contains 45730 records.
Population_Data: The superset of all data. (Practically, population data is not obtainable, but for the sake of this experiment we treat a large dataset as the population.) Here, the dataset of 45730 records is our population.
Test_Data: 1500 records extracted from Population_Data.
Training_Data: All data from the population other than the test data.
Population_Model: The model built on Population_Data.
Mean_Model: Consider 'n' samples extracted from Training_Data and build a model on each of these samples. For a given value of x, the mean of these models' predictions is taken as the Mean_Model's prediction for that value of x.
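A minimal sketch of this sampling procedure, using a synthetic stand-in for the population and a decision tree as the sample model (the data, sizes, and model choice here are illustrative assumptions, not the article's actual setup):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Synthetic stand-in for Population_Data: a nonlinear signal plus noise
X_pop = rng.uniform(0, 10, size=(5000, 1))
y_pop = 5 * np.sin(X_pop[:, 0]) + rng.normal(0, 0.5, size=5000)

# Hold out test points; the remaining indices play the role of Training_Data
X_test = X_pop[:500]
train_idx = np.arange(500, 5000)

# Build a model on each of n samples and collect its test predictions
n_samples, sample_size = 30, 1000
all_preds = []
for _ in range(n_samples):
    idx = rng.choice(train_idx, size=sample_size, replace=False)
    model = DecisionTreeRegressor(random_state=0).fit(X_pop[idx], y_pop[idx])
    all_preds.append(model.predict(X_test))

# Mean_Model: for each test point x, the mean prediction across the 30 models
mean_model_pred = np.mean(all_preds, axis=0)
```

Note that the samples are drawn only from the training indices, so the test points never leak into any sample model.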
Bias at each value of x from the test data = (Prediction of Population_Model – Prediction of Mean_Model).
Bias of the model = Mean (abs (Prediction of Population_Model – Prediction of Mean_Model)) over the test data.
Variance of the model = Mean squared difference between each Sample_Model's prediction and the Mean_Model's prediction.
That is, take the difference between each sample model's prediction and the mean model's prediction, square it, and average over all models and test points. This tells us how much the models built on the samples vary around the mean model.
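These two formulas can be sketched directly in NumPy. Here `pop_pred` is assumed to hold the Population_Model's predictions on the test data, and `sample_preds` is an array with one row of test predictions per sample model (both names are hypothetical):

```python
import numpy as np

def model_bias(pop_pred, sample_preds):
    """Mean absolute gap between Population_Model and Mean_Model predictions."""
    mean_model = sample_preds.mean(axis=0)
    return np.mean(np.abs(pop_pred - mean_model))

def model_variance(sample_preds):
    """Mean squared deviation of each sample model from the Mean_Model."""
    mean_model = sample_preds.mean(axis=0)
    return np.mean((sample_preds - mean_model) ** 2)
```

As a sanity check, if every sample model predicts exactly what the population model predicts, both quantities are zero.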
1) Considering a data set of 45730 as Population_Data
2) Extracting Test_Data of 1500 records from Population_Data. So, the remaining data is considered to be Training_Data
3) Build Population_Model. Collect predictions from the Population_Model using Test_Data
4) Build Mean_Model.
30 random samples are extracted from Training_Data, and a model is built on each of these samples. For each test point, the mean of these models' predictions on Test_Data is collected.
5) Compute Model_Bias
The bias of the model = Mean (abs(Prediction of Population_Model – Prediction of Mean_Model))
6) Compute Model_Variance:
Model_Variance = Mean ((Prediction of Sample_Model – Prediction of Mean_Model)²), averaged over all sample models and test points
The code for the results generated below is available in this GitHub link.
The Model_Bias and Model_Variance are collected for different algorithms: Linear Regression, Decision Tree, Bagging, and Random Forest.
Observations: (For a sample size of 8000)
Bias and variance for sample sizes: [100, 500, 1000, 2000, 4000, 8000, 10000]
It can be observed that increasing the sample size decreases both bias and variance. However, obtaining data with a larger sample size is often quite expensive, so increasing the sample size may not be a viable way to reduce a model's bias and variance.
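The sample-size sweep can be sketched as follows, again on a synthetic stand-in for the population (the data, model choice, and seed are assumptions for illustration; the article's actual numbers come from the CASP dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Synthetic stand-in for the population, sized like the CASP dataset
X_pop = rng.uniform(0, 10, size=(45730, 1))
y_pop = 3 * np.sqrt(X_pop[:, 0]) + rng.normal(0, 1, size=45730)

# 1500 test points; the rest plays the role of Training_Data
X_test = X_pop[:1500]
pop_pred = LinearRegression().fit(X_pop, y_pop).predict(X_test)
train_idx = np.arange(1500, 45730)

results = {}
for size in [100, 500, 1000, 2000, 4000, 8000]:
    preds = []
    for _ in range(30):  # 30 sample models per sample size
        idx = rng.choice(train_idx, size=size, replace=False)
        preds.append(LinearRegression().fit(X_pop[idx], y_pop[idx]).predict(X_test))
    preds = np.array(preds)
    mean_model = preds.mean(axis=0)
    bias = np.mean(np.abs(pop_pred - mean_model))
    variance = np.mean((preds - mean_model) ** 2)
    results[size] = (bias, variance)
    print(f"n={size:5d}  bias={bias:.4f}  variance={variance:.6f}")
```

For a linear model, the variance term shrinks roughly in proportion to 1/n as the sample size grows, which matches the trend described above.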
1. James, G.; Witten, D.; Hastie, T. & Tibshirani, R. (2013), An Introduction to Statistical Learning: with Applications in R, Springer.
2. Markgraf, Bert. “How to Calculate Bias” sciencing.com, https://sciencing.com/howtocalculatebias13710241.html. 8 September 2020.
3. Srivastava, P. (2018, September 23). End your bias about bias and variance!! Medium. https://towardsdatascience.com/endyourbiasaboutbiasandvariance67b16f0eb1e6
4. Bias Variance Tradeoff image is taken from https://gadictos.com/biasvariance/