Over the last 12 months, I have participated in a number of machine learning hackathons on Analytics Vidhya and competitions on Kaggle. After each competition, I always make sure to go through the winner's solution. The winner's solution usually provides me with critical insights, which have helped me immensely in future competitions.
Most of the winners rely on an ensemble of well-tuned individual models along with feature engineering. If you are starting with machine learning, I would advise you to lay emphasis on these two areas, as I have found them equally important for doing well in a machine learning competition.
Most of the time, I was able to crack the feature engineering part but probably didn't use an ensemble of multiple models. If you are a beginner, it's even better to get familiar with ensembling as early as possible. Chances are that you are already applying it without knowing!
In this article, I'll take you through the basics of ensemble modeling and its advantages. Then, to give you hands-on experience with ensemble modeling, we will apply ensembling to a hackathon problem using R.
P.S. For this article, we will assume that you can build individual models in R / Python. If not, you can start your journey with our learning path.
In general, ensembling is a technique of combining two or more algorithms of similar or dissimilar types, called base learners. This is done to make a more robust system which incorporates the predictions from all the base learners. It can be understood as a conference-room meeting of multiple traders deciding whether the price of a stock will go up or not.
Each of them has a different understanding of the stock market, and thus a different mapping from the problem statement to the desired outcome. Therefore, they will make varied predictions on the stock price based on their own understanding of the market.
Now we can take all of these predictions into account while making the final decision. This makes our final decision more robust, more accurate, and less likely to be biased. The final decision might have been the opposite if any one of these traders had made it alone.
You can consider another example of a candidate going through multiple rounds of job interviews. The final decision on the candidate's ability is generally taken based on the feedback of all the interviewers. Although a single interviewer might not be able to test the candidate for every required skill and trait, the combined feedback of multiple interviewers usually results in a better assessment of the candidate.
Some of the basic concepts which you should be aware of before we go into further detail are:
Practically speaking, there are countless ways in which you can ensemble different models, but these are the techniques that are most widely used:
For the bootstrapped sample, we choose one of these three rows at random. Say we chose Row 2.
You see that even though Row 2 was copied into the bootstrapped sample, it's still present in the original data. Now, each of the three rows again has the same probability of being selected. Let's say we choose Row 1 this time.
Again, each row in the data has the same probability of being chosen for the bootstrapped sample. Let's say we randomly choose Row 1 again.
Thus, we can have multiple bootstrapped samples from the same data. Once we have these multiple bootstrapped samples, we can grow a tree for each of them and use majority voting or averaging to get the final prediction. This is how bagging works.
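The steps above can be sketched in a few lines of base R. This is a minimal, illustrative example on a toy regression dataset (the data and variable names are made up for this sketch, not from the competition): we draw several bootstrapped samples, fit a model to each, and average the predictions.

```r
# Bagging sketch: bootstrapped samples + averaging (toy data)
set.seed(1)
n <- 100
x <- runif(n, 0, 10)
y <- sin(x) + rnorm(n, sd = 0.3)
d <- data.frame(x = x, y = y)

n_bags <- 25
preds <- sapply(1:n_bags, function(b) {
  # sample rows WITH replacement: a row can appear more than once
  boot_idx <- sample(n, n, replace = TRUE)
  fit <- lm(y ~ poly(x, 5), data = d[boot_idx, ])
  predict(fit, newdata = d)          # predict on the original rows
})
bagged_pred <- rowMeans(preds)       # average across bootstrapped models
```

Averaging over the 25 bootstrapped fits smooths out the variance of any single fit, which is exactly the point of bagging.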
One important thing to note here is that bagging is done mainly to reduce variance. Random forest actually uses this concept, but it goes a step further: to reduce variance even more, it also randomly chooses a subset of features for each bootstrapped sample when making the splits while training.
Boosting relies on creating a series of weak learners, each of which might not be good for the entire dataset but is good for some part of it. Thus, each model actually boosts the performance of the ensemble.
It's really important to note that boosting is focused on reducing bias. This makes boosting algorithms prone to overfitting, so parameter tuning becomes a crucial part of using them.
Some examples of boosting algorithms are XGBoost, GBM, AdaBoost, etc.
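To make the "each learner focuses on the hard part of the data" idea concrete, here is a minimal AdaBoost-style sketch in base R on a toy one-dimensional problem. Everything here (the data, the stump search, the number of rounds) is illustrative, not how the production libraries above implement it:

```r
# AdaBoost-style sketch: each round fits a decision stump, then
# up-weights the rows it got wrong so the next weak learner
# concentrates on them
x <- 1:10
y <- c(1, 1, 1, -1, -1, -1, 1, 1, 1, 1)   # labels in {-1, +1}
w <- rep(1 / length(y), length(y))        # uniform sample weights to start

stumps <- list()
alphas <- numeric(0)
for (m in 1:5) {
  best <- NULL; best_err <- Inf
  for (t in x) for (s in c(1, -1)) {      # search all threshold stumps
    pred <- ifelse(x <= t, s, -s)
    err  <- sum(w[pred != y])             # weighted training error
    if (err < best_err) { best_err <- err; best <- list(t = t, s = s) }
  }
  alpha <- 0.5 * log((1 - best_err) / max(best_err, 1e-10))
  pred  <- ifelse(x <= best$t, best$s, -best$s)
  w <- w * exp(-alpha * y * pred)         # up-weight the mistakes
  w <- w / sum(w)
  stumps[[m]] <- best; alphas[m] <- alpha
}

# Final prediction: sign of the alpha-weighted vote of all stumps
score <- rowSums(sapply(seq_along(stumps), function(m)
  alphas[m] * ifelse(x <= stumps[[m]]$t, stumps[[m]]$s, -stumps[[m]]$s)))
final <- sign(score)
```

No single stump can separate these labels, but the weighted vote of a few stumps can, which is the sense in which boosting reduces bias.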
Let's understand it with an example:
Here, we have two layers of machine learning models:
Here, we have used only two layers but it can be any number of layers and any number of models in each layer. Two of the key principles for selecting the models:
One thing you might have noticed is that the top layer model takes the predictions of the bottom layer models as input. The top layer model can also be replaced by simpler combining rules, such as averaging, a majority vote, or a weighted average.
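As a quick preview of these simple combiners, here is a toy base-R illustration on three hypothetical base-model probabilities for a single observation (the numbers and the 0.25/0.25/0.5 weights are made up for this sketch):

```r
# Three hypothetical base-model probabilities for class 'Y'
p <- c(rf = 0.60, knn = 0.45, lr = 0.70)

mean(p)                                   # simple averaging
votes <- ifelse(p > 0.5, 'Y', 'N')        # each model's hard class call
names(which.max(table(votes)))            # majority vote
sum(p * c(0.25, 0.25, 0.5))               # weighted average
```

We will apply exactly these combiners to real model predictions later in the article.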
You should have a good grasp of ensembling concepts by now. Enough theory; let's implement ensembling and see whether it can help us improve our accuracy on a real machine learning challenge. If you wish to read more about the basics of ensembling, you can refer to this resource.
For the purpose of implementing ensembling, I have chosen the Loan Prediction problem: we have to predict whether the bank should approve a loan based on the applicant's profile. It's a binary classification problem. You can read more about the problem here.
I'll be using the caret package in R for training the various individual models. It's the go-to package for modeling in R. Don't worry if you are not familiar with caret; you can go through this article for a comprehensive overview of the package. Let's start with getting the data and cleaning it.
#Loading the required libraries
library('caret')

#Setting the random seed
set.seed(1)

#Loading the hackathon dataset
data<-read.csv(url('https://datahack-prod.s3.ap-south-1.amazonaws.com/train_file/train_u6lujuX_CVtuZ9i.csv'))

#Let's see the structure of the dataset
str(data)
'data.frame': 614 obs. of 13 variables:
 $ Loan_ID          : Factor w/ 614 levels "LP001002","LP001003",..: 1 2 3 4 5 6 7 8 9 10 ...
 $ Gender           : Factor w/ 3 levels "","Female","Male": 3 3 3 3 3 3 3 3 3 3 ...
 $ Married          : Factor w/ 3 levels "","No","Yes": 2 3 3 3 2 3 3 3 3 3 ...
 $ Dependents       : Factor w/ 5 levels "","0","1","2",..: 2 3 2 2 2 4 2 5 4 3 ...
 $ Education        : Factor w/ 2 levels "Graduate","Not Graduate": 1 1 1 2 1 1 2 1 1 1 ...
 $ Self_Employed    : Factor w/ 3 levels "","No","Yes": 2 2 3 2 2 3 2 2 2 2 ...
 $ ApplicantIncome  : int 5849 4583 3000 2583 6000 5417 2333 3036 4006 12841 ...
 $ CoapplicantIncome: num 0 1508 0 2358 0 ...
 $ LoanAmount       : int NA 128 66 120 141 267 95 158 168 349 ...
 $ Loan_Amount_Term : int 360 360 360 360 360 360 360 360 360 360 ...
 $ Credit_History   : int 1 1 1 1 1 1 1 0 1 1 ...
 $ Property_Area    : Factor w/ 3 levels "Rural","Semiurban",..: 3 1 3 3 3 3 3 2 3 2 ...
 $ Loan_Status      : Factor w/ 2 levels "N","Y": 2 1 2 2 2 2 2 1 2 1 ...

#Does the data contain missing values?
sum(is.na(data))
[1] 86

#Imputing missing values using median
preProcValues <- preProcess(data, method = c("medianImpute","center","scale"))
library('RANN')
data_processed <- predict(preProcValues, data)
sum(is.na(data_processed))
[1] 0
#Splitting the training set into two parts based on outcome: 75% and 25%
index <- createDataPartition(data_processed$Loan_Status, p=0.75, list=FALSE)
trainSet <- data_processed[ index,]
testSet <- data_processed[-index,]
I have divided the data into two parts which I'll be using to simulate the training and testing operations. We now define the training controls and the predictor and outcome variables:
#Defining the training controls for multiple models
fitControl <- trainControl(
  method = "cv",
  number = 5,
  savePredictions = 'final',
  classProbs = T)

#Defining the predictors and outcome
predictors<-c("Credit_History", "LoanAmount", "Loan_Amount_Term", "ApplicantIncome", "CoapplicantIncome")
outcomeName<-'Loan_Status'
Now let's get started with training a random forest and test its accuracy on the test set that we have created:
#Training the random forest model
model_rf<-train(trainSet[,predictors],trainSet[,outcomeName],method='rf',trControl=fitControl,tuneLength=3)

#Predicting using random forest model
testSet$pred_rf<-predict(object = model_rf,testSet[,predictors])

#Checking the accuracy of the random forest model
confusionMatrix(testSet$Loan_Status,testSet$pred_rf)

Confusion Matrix and Statistics

          Reference
Prediction  N  Y
         N 28 20
         Y  9 96

               Accuracy : 0.8105
                 95% CI : (0.7393, 0.8692)
    No Information Rate : 0.7582
    P-Value [Acc > NIR] : 0.07566
                  Kappa : 0.5306
 Mcnemar's Test P-Value : 0.06332
            Sensitivity : 0.7568
            Specificity : 0.8276
         Pos Pred Value : 0.5833
         Neg Pred Value : 0.9143
             Prevalence : 0.2418
         Detection Rate : 0.1830
   Detection Prevalence : 0.3137
      Balanced Accuracy : 0.7922
       'Positive' Class : N
Well, as you can see, we got 0.81 accuracy with the individual random forest model. Let's see the performance of KNN:
#Training the knn model
model_knn<-train(trainSet[,predictors],trainSet[,outcomeName],method='knn',trControl=fitControl,tuneLength=3)

#Predicting using knn model
testSet$pred_knn<-predict(object = model_knn,testSet[,predictors])

#Checking the accuracy of the knn model
confusionMatrix(testSet$Loan_Status,testSet$pred_knn)

Confusion Matrix and Statistics

          Reference
Prediction   N   Y
         N  29  19
         Y   2 103

               Accuracy : 0.8627
                 95% CI : (0.7979, 0.913)
    No Information Rate : 0.7974
    P-Value [Acc > NIR] : 0.0241694
                  Kappa : 0.6473
 Mcnemar's Test P-Value : 0.0004803
            Sensitivity : 0.9355
            Specificity : 0.8443
         Pos Pred Value : 0.6042
         Neg Pred Value : 0.9810
             Prevalence : 0.2026
         Detection Rate : 0.1895
   Detection Prevalence : 0.3137
      Balanced Accuracy : 0.8899
       'Positive' Class : N
Great: we get 0.86 accuracy with the individual KNN model. Let's see the performance of logistic regression as well before we go on to create an ensemble of these three.
#Training the logistic regression model
model_lr<-train(trainSet[,predictors],trainSet[,outcomeName],method='glm',trControl=fitControl,tuneLength=3)

#Predicting using logistic regression model
testSet$pred_lr<-predict(object = model_lr,testSet[,predictors])

#Checking the accuracy of the logistic regression model
confusionMatrix(testSet$Loan_Status,testSet$pred_lr)

Confusion Matrix and Statistics

          Reference
Prediction   N   Y
         N  29  19
         Y   2 103

               Accuracy : 0.8627
                 95% CI : (0.7979, 0.913)
    No Information Rate : 0.7974
    P-Value [Acc > NIR] : 0.0241694
                  Kappa : 0.6473
 Mcnemar's Test P-Value : 0.0004803
            Sensitivity : 0.9355
            Specificity : 0.8443
         Pos Pred Value : 0.6042
         Neg Pred Value : 0.9810
             Prevalence : 0.2026
         Detection Rate : 0.1895
   Detection Prevalence : 0.3137
      Balanced Accuracy : 0.8899
       'Positive' Class : N
And logistic regression also gives us an accuracy of 0.86.
Now, let's try out the different ways of forming an ensemble with these models, as we have discussed:
#Predicting the probabilities
testSet$pred_rf_prob<-predict(object = model_rf,testSet[,predictors],type='prob')
testSet$pred_knn_prob<-predict(object = model_knn,testSet[,predictors],type='prob')
testSet$pred_lr_prob<-predict(object = model_lr,testSet[,predictors],type='prob')

#Taking average of predictions
testSet$pred_avg<-(testSet$pred_rf_prob$Y+testSet$pred_knn_prob$Y+testSet$pred_lr_prob$Y)/3

#Splitting into binary classes at 0.5
testSet$pred_avg<-as.factor(ifelse(testSet$pred_avg>0.5,'Y','N'))
#The majority vote
testSet$pred_majority<-as.factor(ifelse(testSet$pred_rf=='Y' & testSet$pred_knn=='Y','Y',
                                 ifelse(testSet$pred_rf=='Y' & testSet$pred_lr=='Y','Y',
                                 ifelse(testSet$pred_knn=='Y' & testSet$pred_lr=='Y','Y','N'))))
#Taking weighted average of predictions
testSet$pred_weighted_avg<-(testSet$pred_rf_prob$Y*0.25)+(testSet$pred_knn_prob$Y*0.25)+(testSet$pred_lr_prob$Y*0.5)

#Splitting into binary classes at 0.5
testSet$pred_weighted_avg<-as.factor(ifelse(testSet$pred_weighted_avg>0.5,'Y','N'))
Before proceeding further, recall the two important criteria we discussed for choosing models: individual model accuracy and low inter-model prediction correlation. In the ensembles above, I skipped checking the correlation between the predictions of the three models; I chose these models purely to demonstrate the concepts. If the predictions are highly correlated, using these three might not give better results than the individual models. But you get the point, right?
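Checking that correlation is a one-liner with `cor()`. In the article's setting you would feed it the probability columns such as `testSet$pred_rf_prob$Y`; the vectors below are hypothetical stand-ins so the sketch is self-contained:

```r
# Hypothetical class probabilities from three base models
p_rf  <- c(0.9, 0.2, 0.8, 0.4, 0.7)
p_knn <- c(0.8, 0.3, 0.9, 0.5, 0.6)
p_lr  <- c(0.1, 0.9, 0.2, 0.8, 0.3)

# Pairwise correlation matrix of the predictions
cor(cbind(rf = p_rf, knn = p_knn, lr = p_lr))
```

Here rf and knn are highly correlated, so combining just those two would add little; lr disagrees with both, which is the kind of diversity an ensemble benefits from.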
So far, we have used simple formulas at the top layer. Instead, we can use another machine learning model, which is essentially what stacking is. For a regression problem, we can use linear regression to learn a linear mapping from the bottom layer model predictions to the outcome; for a classification problem, we can use logistic regression in the same way.
Moreover, we don't need to restrict ourselves to linear models: we can also use more complex models like GBM or neural nets to learn a non-linear mapping from the predictions of the bottom layer models to the outcome.
On the same example, let's try applying logistic regression and GBM as top layer models. Here are the steps we'll take:
One extremely important thing to note in step 2 is that you should always make out-of-fold predictions for the training data. Otherwise, the importance of the base layer models will only be a function of how well each base layer model can recall the training data.
Most of these steps have already been done above, but I'll walk through them one by one again.
#Defining the training control
fitControl <- trainControl(
  method = "cv",
  number = 10,
  savePredictions = 'final', # To save out of fold predictions for the best parameter combinations
  classProbs = T # To save the class probabilities of the out of fold predictions
)

#Defining the predictors and outcome
predictors<-c("Credit_History", "LoanAmount", "Loan_Amount_Term", "ApplicantIncome", "CoapplicantIncome")
outcomeName<-'Loan_Status'

#Training the random forest model
model_rf<-train(trainSet[,predictors],trainSet[,outcomeName],method='rf',trControl=fitControl,tuneLength=3)

#Training the knn model
model_knn<-train(trainSet[,predictors],trainSet[,outcomeName],method='knn',trControl=fitControl,tuneLength=3)

#Training the logistic regression model
model_lr<-train(trainSet[,predictors],trainSet[,outcomeName],method='glm',trControl=fitControl,tuneLength=3)
#Predicting the out of fold prediction probabilities for training data
trainSet$OOF_pred_rf<-model_rf$pred$Y[order(model_rf$pred$rowIndex)]
trainSet$OOF_pred_knn<-model_knn$pred$Y[order(model_knn$pred$rowIndex)]
trainSet$OOF_pred_lr<-model_lr$pred$Y[order(model_lr$pred$rowIndex)]

#Predicting probabilities for the test data
testSet$OOF_pred_rf<-predict(model_rf,testSet[predictors],type='prob')$Y
testSet$OOF_pred_knn<-predict(model_knn,testSet[predictors],type='prob')$Y
testSet$OOF_pred_lr<-predict(model_lr,testSet[predictors],type='prob')$Y
First, let's start with the GBM model as the top layer model.
#Predictors for top layer models
predictors_top<-c('OOF_pred_rf','OOF_pred_knn','OOF_pred_lr')

#GBM as top layer model
model_gbm<-train(trainSet[,predictors_top],trainSet[,outcomeName],method='gbm',trControl=fitControl,tuneLength=3)
Similarly, we can create an ensemble with logistic regression as the top layer model as well.
#Logistic regression as top layer model
model_glm<-train(trainSet[,predictors_top],trainSet[,outcomeName],method='glm',trControl=fitControl,tuneLength=3)
#Predict using GBM top layer model
testSet$gbm_stacked<-predict(model_gbm,testSet[,predictors_top])

#Predict using logistic regression top layer model
testSet$glm_stacked<-predict(model_glm,testSet[,predictors_top])
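To score these stacked predictions, you can pass them to `confusionMatrix()` exactly as we did for the individual models, e.g. `confusionMatrix(testSet$Loan_Status, testSet$gbm_stacked)`. A minimal base-R sketch of the underlying accuracy check, using hypothetical label vectors rather than the real test set:

```r
# Hypothetical ground truth and stacked predictions, for illustration only
truth       <- factor(c('Y','N','Y','Y','N','Y','N','Y'))
gbm_stacked <- factor(c('Y','N','Y','N','N','Y','N','Y'))
glm_stacked <- factor(c('Y','N','Y','Y','N','Y','Y','Y'))

accuracy <- function(pred, truth) mean(pred == truth)
accuracy(gbm_stacked, truth)                        # fraction correct
accuracy(glm_stacked, truth)
table(Prediction = gbm_stacked, Reference = truth)  # raw confusion matrix
```

Comparing these numbers against the individual models' accuracies tells you whether the stacking actually paid off.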
Great! You made your first ensemble.
Note that it's really important to choose the models for the ensemble wisely to get the best out of it. The two rules of thumb that we discussed will greatly help you with that.
By now, you might have developed an in-depth conceptual as well as practical knowledge of ensembling. I would like to encourage you to practice this on machine learning hackathons on Analytics Vidhya, which you can find here.
You'll probably find this article on the top five questions related to ensembling helpful.
Also, if you missed out on the skilltest on ensembling, you can check your understanding of ensembling concepts here.
Ensembling is a very popular and effective technique, frequently used by data scientists to beat the accuracy benchmark of even the best individual algorithms. More often than not, it's the winning recipe in hackathons. The more you use ensembling, the more you'll admire its beauty.
Did you enjoy reading this article? Do share your views in the comment section below, and if you have any doubts or questions, feel free to drop them there as well.