
Learn to use Forward Selection Techniques for Ensemble Modeling



Ensemble methods can provide much needed robustness and accuracy to both supervised and unsupervised problems. Machine learning will keep evolving as computation power becomes cheaper and the volume of data continues to increase. In such a scenario, there is a limit to the improvement you can achieve with a single framework by attempting to improve its predictive power (through modifications of variables).

Ensemble Modeling follows the philosophy of ‘Unity is Strength’, i.e. a combination of diversified base models strengthens weak ones. The success of ensemble techniques spreads across multiple disciplines like recommendation systems, anomaly detection, stream mining, and web applications, where the need to combine competing models is ubiquitous.

If you wish to experience the power of ensembles, try using a supervised and an unsupervised model for a single task and merge their results. You will often find that the merged result performs better than either model alone.

Last week, we talked about a simple method to ensemble multiple learners through neural networks. We created a black box which took in all learners and gave us a final ensemble prediction. In this article, I will take an alternate route (using R) to solve the same problem, with greater control over the ensemble process. I have leveraged the technique discussed in the Cornell paper “Ensemble Selection from Libraries of Models”. The underlying principle remains the same:

An ensemble of diverse, high-performance models is better than any individual model.



Principles involved in the Process

Forward Selection of learners : Imagine a scenario where we have 1000 learner outputs. We start with an empty bag, and in every iteration we add the learner that most improves the bag on the performance metric.

Selection with Replacement : To select a new addition to the bag, we put our hand into the stack of 1000 learners and pull out the best of the lot. Even if a learner has already been added to the bag, it stays in the stack and can be picked again in subsequent iterations.

Bagging of Ensemble Models : Ensemble learners are prone to over-fitting. To avoid this, we run the selection on one sample, then repeat it on another sample, and so on. Finally, we bag all these ensembles together using a simple average of predictions or a majority vote. A condensed sketch of all three principles follows below.
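To make these principles concrete before walking through the full code, here is a condensed, self-contained sketch of forward selection with replacement. The function forward_select and its arguments are illustrative names of my own, and plain RMSE is used so the snippet needs no extra package:

forward_select <- function(preds, actual, max_size = 50) {
  # preds: matrix with one column of predictions per learner; actual: target vector
  rmse <- function(p) sqrt(mean((p - actual)^2))
  chosen <- which.min(apply(preds, 2, rmse))   # start from the single best learner
  ens <- preds[, chosen]
  repeat {
    t <- length(chosen)
    # score every learner (with replacement) as the (t+1)-th member of the running average
    scores <- apply(preds, 2, function(col) rmse((t * ens + col) / (t + 1)))
    if (min(scores) >= rmse(ens) || t >= max_size) break   # stop when no learner helps
    best <- which.min(scores)
    ens <- (t * ens + preds[, best]) / (t + 1)
    chosen <- c(chosen, best)
  }
  list(members = chosen, prediction = ens)
}

Bagging, the third principle, then amounts to running forward_select on repeated subsamples of the data and averaging the resulting predictions, which is exactly what the full code below does with RMSLE.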


Understanding the R code

The R code to ensemble multiple learners is not very easy to follow. Hence, I have broken it into steps, with explanations and inline comments, for ease of understanding:

Step 1 : Load the train and test files

train <- read.csv("train_combined.csv")
test <- read.csv("test_combined.csv")
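Both files are assumed to contain one column of predictions per learner (columns 1 to num_models) followed by the actual target in the last column; the code in the later steps relies on this layout. If you do not have such files handy (a reader asks about the dataset in the comments below), a purely illustrative synthetic stand-in can be generated as follows:

# purely illustrative: synthesize 24 noisy "learner predictions" of a known target
set.seed(42)
n <- 20000
target <- runif(n, 0, 100)
preds <- sapply(1:24, function(i) pmax(target + rnorm(n, sd = 5 + i), 0))
train <- data.frame(preds, target = target)
test <- train[sample(n, 5000), ]    # stand-in test set with the same layout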

Step 2 : Specify basic parameters like the number of bags/iterations and the number of learners/models

num_models <- 24
iterations <- 1000

Step 3 : Load the library needed for the performance metric (optional)
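The rmsle() function called in the steps below matches the one provided by the Metrics package on CRAN, which is presumably the intended library (the step is marked optional presumably because rmsle() could also be defined by hand). For reference, RMSLE is computed as sqrt(mean((log(predicted + 1) - log(actual + 1))^2)) and is symmetric in its two arguments:

# rmsle() is assumed to come from the 'Metrics' package on CRAN
library(Metrics)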


Step 4 : Calculate the individual performance of each model to establish a benchmark

rmsle_mat <- matrix(0,num_models,2)   # column 1: RMSLE of each model, column 2: model index
rmsle_mat[,2] <- 1:num_models
for(i in 1:num_models){
  # column i of train holds model i's predictions; column num_models+1 holds the actual target
  rmsle_mat[i,1] <- rmsle(train[,i],train[,num_models+1])
}
best_model_no <- rmsle_mat[rmsle_mat[,1] == min(rmsle_mat[,1]),2][1]   # best single model (benchmark)

Step 5 : Using the parameters specified above, apply forward selection with replacement across the 1000 bags

x <- matrix(0,1000,iterations)              # records which model was picked at each step of each bag
prediction_test <- matrix(0,nrow(test),1)   # one column of ensemble test predictions per bag
prediction_train <- matrix(0,nrow(train),1) # one column of ensemble train predictions per bag
for (j in 1:iterations){
  rmsle_new <- matrix(0,num_models,2)
  rmsle_new[,2] <- 1:num_models
  t <- 1
  # bagging: each bag works on a random subsample (assumes nrow(train) >= 10000)
  train1 <- train[sample(1:nrow(train), 10000, replace=FALSE),]
  # seed the bag with the single best model on this subsample
  for(i in 1:num_models){
    rmsle_mat[i,1] <- rmsle(train1[,i],train1[,num_models+1])
  }
  best_model_no <- rmsle_mat[rmsle_mat[,1] == min(rmsle_mat[,1]),2][1]
  rmsle_in <- min(rmsle_mat[,1])            # current best RMSLE of the growing ensemble
  prediction <- train1[,best_model_no]
  prediction_1 <- test[,best_model_no]
  prediction_2 <- train[,best_model_no]
  x[t,j] <- best_model_no
  # forward selection with replacement: keep adding the most helpful model until none improves
  while(TRUE) {
    t <- t + 1
    for (i in 1:num_models){
      # candidate: model i as the t-th member of the running average (prediction holds t-1 models)
      prediction1 <- (((t-1)*prediction) + train1[,i])/t
      rmsle_new[i,1] <- rmsle(prediction1,train1[,num_models+1])
    }
    rmsle_min <- min(rmsle_new[,1])
    model_no <- rmsle_new[rmsle_new[,1] == rmsle_min,2][1]   # break ties by taking the first
    if(rmsle_in < rmsle_min) {break} else {
      rmsle_in <- rmsle_min
      prediction <- (((t-1)*prediction) + train1[,model_no])/t
      prediction_1 <- (((t-1)*prediction_1) + test[,model_no])/t
      prediction_2 <- (((t-1)*prediction_2) + train[,model_no])/t
      x[t,j] <- model_no
    }
  }
  prediction_test <- cbind(prediction_test,prediction_1)
  prediction_train <- cbind(prediction_train,prediction_2)
}
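The loop above only collects one column of ensemble predictions per bag in prediction_test and prediction_train. The final step of bagging, a simple average across bags as described in the principles section, still has to be applied; a minimal way to finish (remembering to drop the zero-filled column used to initialise both matrices) would be:

# drop the zero-filled initialisation column, then average across all bags
final_prediction_test <- rowMeans(prediction_test[,-1])
final_prediction_train <- rowMeans(prediction_train[,-1])
rmsle(train[,num_models+1], final_prediction_train)   # compare against the single-model benchmark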



End Notes

Even though bagging tackles a majority of over-fitting cases, it is still good to be cautious about over-fitting in ensemble learners. A possible solution is to set aside one portion of the population untouched and evaluate the performance metrics on this held-out population. The two methods mentioned are in no way an exhaustive list of possible ensemble techniques. Ensembling is more of an art than a science, and most master Kagglers are masters of this art.

Did you find this article useful? Have you tried anything else to find optimal weights? I’ll be happy to hear from you in the comments section below.

If you like what you just read & want to continue your analytics learning, subscribe to our emails, follow us on twitter or like our facebook page.


  • snehil mishra says:

    Hi Kunal sir,
    I am a diploma holder in electronics engineering and also have 2.5+ years of work experience in the mobile service industry. I completed my BCA this year in distance mode.
    I am a working professional right now.
    I want to work in business analytics across multiple domains.
    I want to pursue an MBA, but I am confused whether I should go for regular mode or distance mode, and also about the specialization.
    snehil mishra

    • Tavish says:

      Hi Snehil,
      Please put such generic questions on the forum. That way, you will also get more opinions on your questions from industry leaders. It will also help us keep the article zone specific.

      Thanks for following us,

  • sel says:

    Thanks for the post.
    Can you give us the input dataset?

    • Tavish says:

      Hi sel,
      The post is not specific to any particular dataset. In case you want to experiment with this code, you can download a few datasets from Kaggle.


  • Monit Gehloy says:

    I don’t understand step 4. Why is it needed? What models are being talked about when we have not created any model yet? And what is it doing with the columns of the train set to determine rmsle?
