Tuning the parameters of your Random Forest model

Tavish Srivastava 22 Aug, 2023 • 6 min read

Why Tune Machine Learning Algorithms?

A month back, I participated in a Kaggle competition called TFI. I started with my first submission at the 50th percentile. Having worked relentlessly on feature engineering for more than two weeks, I managed to reach the 20th percentile. To my surprise, right after tuning the parameters of the machine learning algorithm I was using, I broke into the top 10th percentile.

That is how important tuning a machine learning algorithm can be. Random Forest is one of the easiest machine learning tools used in the industry. In our previous articles, we introduced you to Random Forest and compared it against a CART model. Tools like this are known for strong out-of-the-box performance, but tuning takes them further.


What is a Random Forest?

Random forest is an ensemble tool which takes a subset of observations and a subset of variables to build a decision tree. It builds multiple such decision trees and amalgamates them together to get a more accurate and stable prediction. This is a direct consequence of the fact that the majority vote from a panel of independent judges gives a better final prediction than the best individual judge.


We generally see a random forest as a black box which takes inputs and gives out predictions, without worrying too much about the calculations going on at the back end. This black box itself has a few levers we can play with. Each of these levers has some effect either on the performance of the model or on the resource-time balance. In this article we will talk about these levers we can tune while building a random forest model.

What Are the Parameters to Tune in a Random Forest Model?

Parameters in a random forest either increase the predictive power of the model or make the model easier to train. The following are the parameters we will talk about in more detail (note that I am using Python's conventional nomenclature for these parameters):


Parameters which make the model's predictions better


There are primarily 3 parameters which can be tuned to improve the predictive power of the model:

1.a. max_features:

This is the maximum number of features Random Forest is allowed to try in an individual tree. There are multiple options available in Python to set the maximum features. Here are a few of them (a short sketch follows the list):

  1. Auto/None: This simply takes all the features in every tree; here we put no restriction on the individual tree. (In scikit-learn, None always means all features; "auto" meant all features for the regressor used in this article but is equivalent to "sqrt" for the classifier.)
  2. sqrt: This option takes the square root of the total number of features for each individual tree. For instance, if the total number of variables is 100, each tree considers only 10 of them. "log2" is another similar option for max_features.
  3. 0.2: This option allows the random forest to use 20% of the variables for each individual tree. We can assign any fraction in the format 0.x to have that proportion of the features considered.
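
Here is a minimal sketch of how these options are passed in scikit-learn (illustrative only; these models are not fitted to any data):

from sklearn.ensemble import RandomForestRegressor

# Each forest uses a different max_features rule at every split.
model_all = RandomForestRegressor(max_features=None)     # no restriction: all features
model_sqrt = RandomForestRegressor(max_features="sqrt")  # square root of the feature count
model_frac = RandomForestRegressor(max_features=0.2)     # 20% of the features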

How does “max_features” impact performance and speed?

Increasing max_features generally improves the performance of the model, since at each node there is now a higher number of options to consider. However, this is not guaranteed, because it also decreases the diversity of the individual trees, which is the USP of a random forest. And you certainly decrease the speed of the algorithm by increasing max_features. Hence, you need to strike the right balance and choose an optimal max_features.

1.b. n_estimators:

This is the number of trees you want to build before taking the maximum vote or the average of the predictions. A higher number of trees gives you better performance but makes your code slower. You should choose as high a value as your processor can handle, because this makes your predictions stronger and more stable. A quick way to see where extra trees stop paying off is to score the model at a few tree counts, as in the sketch below.
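
This sketch assumes X and y are an already-prepared feature matrix and target; the candidate tree counts are arbitrary:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Score the forest at increasing tree counts; gains usually flatten out.
for n in [10, 50, 100, 300]:
    model = RandomForestRegressor(n_estimators=n, random_state=1, n_jobs=-1)
    print(n, cross_val_score(model, X, y, cv=5).mean())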

1.c. min_samples_leaf:

If you have built a decision tree before, you can appreciate the importance of the minimum sample leaf size. A leaf is the end node of a decision tree. A smaller leaf makes the model more prone to capturing noise in the training data. Generally I prefer a minimum leaf size of more than 50; however, you should try multiple leaf sizes to find the optimum for your use case.
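
As a sketch of that search (again assuming a prepared X and y; the candidate leaf sizes are arbitrary):

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Larger leaves smooth away noise but can underfit; compare a few sizes.
for leaf in [1, 10, 50, 100]:
    model = RandomForestRegressor(min_samples_leaf=leaf, n_estimators=100,
                                  random_state=1, n_jobs=-1)
    print(leaf, cross_val_score(model, X, y, cv=5).mean())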

Parameters which make model training easier

There are a few parameters which have a direct impact on model training speed. The following are the key parameters you can tune for model speed:

2.a. n_jobs:

This parameter tells the engine how many processors it is allowed to use. A value of "-1" means there is no restriction, whereas a value of "1" means it can use only one processor. Here is a simple experiment you can do in Python to check this:

from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=100, oob_score=True, n_jobs=1, random_state=1)
%timeit model.fit(X, y)

Output: 1 loop, best of 3: 1.7 s per loop

model = RandomForestRegressor(n_estimators=100, oob_score=True, n_jobs=-1, random_state=1)
%timeit model.fit(X, y)

Output: 1 loop, best of 3: 1.1 s per loop

"%timeit" is an awesome function which runs a statement multiple times and reports the fastest run time. This comes in very handy while scaling up a particular function from a prototype to the final dataset.

2.b. random_state:

This parameter makes a solution easy to replicate. A definite value of random_state will always produce the same results given the same parameters and training data. I have personally found that an ensemble of multiple models with different random states and otherwise identical optimal parameters sometimes performs better than any individual random state.
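
A sketch of that idea, averaging predictions from forests that differ only in their seed (X_train, y_train and X_test are assumed to exist; the seeds are arbitrary):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Fit one forest per seed and average their predictions.
preds = []
for seed in [1, 7, 42]:
    model = RandomForestRegressor(n_estimators=100, random_state=seed, n_jobs=-1)
    model.fit(X_train, y_train)
    preds.append(model.predict(X_test))
ensemble_pred = np.mean(preds, axis=0)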

2.c. oob_score:

This is a random forest cross-validation method. It is very similar to the leave-one-out validation technique, but much faster. This method tags every observation with the trees in which it was used, and then finds a maximum vote score for every observation based only on the trees which did not use that observation for training.
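
In scikit-learn the estimate is exposed as the oob_score_ attribute after fitting. A minimal sketch (X and y assumed prepared):

from sklearn.ensemble import RandomForestRegressor

# With oob_score=True, each observation is scored using only the trees
# that never saw it, giving a validation estimate without a hold-out set.
model = RandomForestRegressor(n_estimators=100, oob_score=True,
                              random_state=1, n_jobs=-1)
model.fit(X, y)
print(model.oob_score_)  # out-of-bag R^2 for a regressor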

Here is a single example of using all of these random forest parameters in a single function call:

model = RandomForestRegressor(n_estimators=100, oob_score=True, n_jobs=-1, random_state=50,
                              max_features="auto", min_samples_leaf=50)
model.fit(X, y)

Learning through a case study

We have referred to the Titanic case study in many of our previous articles. Let's try the same problem again. The objective here is to get a feel for tuning random forest parameters, not for getting the features right. Try the following code to build a basic model:
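
As a sketch of such a basic model (assuming the standard Kaggle Titanic train.csv; the feature choices and cleaning steps here are illustrative, not the only reasonable ones):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Load the Kaggle Titanic training data and do minimal cleaning.
train = pd.read_csv("train.csv")
train["Sex"] = train["Sex"].map({"male": 0, "female": 1})
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Fare"] = train["Fare"].fillna(train["Fare"].median())

features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
X, y = train[features], train["Survived"]

# A baseline forest using the parameters discussed above.
model = RandomForestClassifier(n_estimators=100, min_samples_leaf=50,
                               max_features="sqrt", oob_score=True,
                               n_jobs=-1, random_state=1)
model.fit(X, y)
print("OOB accuracy:", model.oob_score_)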

Frequently Asked Questions

Q1. How many parameters are there in random forest?

A. A random forest is an ensemble learning method that combines multiple decision trees. The number of parameters in a random forest depends on the number of trees in the forest and the complexity of each individual tree. Generally, a random forest can have thousands to millions of parameters, with each tree having its own set of parameters based on the number of features and the depth of the tree.

Q2. What parameters should I tune for random forest?

A. When tuning a random forest, key parameters to consider are the number of trees in the forest, the maximum depth of each tree, the number of features considered for splitting at each node, and the criterion used to evaluate the quality of a split (e.g., Gini impurity or entropy). Additionally, parameters related to data sampling, such as the subsample size and random state, can be adjusted for improved performance.
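
One common way to search these parameters jointly is a small grid search; here is a sketch using scikit-learn's GridSearchCV (X and y assumed prepared; the grid values are illustrative):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Try every combination in a small grid and keep the best by CV score.
param_grid = {
    "n_estimators": [100, 300],
    "max_features": ["sqrt", 0.2],
    "min_samples_leaf": [10, 50],
}
search = GridSearchCV(RandomForestClassifier(random_state=1),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)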

End Notes

Machine learning tools like random forest, SVM, and neural networks are all used for their high performance, but users generally don't understand how they actually work. Not knowing the statistical details of the model is not a concern; however, not knowing how the model can be tuned to fit the training data well keeps the user from exploiting the algorithm to its full potential. In future articles we will take up the tuning of other machine learning algorithms like SVM, GBM and neural networks.

Have you used random forest before? What parameters did you tune? How did tuning the algorithm impact the performance of the model? Did you see any significant benefit from doing so? Do let us know your thoughts about this guide in the comments section below.

If you like what you just read and want to continue your analytics learning, subscribe to our emails, follow us on Twitter or like our Facebook page.


Tavish Srivastava, co-founder and Chief Strategy Officer of Analytics Vidhya, is an IIT Madras graduate and a passionate data-science professional with 8+ years of diverse experience in markets including the US, India and Singapore, domains including Digital Acquisitions, Customer Servicing and Customer Management, and industries including Retail Banking, Credit Cards and Insurance. He is fascinated by the idea of artificial intelligence inspired by human intelligence and enjoys every discussion, theory or even movie related to this idea.


Responses From Readers


Aayush Agrawal 10 Jun, 2015

Brilliantly written article. I have used all of these techniques in a data science problem I was working on, and they definitely help in improving model performance and accuracy. Recently, I came across something else when reading articles on Random Forest: regularization of Random Forests. The theme was to split on a variable only if the split is significant enough under statistical validation. This is something which can take Random Forest to the next level, as it can help in reducing over-fitting. I tried to use it via the R caret package, but I think this technique is computationally expensive, so I couldn't run it on my system. I would love to see an article on it to understand how it works and how its performance can be improved.

KARTHI V 10 Jun, 2015

Hi Tavish, Very useful article.

Ravi 11 Sep, 2015

I love AV and am a fan of your articles. I have heard of Conditional Inference Trees, which are similar to Random Forests. Can you share your thoughts on Conditional Inference Trees as well? How do they work, what are their tuning parameters, and when do they outperform Random Forests?

Josh 17 Sep, 2015

Great article! I would love to see something similar regarding parameter tuning for the XGBoost package.

John S 12 Oct, 2016

This was a very nice article. I would still be interested to know if there is a minimum number of trees that can be calculated to reduce computational cost?

Rich 16 Oct, 2016

Perfect! This is exactly what I was looking for. Thanks for sharing.

Shailesh 04 Jul, 2017

Thanks for a nice article. Random Forest takes a subset of observations from the original sample. In Python, how to find how many observations were selected by a tree? Also, is there any way to specify the % of observations to be kept in the sample?

SRIRAM SETHURAMAN 20 Jul, 2017

I have a question. I am working on a problem to predict a 'Y' variable with three independent variables. I am using the Random Forest algorithm (regression), and if I transform the values of X and Y to LOG, there is an improvement in the prediction. Having said that, I ultimately need to transform them back into original values anyway, and in that case the differences between actual Y and predicted Y are huge. Is it the right thing to transform Y into LOG before training? The X and Y are amount fields, and in our domain they are pretty huge.

Kanav 29 Jul, 2017

Hi Tavish, great article! When I run clf.oob_prediction it says clf has no such attribute, and when I check the sklearn website there is also no such attribute. Is there any alternative?

Amy 07 Dec, 2017

Hello, Thank you for this interesting article. Could you please tell us more about the other parameters : max_leaf_nodes, min_impurity_split, etc. ? Regards,

vikasgupta.net@live.com 26 Dec, 2017

Please add some information about tuneRF in R and how it helps to tune the parameters.

Vivek Purkayastha 26 Dec, 2017

Hi Tavish... It is a nice article... but there seems to be a slight mistake. I assume by Python you really mean scikit-learn. In max_features, auto and None are not the same thing: auto actually is sqrt and is also the default option, whereas None actually means include all the features, according to the scikit-learn documentation. Correct me if I am wrong. Thank

Arun Singh 20 Mar, 2018

I usually get confused about this topic. Very well explained. thanks a lot.
