Random Forest vs Decision Tree | Which Is Right for You?

Abhishek Sharma 16 Feb, 2024 • 10 min read

Introduction

There are many ways in which machine learning models make decisions, and Decision Trees and Random Forests are two of the most common decision-making processes used in ML. Hence, there is always confusion, comparison, and debate about Random Forest vs Decision Tree. Each has its own advantages, disadvantages, and specific use cases, based on which we can choose the one that fits our requirements and project. This article will give you all the information required to make this choice.

Learning Objectives

  • A brief introduction to decision trees and an overview of random forests.
  • A clash of random forest and decision tree (in code!), why the random forest outperformed the decision tree, and when to choose which algorithm.

Random Forest vs Decision Tree Explained by Analogy

Let’s start with a thought experiment that will illustrate the difference between a decision tree and a random forest model.

Suppose a bank has to approve a small loan for a customer, and it needs to make the decision quickly. The bank checks the person’s credit history and financial condition and finds that they haven’t repaid an older loan yet. Hence, the bank rejects the application. But here’s the catch – the loan amount was tiny compared to the bank’s immense coffers, and they could easily have approved it as a very low-risk move. Therefore, the bank lost the chance to make some money.

Now, another loan application comes in a few days down the line, but this time the bank comes up with a different strategy – multiple decision-making processes. Sometimes it checks for credit history first, and sometimes it checks for the customer’s financial condition and loan amount first. Then, the bank combines the results from these multiple decision-making processes and decides to give the loan to the customer.

Even if this process took more time than the previous one, the bank profited using this method. This is a classic example where collective decision-making outperformed a single decision-making process. Now, here’s my question to you – do you know what these two processes represent?

These two processes represent a decision tree and a random forest! We’ll explore this idea in detail here, dive into the major differences between these two methods, and answer the key question – which machine learning algorithm should you go with?

Overview of Random Forest vs Decision Tree

| Aspect | Random Forest | Decision Tree |
|---|---|---|
| Nature | Ensemble of multiple decision trees | Single decision tree |
| Bias-variance trade-off | Lower variance, reduced overfitting | Higher variance, prone to overfitting |
| Predictive accuracy | Generally higher due to the ensemble | May vary; prone to overfitting |
| Robustness | More robust to outliers and noise | Sensitive to outliers and noise |
| Training time | Slower due to multiple tree construction | Faster as it builds a single tree |
| Interpretability | Less interpretable due to the ensemble | More interpretable as a single tree |
| Feature importance | Provides feature importance scores | Provides feature importance, but less reliably |
| Usage | Suitable for complex tasks, high-dimensional data | Simple tasks, easy interpretation |

What Are Decision Trees?

A decision tree is a supervised machine learning algorithm that can be used for both classification and regression problems. The algorithm builds its model in the structure of a tree, with decision nodes and leaf nodes. A decision tree is simply a series of sequential decisions made to reach a specific result. Here’s an illustration of a decision tree in action (using our example above):

[Figure: a decision tree deciding loan approval, using the bank example above]

Let’s understand how this tree works:

First, it checks if the customer has a good credit history. Based on that, it classifies the customer into two groups, i.e., customers with good credit history and customers with bad credit history. Then, it checks the income of the customer and again classifies him/her into two groups. Finally, it checks the loan amount requested by the customer. Based on the outcomes from checking these three features, the decision tree decides if the customer’s loan should be approved or not.

The features/attributes and conditions can change based on the data and complexity of the problem, but the overall idea remains the same. So, a decision tree makes a series of decisions based on a set of features/attributes present in the data, which in this case were credit history, income, and loan amount.

Now, you might be wondering:

Why did the decision tree check the credit history first and not the income?

This is known as feature importance, and the sequence of attributes to be checked is decided on the basis of criteria like the Gini Impurity Index or Information Gain. The explanation of these concepts is outside the scope of our article here, but you can refer to either of the below resources to learn all about decision trees:

Note: The idea behind this article is to compare decision trees and random forests. Therefore, I will not go into the details of the basic concepts, but I will provide the relevant links in case you wish to explore them further.

What Is Random Forest?

The decision tree algorithm is quite easy to understand and interpret. But a single tree is often not sufficient for producing effective results, which is why data scientists mostly use ensembles in practice. This is where the Random Forest algorithm comes into the picture.


Random Forest is a tree-based machine learning algorithm that leverages the power of multiple decision trees for making decisions. As the name suggests, it is a “forest” of trees!

But why do we call it a “random” forest? That’s because it is a forest of randomly created decision trees. Each tree is trained on a bootstrap sample – that is, a set of rows drawn from the training dataset at random, with replacement – and each node in a tree works on a random subset of features to calculate the output. The random forest then combines the outputs of the individual decision trees, making the final decision by majority voting.
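
To make bootstrapping and majority voting concrete, here is a tiny self-contained sketch with toy numbers (not the loan data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Bootstrapping: draw training rows at random *with replacement*.
# Some rows appear more than once, others are left out entirely.
row_indices = np.arange(10)
bootstrap_sample = rng.choice(row_indices, size=10, replace=True)
print(bootstrap_sample)

# Majority voting: each tree casts one vote (1 = approve, 0 = reject);
# the forest returns the most common vote.
tree_votes = np.array([1, 0, 1, 1, 0])
final_prediction = np.bincount(tree_votes).argmax()
print(final_prediction)  # -> 1, i.e. approve the loan
```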

In simple words:

The Random Forest Algorithm combines the output of multiple (randomly created) Decision Trees to generate the final output.

[Figure: a random forest as an ensemble of decision trees]

This process of combining the output of multiple individual models (also known as weak learners) is called Ensemble Learning. If you want to read more about how the random forest and other ensemble learning algorithms work, check out the following articles:

Now the question is, how can we decide which algorithm to choose between a decision tree and a random forest? Let’s see them both in action before we make any conclusions!

Random Forest vs. Decision Tree in Python

In this section, we will be using Python to solve a binary classification problem using both a decision tree as well as a random forest. We will then compare their results and see which one suited our problem the best.

We’ll be working on the Loan Prediction dataset from Analytics Vidhya’s DataHack platform. This is a binary classification problem where we have to determine if a person should be given a loan or not based on a certain set of features.

Note: You can go to the DataHack platform and compete with other people in various online machine-learning competitions and stand a chance to win exciting prizes.

Ready to code?

Step 1: Loading the Libraries and Dataset

Let’s start by importing the required Python libraries and our dataset:
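
Here’s a minimal sketch of that step; the file name train.csv is an assumption, so use whatever name your download of the dataset carries:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# The file name is an assumption -- use whatever name your download
# of the Loan Prediction dataset carries.
df = pd.read_csv('train.csv')
print(df.shape)  # (614, 13)
df.head()
```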

The dataset consists of 614 rows and 13 features, including credit history, marital status, loan amount, and gender. Here, the target variable is Loan_Status, which indicates whether a person should be given a loan or not.

Step 2: Data Preprocessing

Now comes the most crucial part of any data science project – data preprocessing and feature engineering. In this section, I will deal with the categorical variables in the data and impute the missing values.

I will impute the missing values in the categorical variables with the mode and those in the continuous variables with the mean (for the respective columns). We will also label encode the categorical values in the data. You can read this article to learn more about Label Encoding.

Python Code:
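
(A sketch of this step; the dtype-based column selection is an implementation assumption, since the original notebook isn’t shown.)

```python
from sklearn.preprocessing import LabelEncoder

# Split columns by dtype (an implementation assumption; the original
# code may have listed the columns explicitly)
cat_cols = df.select_dtypes(include='object').columns
num_cols = df.select_dtypes(include='number').columns

# Impute missing values: mode for categorical columns, mean for continuous
for col in cat_cols:
    df[col] = df[col].fillna(df[col].mode()[0])
for col in num_cols:
    df[col] = df[col].fillna(df[col].mean())

# Label encode the categorical variables
le = LabelEncoder()
for col in cat_cols:
    df[col] = le.fit_transform(df[col])
```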


Step 3: Creating Train and Test Sets

Now, let’s split the dataset into training and test sets in an 80:20 ratio:
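
A sketch using scikit-learn’s train_test_split; the fixed random_state and stratification are assumptions for reproducibility:

```python
# 'Loan_ID' and 'Loan_Status' are the standard column names in this
# dataset; adjust if your copy differs.
X = df.drop(columns=['Loan_ID', 'Loan_Status'])
y = df['Loan_Status']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
```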

Let’s take a look at the shape of the created train and test sets:
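
Continuing from the split above:

```python
print(X_train.shape, X_test.shape)  # roughly (491, 11) and (123, 11)
```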


Great! Now we are ready for the next stage, where we’ll build the decision tree and random forest models!

Step 4: Building and Evaluating the Model

Since we have both the training and testing sets, it’s time to train our models and classify the loan applications. First, we will train a decision tree on this dataset:
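
A minimal sketch of this step; the article does not state the exact hyperparameters, so these are the defaults plus a fixed seed:

```python
# Default hyperparameters plus a fixed seed; the article does not state
# the exact settings it used.
dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)
```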

Next, we will evaluate this model using the F1-score, which is the harmonic mean of precision and recall, given by the formula:

F1-score = 2 × (Precision × Recall) / (Precision + Recall)

You can learn more about this and various other evaluation metrics here:

Let’s evaluate the performance of our model using the F1 score:
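
A sketch of the evaluation, comparing in-sample and out-of-sample scores:

```python
# f1_score treats class 1 as positive by default; after label encoding,
# 1 corresponds to an approved loan ('Y') in this dataset.
print('Train F1:', f1_score(y_train, dt.predict(X_train)))
print('Test F1: ', f1_score(y_test, dt.predict(X_test)))
```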


Here, you can see that the decision tree performs well on in-sample evaluation, but its performance decreases drastically on out-of-sample evaluation. Why do you think that’s the case? Unfortunately, our decision tree model is overfitting on the training data. Will random forest solve this issue?

Building a Random Forest Model

Let’s see a random forest model in action:
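
Again a minimal sketch with assumed defaults (n_estimators=100 is scikit-learn’s default):

```python
# n_estimators=100 is scikit-learn's default; the article does not state
# the exact settings it used.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

print('Train F1:', f1_score(y_train, rf.predict(X_train)))
print('Test F1: ', f1_score(y_test, rf.predict(X_test)))
```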


Here, we can clearly see that the random forest model performed much better than the decision tree in the out-of-sample evaluation. Let’s discuss the reasons behind this in the next section.

Why Did Our Random Forest Model Outperform the Decision Tree?

Random forest leverages the power of multiple decision trees. It does not rely on the feature importance given by a single decision tree. Let’s take a look at the feature importance given by different algorithms to different features:
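
A sketch of how such a comparison can be produced from the two fitted models (the plotting details are an assumption):

```python
import matplotlib.pyplot as plt

# Side-by-side importances from the two fitted models
importances = pd.DataFrame({
    'Decision Tree': dt.feature_importances_,
    'Random Forest': rf.feature_importances_,
}, index=X.columns)

importances.plot.barh(figsize=(8, 6))
plt.xlabel('Feature importance')
plt.tight_layout()
plt.show()
```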


As you can clearly see in the above graph, the decision tree model gives high importance to a particular set of features. But the random forest chooses features randomly during the training process. Therefore, it does not depend highly on any specific set of features. This is a special characteristic of random forests over bagging trees. You can read more about the bagging trees classifier here.

Therefore, the random forest can generalize the data in a better way. This randomized feature selection makes a random forest much more accurate than a decision tree.

How to Choose Between Decision Tree & Random Forest?

So what is the final verdict in the Random Forest vs Decision Tree debate? How do we decide which one is better and which one to choose?

Random Forest is suitable for situations when we have a large dataset, and interpretability is not a major concern.

Decision trees are much easier to interpret and understand. Since a random forest combines multiple decision trees and aggregates their results, it becomes more difficult to interpret. Here’s the good news – it’s not impossible to interpret a random forest. Here is an article that talks about interpreting results from a random forest model:

Also, Random Forest has a higher training time than a single decision tree. You should take this into consideration because as we increase the number of trees in a random forest, the total training time also increases. That can often be crucial when you’re working with a tight deadline in a machine learning project.

But I will say this – despite instability and dependency on a particular set of features, decision trees are really helpful because they are easier to interpret and faster to train. Anyone with very little knowledge of data science/data analytics can also use decision trees to make quick data-driven decisions.

Conclusion

Hopefully, by now you’ve figured out how to pick a side in the random forest vs decision tree debate. A decision tree is a collection of decisions, while a random forest is a collection of decision trees. It can get tricky when you’re new to machine learning, but this article has hopefully clarified the differences and similarities for you. Note that the random forest is a predictive modeling tool, not a descriptive one. A random forest is harder to visualize but gives more accurate predictions, while a decision tree is simple to visualize but less accurate. The key advantages of the random forest are that it prevents overfitting and is more accurate in its predictions.

Key Takeaways

  • A decision tree is simpler and more interpretable but prone to overfitting, while a random forest is more complex and reduces the risk of overfitting.
  • A random forest gives more robust and generalized performance on new data and is widely used across domains such as finance and healthcare.

Frequently Asked Questions

Q1. Which algorithm is better: decision tree or random forest?

A. Random forest is a strong modeling technique and much more robust than a single decision tree. Many decision trees are aggregated to limit overfitting, as well as errors due to bias, and to achieve the final result.

Q2. How do you choose between a decision tree and a random forest?

A. A decision tree is a combination of decisions, while a random forest is a combination of many decision trees. A random forest is slower to train but more robust, whereas a decision tree is fast and easy to use, even on large data and especially on regression tasks.

Q3. What is a decision tree?

A. It is a supervised learning algorithm that is utilized for both classification and regression tasks. It has a hierarchical tree structure consisting of a root node, branches, internal nodes, and leaf nodes.

Q4. Is random forest more accurate than decision tree?

A. Random forests are generally more accurate than individual decision trees because they combine multiple trees and reduce overfitting, providing better predictive performance and robustness.

Abhishek Sharma 16 Feb 2024

He is a data science aficionado who loves diving into data and generating insights from it. He is always ready to make machines learn through code and to write technical blogs. His areas of interest include Machine Learning and Natural Language Processing, and he is always open to something new and exciting.

