
Guide on AdaBoost Algorithm

Anshul 03 Oct, 2024
10 min read

Introduction

The AdaBoost algorithm, introduced by Freund and Schapire in 1997, revolutionized ensemble modeling. Since its inception, AdaBoost has become a widely adopted technique for addressing binary classification challenges. The algorithm enhances prediction accuracy by combining a multitude of weak learners into a single robust, strong learner.

The principle behind boosting algorithms is that we first build a model on the training dataset and then build a second model to rectify the errors present in the first. This procedure is continued until the errors are minimized and the dataset is predicted correctly. Boosting combines multiple models (weak learners) to reach the final output (a strong learner).

In this article, you will learn what AdaBoost is, how it works, and how the AdaBoost classifier is used in machine learning. AdaBoost, short for Adaptive Boosting, is an ensemble learning technique that combines multiple weak learners to create a strong classifier, improving the accuracy of machine learning models.

Learning Objectives

  • To understand what the AdaBoost algorithm is and how it works.
  • To understand what stumps are.
  • To find out how boosting algorithms help increase the accuracy of ML models.

This article was published as a part of the Data Science Blogathon

What Is the AdaBoost Algorithm?

There are many machine learning algorithms to choose from for your problem statements. One of these algorithms for predictive modeling is called AdaBoost.

The AdaBoost algorithm, short for Adaptive Boosting, is a boosting technique used as an ensemble method in machine learning. It is called Adaptive Boosting because the weights are re-assigned to each instance, with higher weights assigned to incorrectly classified instances.

[Image: a decision stump, a one-level decision tree used as a weak learner]

The algorithm first builds a model giving equal weight to all the data points. It then assigns higher weights to the points that were wrongly classified, so those points get more importance in the next model. It keeps training models in this way until the error becomes sufficiently low.

[Image: multiple weak learners combined into an ensemble model]

Let’s take an example to understand this. Suppose you build a decision tree on the Titanic dataset and get an accuracy of 80%. You then apply different algorithms and check their accuracy: it comes out to 75% for KNN and 70% for logistic regression.

When building different models on the same dataset, we observe variations in accuracy. Rather than relying on any single model, the AdaBoost classifier combines many weak learners, trained one after another, and takes a weighted vote of their predictions. This lets us achieve higher accuracy and bolster predictive capability effectively.

If you want to understand this visually, I strongly recommend you go through this article.

Here, we will focus on the mathematical intuition.

There is another ensemble learning algorithm called gradient boosting. In that algorithm, each new model is fit to the residual errors of the previous one instead of re-weighting the data points, as in AdaBoost. But in this article, we will only be focusing on the mathematical intuition of Adaptive Boosting.

Understanding the Working of the AdaBoost Algorithm

Let’s understand how this algorithm works under the hood with the following step-by-step tutorial.

Step 1: Assigning Weights

The image shown below is the actual representation of our dataset. Since the target column is binary, it is a classification problem. First of all, these data points will be assigned some weights. Initially, all the weights will be equal.

[Table: the sample dataset with 5 rows, containing Gender, Age, Income, and a binary target column]

The formula to calculate the sample weights is:

Sample weight = 1 / N

Where N is the total number of data points

Here since we have 5 data points, the sample weights assigned will be 1/5.
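
As a quick illustration, here is a minimal NumPy sketch of this initialization (the dataset itself is not needed for this step):

import numpy as np

n_samples = 5
sample_weights = np.full(n_samples, 1 / n_samples)  # every point starts at 1/5 = 0.2
print(sample_weights)  # [0.2 0.2 0.2 0.2 0.2]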

Step 2: Classify the Samples

We start by seeing how well “Gender” classifies the samples, and will then see how the other variables (Age and Income) classify them.

We’ll create a decision stump for each of the features and then calculate the Gini Index of each tree. The tree with the lowest Gini Index will be our first stump.

Here in our dataset, let’s say Gender has the lowest Gini Index, so it will be our first stump.
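
To make the stump selection concrete, here is a minimal sketch of computing the weighted Gini impurity of a candidate split. The encoded Gender column and target values below are made up for illustration and are not the article’s actual dataset:

import numpy as np

def gini(labels):
    # Gini impurity of a single node: 1 - sum(p_k^2)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

def split_gini(feature, labels, threshold):
    # Weighted Gini impurity of the two leaves produced by the split
    left, right = labels[feature < threshold], labels[feature >= threshold]
    n = len(labels)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Hypothetical encoded "Gender" column (0 = female, 1 = male) and binary target
gender = np.array([0, 1, 1, 0, 1])
target = np.array([1, 0, 0, 1, 1])
print(split_gini(gender, target, threshold=0.5))  # lower is better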

Step 3: Calculate the Influence

We’ll now calculate the “Amount of Say” or “Importance” or “Influence” for this classifier in classifying the data points using this formula:

Amount of Say (α) = ½ × ln((1 − Total Error) / Total Error)

The total error is nothing but the summation of all the sample weights of misclassified data points.

Here in our dataset, let’s assume there is 1 wrong output, so our total error will be 1/5, and the alpha (performance of the stump) will be:

α = ½ × ln((1 − 1/5) / (1/5)) = ½ × ln(4) ≈ 0.69

Note: The total error will always be between 0 and 1, where 0 indicates a perfect stump and 1 indicates a horrible stump.

[Graph: the amount of say (α) plotted against the total error, large and positive near 0, zero at 0.5, and large and negative near 1]

From the graph above, we can see that when there is no misclassification, then we have no error (Total Error = 0), so the “amount of say (alpha)” will be a large number.

When the classifier predicts half right and half wrong, then the Total Error = 0.5, and the importance (amount of say) of the classifier will be 0.

If all the samples have been incorrectly classified, then the error will be very high (close to 1), and hence our alpha value will be a large negative number.
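
Putting the formula into code gives a small sketch like the one below (the helper name amount_of_say is just for illustration):

import numpy as np

def amount_of_say(total_error, eps=1e-10):
    # alpha = 0.5 * ln((1 - TE) / TE); eps guards against a total error of exactly 0 or 1
    return 0.5 * np.log((1 - total_error + eps) / (total_error + eps))

print(round(amount_of_say(1 / 5), 2))  # 0.69  -> the Gender stump from our example
print(round(amount_of_say(0.5), 2))    # 0.0   -> no better than guessing
print(round(amount_of_say(0.9), 2))    # -1.1  -> worse than guessing, so alpha is negative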

Step 4: Calculate TE and Performance

You might be wondering about the significance of calculating the Total Error (TE) and performance of an Adaboost stump. The reason is straightforward – updating the weights is crucial. If identical weights are maintained for the subsequent model, the output will mirror what was obtained in the initial model.

The wrong predictions will be given more weight, whereas the weights of the correct predictions will be decreased. Now, when we build our next model after updating the weights, more preference will be given to the points with higher weights.

After finding the importance of the classifier and total error, we need to finally update the weights, and for this, we use the following formula:

New sample weight = old weight × e^(±α)

The exponent of alpha is negative (−α) when the sample is correctly classified, which decreases its weight.

The exponent of alpha is positive (+α) when the sample is misclassified, which increases its weight.

There are four correctly classified samples and one wrongly classified sample. Here, the sample weight of that data point is 1/5, and the amount of say (performance) of the Gender stump is 0.69.

New weights for correctly classified samples are:

New weight = 1/5 × e^(−0.69) ≈ 0.1004

For wrongly classified samples, the updated weights will be:

New weight = 1/5 × e^(+0.69) ≈ 0.3988

Note

See the sign of alpha when plugging in the values: the exponent is negative when the data point is correctly classified, and this decreases the sample weight from 0.2 to 0.1004. It is positive when there is a misclassification, and this increases the sample weight from 0.2 to 0.3988.

We know that the total sum of the sample weights must be equal to 1, but here if we sum up all the new sample weights, we will get 0.8004. To bring this sum equal to 1, we will normalize these weights by dividing all the weights by the total sum of updated weights, which is 0.8004. So, after normalizing the sample weights, we get this dataset, and now the sum is equal to 1.

[Table: the dataset with normalized sample weights, which now sum to 1]
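
The same update and normalization can be sketched in a few lines of NumPy, using the numbers from this example (the row order and which point is misclassified are assumed for illustration):

import numpy as np

alpha = 0.69
weights = np.full(5, 0.2)
correct = np.array([True, True, True, True, False])  # 4 right, 1 wrong, as in the example

# e^(-alpha) shrinks the weights of correct points, e^(+alpha) grows the misclassified one
new_weights = weights * np.exp(np.where(correct, -alpha, alpha))
print(new_weights.round(4))         # roughly [0.1003 0.1003 0.1003 0.1003 0.3987]
print(round(new_weights.sum(), 4))  # about 0.8, not 1

# Normalize so the weights sum to 1 again
normalized = new_weights / new_weights.sum()
print(normalized.round(4), round(normalized.sum(), 4))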

Step 5: Decrease Errors

Now, we need to make a new dataset to see if the errors decreased or not. For this, we will remove the “sample weights” and “new sample weights” columns and then, based on the “new sample weights,” divide our data points into buckets.

[Table: data points with their new sample weights divided into cumulative buckets]

Step 6: New Dataset

We are almost done. Now, what the algorithm does is select random numbers between 0 and 1. Since incorrectly classified records have higher sample weights, the probability of selecting those records is very high.

Suppose the 5 random numbers our algorithm takes are 0.38, 0.26, 0.98, 0.40, and 0.55.

Now we will see where these random numbers fall in the bucket, and according to it, we’ll make our new dataset shown below.

[Table: the new dataset formed from the rows selected by the random numbers]

This comes out to be our new dataset, and we see that the data point which was wrongly classified has been selected 3 times because it has a higher weight.
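
Here is a small sketch of this bucket-based resampling, assuming (for illustration) that the misclassified point sits in the fourth row:

import numpy as np

# Normalized weights from the previous step; the misclassified point is at index 3
normalized_weights = np.array([0.1254, 0.1254, 0.1254, 0.4983, 0.1254])
buckets = np.cumsum(normalized_weights)  # upper edge of each row's bucket

random_draws = np.array([0.38, 0.26, 0.98, 0.40, 0.55])
# Each draw picks the first row whose bucket edge lies at or above it
selected_rows = np.searchsorted(buckets, random_draws)
print(selected_rows)  # [3 2 4 3 3] -> the misclassified row is picked three times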

Step 7: Repeat Previous Steps

Now this acts as our new dataset, and we need to repeat all the above steps, i.e.:

  •  Assign equal weights to all the data points.
  • Find the stump that does the best job classifying the new collection of samples by finding their Gini Index and selecting the one with the lowest Gini index.
  • Calculate the “Amount of Say” and “Total error” to update the previous sample weights.
  •  Normalize the new sample weights.

Iterate through these steps until a low training error is achieved.

Suppose, with respect to our dataset, we have constructed 3 decision trees (DT1, DT2, DT3) in a sequential manner. If we send our test data now, it will pass through all the decision trees, and the final prediction for each test point is the class that receives the larger alpha-weighted vote.
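
As a toy illustration of this final vote (the alphas and stump outputs below are made up, not computed from our dataset):

import numpy as np

alphas = np.array([0.69, 0.41, 0.95])  # hypothetical amounts of say for DT1, DT2, DT3
stump_votes = np.array([1, -1, 1])     # each stump's +1/-1 prediction for one test point

# The final prediction is the sign of the alpha-weighted sum of the votes
final_prediction = np.sign(np.dot(alphas, stump_votes))
print(final_prediction)  # 1.0, since 0.69 - 0.41 + 0.95 > 0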

Python implementation of AdaBoost 

To implement the AdaBoost algorithm in Python, you can either build it from scratch or use libraries like Scikit-learn.

Building AdaBoost from Scratch

Here’s a simple implementation of the AdaBoost algorithm using only NumPy (it expects the class labels to be encoded as -1 and +1):

import numpy as np

class DecisionStump:
    def __init__(self):
        self.polarity = 1
        self.feature_idx = None
        self.threshold = None
        self.alpha = None

    def predict(self, X):
        n_samples = X.shape[0]
        predictions = np.ones(n_samples)
        feature_column = X[:, self.feature_idx]

        if self.polarity == 1:
            predictions[feature_column < self.threshold] = -1
        else:
            predictions[feature_column > self.threshold] = -1

        return predictions

class AdaBoost:
    def __init__(self, n_clf=5):
        self.n_clf = n_clf
        self.clfs = []

    def fit(self, X, y):
        n_samples, n_features = X.shape
        # Step 1: start every sample with an equal weight of 1/N
        w = np.full(n_samples, (1 / n_samples))

        for _ in range(self.n_clf):
            clf = DecisionStump()
            min_error = float('inf')

            for feature_i in range(n_features):
                X_column = X[:, feature_i]
                thresholds = np.unique(X_column)

                for threshold in thresholds:
                    # Predict +1 above the threshold and -1 below it
                    predictions = np.ones(n_samples)
                    predictions[X_column < threshold] = -1

                    # Weighted error: sum of the weights of misclassified samples
                    error = sum(w[y != predictions])

                    # If the stump is worse than chance, flip its polarity
                    if error > 0.5:
                        error = 1 - error
                        p = -1
                    else:
                        p = 1

                    if error < min_error:
                        clf.polarity = p
                        clf.threshold = threshold
                        clf.feature_idx = feature_i
                        min_error = error

            # Amount of say; EPS avoids division by zero when the error is exactly 0
            EPS = 1e-10
            clf.alpha = 0.5 * np.log((1.0 - min_error + EPS) / (min_error + EPS))

            # Re-weight the samples: misclassified points get larger weights
            predictions = clf.predict(X)
            w *= np.exp(-clf.alpha * y * predictions)
            w /= np.sum(w)
            self.clfs.append(clf)

    def predict(self, X):
        clf_preds = [clf.alpha * clf.predict(X) for clf in self.clfs]
        y_pred = np.sum(clf_preds, axis=0)
        return np.sign(y_pred)
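
A quick smoke test of this class might look as follows (a hypothetical usage example continuing from the code above; note that the labels must be encoded as -1 and +1):

from sklearn.datasets import make_classification

# Synthetic binary data, with the 0/1 labels mapped to -1/+1
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
y = np.where(y == 0, -1, 1)

model = AdaBoost(n_clf=5)
model.fit(X, y)
print("Training accuracy:", np.mean(model.predict(X) == y))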

Using Scikit-learn

If you prefer a more straightforward approach, you can use the Scikit-learn library, which has a built-in AdaBoost classifier. Here’s how to do it:

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd

# Load dataset
data = pd.read_csv("Iris.csv")  # Adjust the file path as necessary
X = data.iloc[:, :-1].values  # Features (drop an Id column first if your CSV includes one)
y = data.iloc[:, -1].values  # Target (the species column)

# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create the AdaBoost classifier (scikit-learn >= 1.2 uses `estimator`; older versions call it `base_estimator`)
abc = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1), n_estimators=50)

# Fit the model
abc.fit(X_train, y_train)

# Predict and evaluate
y_pred = abc.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))

Conclusion

If you have understood each and every line of this article, you are well on your way to mastering this algorithm.

We started by introducing what boosting is and its various types, so you could see exactly where the AdaBoost classifier and algorithm fit. We then applied straightforward math and saw how every part of the formula works.

Hope you liked the article! The AdaBoost algorithm enhances the performance of weak classifiers by combining their predictions, and this simple idea improves accuracy significantly.

In the next article, I will explain Gradient Boosting and Extreme Gradient Boosting (XGBoost), which are a few more important boosting techniques used to enhance prediction power.

If you want a beginner-friendly Python implementation of the AdaBoost classifier machine learning model from scratch, visit this complete guide from Analytics Vidhya. That article also covers the difference between bagging and boosting, as well as the advantages and disadvantages of the AdaBoost algorithm.

Key Takeaways:

  • In this article, we understood how AdaBoost works.
  • We understood the math behind AdaBoost.
  • We learned how weak learners are used as estimators to increase accuracy.
Frequently Asked Questions

Q1. Is the AdaBoost algorithm supervised or unsupervised?

A. Adaboost falls under the supervised learning branch of machine learning. This means that the training data must have a target variable. Using the adaboost learning technique, we can solve both classification and regression problems.
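
For the regression case, scikit-learn provides AdaBoostRegressor. A minimal sketch, using synthetic data purely for illustration:

from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor

# Synthetic regression data, just to illustrate the API
X, y = make_regression(n_samples=200, n_features=4, noise=10, random_state=0)

reg = AdaBoostRegressor(n_estimators=50, random_state=0)
reg.fit(X, y)
print(reg.predict(X[:3]))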

Q2. What are the advantages of the AdaBoost algorithm?

A. Less preprocessing is required, as you do not need to scale the independent variables. Each iteration in the AdaBoost algorithm uses decision stumps as individual models, so the preprocessing required is the same as for decision trees. AdaBoost is also less prone to overfitting. In addition to choosing the weak learners, we can fine-tune hyperparameters (learning_rate, for example) in these ensemble techniques to get even better accuracy, as sketched below.
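
As an illustration, hyperparameters such as n_estimators and learning_rate can be tuned with scikit-learn's GridSearchCV; the grid values below are arbitrary choices, shown on the built-in Iris data:

from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Search over the number of weak learners and the learning rate
param_grid = {"n_estimators": [50, 100, 200], "learning_rate": [0.01, 0.1, 1.0]}
search = GridSearchCV(AdaBoostClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)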

Q3. How do you use the AdaBoost algorithm?

A. Much like random forests, decision trees, logistic regression, and SVM classifiers, AdaBoost requires the training data to have a target variable. This target variable can be either categorical or continuous. The scikit-learn library contains AdaBoost classifiers and regressors; hence, we can use sklearn in Python to create an AdaBoost model.

Q4. What is the difference between AdaBoost and boosting?

Boosting: Makes a strong learner from many weak ones. Focuses on improving past mistakes with each learner.

AdaBoost: A type of boosting. Focuses on hard-to-learn examples by giving them more weight during training.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Anshul 03 Oct, 2024

I have recently graduated with a Bachelor's degree in Statistics and am passionate about pursuing a career in the field of data science, machine learning, and artificial intelligence. Throughout my academic journey, I thoroughly enjoyed exploring data to uncover valuable insights and trends. I am eager to continue learning and expanding my knowledge in the field of data science. I am particularly interested in exploring deep learning and natural language processing, and I am constantly seeking out new challenges to improve my skills. My ultimate goal is to use my expertise to help businesses and organizations make data-driven decisions and drive growth and success.

Responses From Readers


ganesh
ganesh 17 Dec, 2021

really it was prety much good explanation with appropriate example

Rajesh
Rajesh 11 Jan, 2022

Appreciated efforts. Its very easy to understand

Siya
Siya 16 May, 2022

Great n helpful articles. Easy language n understandable.

yash
yash 11 Sep, 2022

why we randomly select number in step 6 and not directly the incorrect classified?

Prosenjit Banerjee
Prosenjit Banerjee 15 Sep, 2022

What will happen if the total error is 0? How do we calculate performance of stump?

Sourav
Sourav 03 Oct, 2022

Nice article. Thank you

HASHIR BIN ABDUL AZEEZ
HASHIR BIN ABDUL AZEEZ 14 Oct, 2022

we're selecting new dataset by bootstrapping technique.

rtz
rtz 30 Dec, 2022

Is this section in this correct in this article: “ The amount of say (alpha) will be negative when the sample is correctly classified. The amount of say (alpha) will be positive when the sample is miss-classified.” Should not be it reversed ?

Bharath Vamsi
Bharath Vamsi 12 Jun, 2023

Thanks Anshul Saini that is a fantastice explanation

Hung
Hung 21 Aug, 2023

When the total error is 0, the model classify all data successfully. So I think the algorithm can be stopped.

TV schedule
TV schedule 24 Sep, 2023

I found this blog post very helpful in understanding the AdaBoost algorithm. I found the implementation to be straightforward and I am now able to master the algorithm.

online translate
online translate 10 Oct, 2023

I found this blog post very helpful in understanding the AdaBoost algorithm. I found the implementation to be straightforward and I am now able to master the algorithm.

vishnu
vishnu 17 Nov, 2023

because we should also take correctly classified instances into consideration along with misclassified instances which is not so convenient if we directly take the misclassified instances only hope it helps........

Sridhar Nomula
Sridhar Nomula 21 Apr, 2024

I like the flow of information but i am not sure if the weight update formula is correct. I see different formula in the original paper. Please clarify if i am missing something. Thanks