6 Easy Steps to Learn Naive Bayes Algorithm (with codes in Python and R)

Note: This article was originally published on Sep 13th, 2015 and updated on Sept 11th, 2017

Introduction

Here’s a situation you’ve got into:

You are working on a classification problem and you have generated your set of hypotheses, created features, and discussed the importance of variables. Within an hour, stakeholders want to see the first cut of the model.

What will you do? You have hundreds of thousands of data points and quite a few variables in your training data set. In such a situation, if I were in your place, I would use 'Naive Bayes', which can be extremely fast relative to other classification algorithms. It works on Bayes' theorem of probability to predict the class of an unknown data set.

In this article, I'll explain the basics of this algorithm, so that the next time you come across a large data set, you can bring this algorithm into action. In addition, if you are a newbie in Python or R, you need not be overwhelmed: working code in both languages is included in this article.

Table of Contents

  1. What is the Naive Bayes algorithm?
  2. How does the Naive Bayes algorithm work?
  3. What are the Pros and Cons of using Naive Bayes?
  4. 4 Applications of Naive Bayes Algorithm
  5. Steps to build a basic Naive Bayes Model in Python
  6. Tips to improve the power of Naive Bayes Model

 

What is the Naive Bayes algorithm?

It is a classification technique based on Bayes’ Theorem with an assumption of independence among predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that this fruit is an apple and that is why it is known as ‘Naive’.

The Naive Bayes model is easy to build and particularly useful for very large data sets. Along with simplicity, Naive Bayes is known to outperform even highly sophisticated classification methods.

Bayes' theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x), and P(x|c). Look at the equation below:

P(c|x) = P(x|c) * P(c) / P(x)

Above,

  • P(c|x) is the posterior probability of class (c, target) given predictor (x, attributes).
  • P(c) is the prior probability of class.
  • P(x|c) is the likelihood which is the probability of predictor given class.
  • P(x) is the prior probability of predictor.
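
For a record with several attributes X = (x1, x2, …, xn), the "naive" independence assumption lets the likelihood factorize, so the classifier simply multiplies the individual conditional probabilities:

P(c|X) ∝ P(x1|c) * P(x2|c) * … * P(xn|c) * P(c)

The denominator P(X) is the same for every class, so it can be dropped; the class with the largest product is the prediction.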

 

How does the Naive Bayes algorithm work?

Let's understand it using an example. Below I have a training data set of weather and the corresponding target variable 'Play' (suggesting the possibility of playing). Now, we need to classify whether players will play or not based on the weather conditions. Let's follow the steps below to perform it.

Step 1: Convert the data set into a frequency table

Step 2: Create a likelihood table by finding the probabilities. For example, the Overcast probability is 4/14 = 0.29 and the probability of playing is 9/14 = 0.64.

Frequency table (from the weather/Play training data):

Weather    | No | Yes | Total
Overcast   |  0 |  4  |   4
Rainy      |  3 |  2  |   5
Sunny      |  2 |  3  |   5
Total      |  5 |  9  |  14

Likelihood table: P(Overcast) = 4/14 = 0.29, P(Rainy) = 5/14 = 0.36, P(Sunny) = 5/14 = 0.36; P(Yes) = 9/14 = 0.64, P(No) = 5/14 = 0.36.
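
As a quick illustration, here is a minimal Python sketch of Steps 1 and 2 (assuming pandas is installed; the 14 records are reconstructed from the frequency table above):

#Reconstruct the 14-record weather/Play data set
import pandas as pd

weather = ["Sunny"]*5 + ["Overcast"]*4 + ["Rainy"]*5
play = ["Yes", "Yes", "Yes", "No", "No",    # Sunny: 3 Yes, 2 No
        "Yes", "Yes", "Yes", "Yes",         # Overcast: 4 Yes
        "Yes", "Yes", "No", "No", "No"]     # Rainy: 2 Yes, 3 No
df = pd.DataFrame({"Weather": weather, "Play": play})

#Step 1: frequency table
freq = pd.crosstab(df["Weather"], df["Play"])
print(freq)

#Step 2: likelihoods, e.g. P(Overcast) = 4/14 = 0.29 and P(Yes) = 9/14 = 0.64
print(freq.sum(axis=1) / len(df))  # P(Weather)
print(freq.sum(axis=0) / len(df))  # P(Play)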

Step 3: Now, use the Naive Bayes equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of the prediction.

Problem: Players will play if the weather is sunny. Is this statement correct?

We can solve it using the method of posterior probability discussed above.

P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)

Here we have P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, and P(Yes) = 9/14 = 0.64.

Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60. The same calculation for the other class gives P(No | Sunny) = 0.40 * 0.36 / 0.36 = 0.40, so 'Yes' has the higher posterior probability and the prediction is that players will play.
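
To make the arithmetic concrete, here is a minimal plain-Python sketch (no libraries needed; the counts come from the frequency table above) that reproduces this calculation for both classes:

#Counts from the frequency table
n_total, n_yes, n_no = 14, 9, 5
n_sunny_yes, n_sunny_no = 3, 2
p_sunny = (n_sunny_yes + n_sunny_no) / n_total  # P(Sunny) = 5/14

#Posteriors via Bayes' theorem
p_yes_given_sunny = (n_sunny_yes / n_yes) * (n_yes / n_total) / p_sunny  # = 0.60
p_no_given_sunny = (n_sunny_no / n_no) * (n_no / n_total) / p_sunny      # = 0.40
print(p_yes_given_sunny, p_no_given_sunny)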

Naive Bayes uses a similar method to predict the probability of different classes based on various attributes. This algorithm is mostly used in text classification and with problems having multiple classes.

 

What are the Pros and Cons of Naive Bayes?

Pros:

  • It is easy and fast to predict the class of a test data set. It also performs well in multi-class prediction.
  • When the assumption of independence holds, a Naive Bayes classifier performs better compared to other models like logistic regression, and you need less training data.
  • It performs well with categorical input variables compared to numerical variable(s). For numerical variables, a normal distribution is assumed (a bell curve, which is a strong assumption).

Cons:

  • If a categorical variable has a category in the test data set that was not observed in the training data set, the model will assign it zero probability and will be unable to make a prediction. This is often known as "zero frequency". To solve it, we can use a smoothing technique; one of the simplest is Laplace estimation.
  • On the flip side, Naive Bayes is also known to be a bad estimator, so the probability outputs from predict_proba are not to be taken too seriously (see the calibration sketch after this list).
  • Another limitation of Naive Bayes is the assumption of independent predictors. In real life, it is almost impossible to get a set of predictors that are completely independent.
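
On the predict_proba point: if you do need well-calibrated probabilities, a common remedy is to wrap the classifier in a probability calibrator. A minimal sketch, assuming scikit-learn is available (the data set here is synthetic, purely for illustration):

#Calibrate Naive Bayes probability estimates
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

#Sigmoid (Platt) calibration with 3-fold cross-validation
calibrated = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3)
calibrated.fit(X, y)
print(calibrated.predict_proba(X[:3]))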

 

4 Applications of Naive Bayes Algorithm

  • Real-time Prediction: Naive Bayes is an eager learning classifier and it is certainly fast. Thus, it can be used for making predictions in real time.
  • Multi-class Prediction: This algorithm is also well known for its multi-class prediction feature. Here we can predict the probability of multiple classes of the target variable.
  • Text Classification / Spam Filtering / Sentiment Analysis: Naive Bayes classifiers, mostly used in text classification (due to better results in multi-class problems and the independence rule), have a higher success rate compared to other algorithms. As a result, they are widely used in spam filtering (identifying spam e-mail) and sentiment analysis (in social media analysis, to identify positive and negative customer sentiments).
  • Recommendation System: A Naive Bayes classifier and collaborative filtering together build a recommendation system that uses machine learning and data mining techniques to filter unseen information and predict whether a user would like a given resource or not.

 

How to build a basic model using Naive Bayes in Python?

Again, scikit-learn (a Python library) will help us build a Naive Bayes model in Python. There are three types of Naive Bayes models under the scikit-learn library:

  • Gaussian: It is used in classification and assumes that features follow a normal distribution.

  • Multinomial: It is used for discrete counts. For example, in a text classification problem, rather than the Bernoulli-style "word occurs in the document", we record "how often the word occurs in the document"; you can think of it as the "number of times outcome number x_i is observed over the n trials".

  • Bernoulli: The binomial model is useful if your feature vectors are binary (i.e. zeros and ones). One application would be text classification with a 'bag of words' model where the 1s and 0s mean "word occurs in the document" and "word does not occur in the document" respectively. (A short sketch contrasting the Multinomial and Bernoulli variants follows this list.)
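
As a minimal sketch of the difference between the last two variants (assuming scikit-learn; the three tiny "documents" and their spam labels are invented for illustration):

#Multinomial vs Bernoulli on the same text
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB, BernoulliNB

docs = ["free money free prize", "meeting schedule today", "free offer today"]
labels = [1, 0, 1]  # 1 = spam, 0 = not spam

vec = CountVectorizer()
counts = vec.fit_transform(docs)  # word counts per document

mnb = MultinomialNB().fit(counts, labels)  # uses the counts directly
bnb = BernoulliNB().fit(counts, labels)    # binarizes counts to 0/1 internally

test = vec.transform(["free prize today"])
print(mnb.predict(test), bnb.predict(test))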

Based on your data set, you can choose any of the models discussed above. Below is an example of the Gaussian model.

Python Code

#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np

#Assign predictor and target variables
X = np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])

#Create a Gaussian classifier
model = GaussianNB()

#Train the model using the training sets
model.fit(X, y)

#Predict the output for two new points
predicted = model.predict([[1,2],[3,4]])
print(predicted)

Output: [3 4]
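
In practice you would evaluate on held-out data rather than on the training points. A minimal sketch of that workflow (assuming scikit-learn and reusing the X and y arrays above):

#Evaluate on a held-out test set
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import GaussianNB

#Hold out 25% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GaussianNB()
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))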

R Code:

require(e1071) #Holds the Naive Bayes Classifier
Train <- read.csv(file.choose())
Test <- read.csv(file.choose())

#Make sure the target variable is of a two-class classification problem only

levels(Train$Item_Fat_Content)

model <- naiveBayes(Item_Fat_Content~., data = Train)
class(model) 
pred <- predict(model,Test)
table(pred)

Above, we looked at the basic Naive Bayes model. You can improve the power of this basic model by tuning its parameters and handling its assumptions intelligently. Let's look at the methods to improve the performance of the Naive Bayes model. I'd also recommend going through this document for more details on text classification using Naive Bayes.

 

Tips to improve the power of Naive Bayes Model

Here are some tips for improving the power of a Naive Bayes model:

  • If continuous features do not have a normal distribution, we should use a transformation or a different method to convert them to (approximately) a normal distribution.
  • If the test data set has the zero frequency issue, apply a smoothing technique such as "Laplace correction" to predict the class of the test data set (see the sketch after this list).
  • Remove correlated features: highly correlated features are effectively voted twice in the model, which can inflate their importance.
  • Naive Bayes classifiers have limited options for parameter tuning, like alpha=1 for smoothing and fit_prior=[True|False] to learn class prior probabilities or not, and some other options (look at the details here). I would recommend focusing on pre-processing of data and on feature selection.
  • You might think of applying a classifier combination technique like ensembling, bagging, or boosting, but these methods would not help. Their purpose is to reduce variance, and Naive Bayes has no variance to minimize.
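
As a minimal sketch of the smoothing and tuning tips above (assuming scikit-learn; the toy word counts are invented for illustration), the alpha parameter of MultinomialNB is exactly the Laplace correction:

#Laplace smoothing via the alpha parameter
import numpy as np
from sklearn.naive_bayes import MultinomialNB

#Toy word-count features; the word in column 2 never appears in class 0
X = np.array([[2, 1, 0], [3, 0, 0], [0, 1, 3], [0, 2, 2]])
y = np.array([0, 0, 1, 1])

#alpha=1.0 gives unseen feature/class pairs a small non-zero probability
#instead of a hard zero; fit_prior=True learns class priors from the data
model = MultinomialNB(alpha=1.0, fit_prior=True)
model.fit(X, y)
print(model.predict_proba([[0, 0, 1]]))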

 

End Notes

In this article, we looked at one of the supervised machine learning algorithms, "Naive Bayes", which is mainly used for classification. Congrats! If you've gone through this article thoroughly and understood it, you've already taken your first step toward mastering this algorithm. From here, all you need is practice.

Further, I would suggest you focus more on data pre-processing and feature selection before applying the Naive Bayes algorithm. In a future post, I will discuss text and document classification using Naive Bayes in more detail.

Did you find this article helpful? Please share your opinions / thoughts in the comments section below.

36 Comments

  • Arun CR says:

    Hi Sunil ,

    From the weather and play table, which is table [1], we know that
    the frequency of sunny is 5,
    play when sunny is 3,
    and no play when sunny is 2,
    so probability(play/sunny) is 3/5 = 0.6.
    Why do we need conditional probability to solve this?

    Are there problems that can be solved only using conditional probability? Can you suggest such examples?

    Thanks,
    Arun

    • matt says:

      Arun,

      An example of a problem that requires the use of conditional probability is the Monty Hall problem. https://en.wikipedia.org/wiki/Monty_Hall_problem

      Conditional probability is used to solve this particular problem because the solution depends on Bayes' Theorem, which was described earlier in the article.

    • Erdem Karakoylu says:

      It’s a trivial example for illustration. The “Likelihood table” (a confusing misnomer, I think) is in fact a probability table that has the JOINT weather and play outcome probabilities in the center, and the MARGINAL probabilities of one variable (from integrating out the other variable from the joint) on the side and bottom.

      Say, weather type = w and play outcome = p.
      P(w,p) is the joint probabilities and P(p) and P(w) are the marginals. Bayes rule described above by Sunil stems from:
      P(w,p) = P(w|p) * P(p) = P(p|w) * P(w).
      From the center cells we have P(w,p) and from the side/bottom we get P(p) and P(w).
      Depending on what you need to calculate, it follows that:
      (1): P(w|p) = P(w,p) / P(p) and
      (2): P(p|w) = P(w,p) / P(w), which is what you did with P(sunny,yes) = 3/14 and P(w) = 5/14, yielding (3/14)(14/5), with the 14s cancelling out.

      The main Bayes take away is that often, one of the two quantities above, P(w|p) or P(p|w) is much harder to get at than the other. So if you’re a practitioner you’ll come to see this as one of two mathematical miracles regarding this topic, the other being the applicability of Markov Chain Monte Carlo in circumventing some nasty integrals Bayes might throw at you. But I digress.

      Best,
      Erdem.

    • John Siano says:

      I had the same question. Have you found an answer? Thanks!

  • Arun CR says:

    Great article and provides nice information.

  • Nishi Singh says:

    Amazing content and useful information

  • Leena says:

    I'm new to machine learning and Python. Could you please help me with reading data from a CSV file and splitting the same data set into training and test data?

  • Sushma honnidige says:

    very useful article.

  • RAJKUMAR says:

    Very nice… but if you don't mind, can you please give me that code in Java?

  • jitesh says:

    Is it possible to classify a new tuple in the Orange data mining tool?

  • SPGupta says:

    good.

  • TingTing says:

    Thanks for the tips to improve the performance of models, that's really precious experience.

  • Miguel Batista says:

    Hi,
    I have a question regarding this statement:
    ‘If continuous features do not have normal distribution, we should use transformation or different methods to convert it in normal distribution.’

    Can you provide an example or a link to the techniques?
    Thank you,
    MB

  • nir says:

    Great article! Thanks. Are there any similar articles for other classification algorithms, especially targeted towards textual features and a mix of textual/numeric features?

  • Nick says:

    great article with basic clarity…..nice one

  • Catherine says:

    This article is extremely clear and well laid-out. Thank you!

  • pangavhane nitin says:

    ty

  • alfiya says:

    Explanation given in simple words. Well explained! Loved this article.

  • Chris Rucker says:

    The ‘y’ should be capitalized in your code – great article though.

  • Akash Swamy says:

    This is the best explanation of NB so far simple and short 🙂

  • John says:

    Great article! Really enjoyed it. Just wanted to point out a small error in the Python code.

    Should be a capital “Y” in the predict like so : model.fit(x, Y)

    Thanks!

  • Adnan says:

    Is this dataset related to weather? I am confused as a newbie. Can you please guide?

  • bh says:

    best article that helped me to understand this concept

  • Richard says:

    I'm new to machine learning and this article was handy in helping me understand Naive Bayes, especially the data on weather and play in the table.

    Thanks for sharing, keep it up.

  • Lisa says:

    Thanks to you I can totally understand NB classifier.

  • T B says:

    Really nice article, very useful for concept building.

  • AKshay says:

    I didn’t understand the 3rd step. Highest probability out of which probability values?

    >> Now, P (Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which has higher probability.
    Higher than what?

  • DN says:

    Concept explained well… nice Article

  • Rajeshwari says:

    thanks, nice article that helped me to understand this concept

  • amit Kumar yadav says:

    Good article, and I am waiting for text and document classification using the Naive Bayes algorithm.

  • Stella says:

    Superb information in just one blog. Enjoyed the simplicity. Thanks for the effort.
