This article was published as a part of the Data Science Blogathon.

Suppose you want to classify user reviews as positive or negative. Sentiment analysis is a common task for data scientists, and this is a simple guide to building a Google Play Store review classifier (sentiment analysis) in Python using a Naive Bayes classifier and scikit-learn.

Naive Bayes is one of the simplest and fastest classification algorithms for large volumes of data. It is used successfully in applications such as spam filtering, text classification, sentiment analysis, and recommender systems, and it applies Bayes' probability theorem to predict unknown classes.

Naive Bayes classification is a simple and powerful classification technique in machine learning. It is based on Bayes' theorem together with a strong independence assumption between the features. When applied to textual data, as in Natural Language Processing, Naive Bayes classification yields good results.

Naive Bayes models are also called simple Bayes or independent Bayes models. All of these names refer to the classifier's decision rule being based on Bayes' theorem. In practice, the Naive Bayes classifier applies Bayes' theorem directly, bringing its power to machine learning.

The Naive Bayes classifier uses Bayes' theorem to compute membership probabilities for each class, i.e. the likelihood that a given record or data point belongs to that class. The class with the highest probability is chosen as the prediction. This is also known as the Maximum A Posteriori (MAP) decision rule.

For a hypothesis about two events A and B, the MAP estimate is:

**MAP (A)**

= max (P (A | B))

= max (P (B | A) * P (A) / P (B))

= max (P (B | A) * P (A))

P (B) is the probability of the evidence. It is used to normalize the result; since it is the same for every class, removing it does not change which class is most likely.
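As a quick worked example of the formula above, with entirely made-up numbers for a spam-filtering scenario:

```python
# Hypothetical numbers: a worked Bayes-rule calculation.
p_spam = 0.3             # P(A): prior probability of the "spam" class
p_word_given_spam = 0.6  # P(B|A): likelihood of seeing a word in spam
p_word_given_ham = 0.1   # likelihood of the same word in non-spam

# P(B): total probability of the evidence (the word appearing at all)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# P(A|B): posterior via Bayes' theorem
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 2))  # 0.72
```

Note that the denominator P(B) only rescales the result: comparing P(B | A) * P(A) across classes gives the same winner.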

The Naive Bayes classifier assumes that all features are conditionally independent: given the class, the presence or absence of one feature has no bearing on the presence or absence of any other feature.

In real-world datasets, we test a hypothesis against many pieces of evidence (features) at once, which makes the computation quite difficult. The feature-independence assumption simplifies things: each piece of evidence is decoupled and treated as a separate factor.
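The decoupling described above can be sketched in a few lines: under the independence assumption, the joint likelihood of several words is just the product of hypothetical per-word likelihoods (all numbers here are made up):

```python
# Hypothetical prior and per-word likelihoods P(word | class)
p_class = 0.5
word_likelihoods = {"love": 0.20, "awesome": 0.15, "app": 0.10}

# Independence assumption: multiply per-feature likelihoods
# instead of modelling interactions between words.
joint = p_class
for p in word_likelihoods.values():
    joint *= p
print(joint)
```

Without this assumption, we would need an estimate of the joint probability of every combination of words, which is intractable for real vocabularies.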

There are 3 types of Naïve Bayes algorithm. The 3 types are listed below:

- Gaussian Naïve Bayes
- Multinomial Naïve Bayes
- Bernoulli Naïve Bayes
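As a minimal sketch of where each variant fits, using scikit-learn and tiny made-up arrays:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

y = np.array([0, 0, 1, 1])  # two toy classes

# Gaussian NB: continuous features, assumed normally distributed per class
X_cont = np.array([[1.0, 2.1], [0.9, 1.8], [3.2, 4.0], [3.0, 4.2]])
print(GaussianNB().fit(X_cont, y).predict([[3.1, 4.1]]))

# Multinomial NB: count features, e.g. word counts in a review
X_counts = np.array([[2, 0, 1], [3, 0, 0], [0, 4, 1], [0, 3, 2]])
print(MultinomialNB().fit(X_counts, y).predict([[0, 2, 1]]))

# Bernoulli NB: binary presence/absence features
X_bin = (X_counts > 0).astype(int)
print(BernoulliNB().fit(X_bin, y).predict([[0, 1, 1]]))
```

For word-count data like the review vectors built later in this tutorial, Multinomial Naive Bayes is the usual choice.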


The dataset covers the 23 most popular mobile apps. It was compiled by manually labelling each review as positive or negative, and it can be found here: Reviews DataSet.

In this tutorial, we need the following Python libraries.

**pandas –** Python Data Analysis Library. pandas is an open-source, BSD-licensed library for the Python programming language that provides high-performance, easy-to-use data structures and data analysis tools.

**NumPy –** NumPy is the fundamental package for scientific computing in Python. Among other things, it contains:

- a powerful N-dimensional array object
- sophisticated (broadcasting) functions
- tools for integrating C/C++ and Fortran code
- capabilities in linear algebra, Fourier transform, and random numbers

Besides its obvious scientific uses, NumPy can also serve as an efficient multi-dimensional container of generic data. Arbitrary data types can be defined, which lets NumPy integrate with a wide range of databases easily and quickly.
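For instance, NumPy's broadcasting lets a scalar or a 1-D array combine with a 2-D array without explicit loops:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# Scalar broadcast: the 10 is applied to every element
print(a * 10)

# Row-vector broadcast: the 1-D array is added to every row of a
print(a + np.array([100, 200, 300]))
```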

**scikit-learn –** Simple and efficient tools for data mining and data analysis.

**python-dateutil –** The dateutil module extends Python's standard datetime module with a number of useful features.

**pytz –** A Python package that brings the Olson time zone database into Python. With Python 2.4 or above and this module, you can perform accurate, cross-platform time zone calculations.

Let's first read the required data from a CSV file using the pandas library.

**Python Code:**

We need to remove the package name, as it's not relevant, and then convert the review text to lowercase. This is the data pre-processing stage.
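Since the embedded code is not shown here, a minimal sketch of this pre-processing step follows. The small DataFrame stands in for the CSV file; `package_name` is an assumed column name, while `review` and `polarity` match the columns used later in the article:

```python
import pandas as pd

# Tiny stand-in for the reviews CSV (in practice: data = pd.read_csv(...))
data = pd.DataFrame({
    "package_name": ["com.example.app", "com.example.app"],  # hypothetical
    "review": ["  Love this app!  ", "Crashes ALL the time"],
    "polarity": [1, 0],
})

# Drop the package name -- it is not useful for sentiment prediction
data = data.drop("package_name", axis=1)

# Normalize the text: strip whitespace and lowercase every review
data["review"] = data["review"].str.strip().str.lower()
print(data["review"].tolist())
```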

**Note:** There are many more sophisticated ways to clean text data that would likely produce better results than what I did here; I kept it as simple as possible for this tutorial. I also generally think it's best to get baseline predictions with the simplest possible solution before spending time on unnecessary transformations.

First, separate the columns into dependent and independent variables (or features and labels). Then you split those variables into train and test sets.


```python
# Split into training and testing data
from sklearn.model_selection import train_test_split

x = data['review']
y = data['polarity']
x, x_test, y, y_test = train_test_split(x, y, stratify=y, test_size=0.25, random_state=42)
```

Vectorize text reviews to numbers.

```python
# Vectorize text reviews to numbers
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(stop_words='english')
x = vec.fit_transform(x).toarray()
x_test = vec.transform(x_test).toarray()
```

**Vectorization:** To make this data usable by our machine learning algorithm, we need to convert each review to a numerical representation; this process is called *vectorization*.

After splitting the data and vectorizing the text reviews into numbers, we train a Multinomial Naive Bayes model on the training set and make predictions on the test set features.

```python
from sklearn.naive_bayes import MultinomialNB

model = MultinomialNB()
model.fit(x, y)
```

Check the correctness of the model by comparing the actual and predicted values. This model is about 85% accurate.

```python
model.score(x_test, y_test)
```

Then try a prediction on a new review.

```python
model.predict(vec.transform(['Love this app simply awesome!']))
```

And there it is: a very simple classifier with a pretty decent 85% accuracy out of the box.

Thank you for reading! I hope you enjoyed the article and learned something new.

Please feel free to contact me.

**Hardikkumar M. Dhaduk**

Data Analyst | Digital Data Analysis Specialist | Data Science Learner

Connect with me on **Linkedin**

Connect with me on **Github**



