A Twitter sentiment analysis determines negative, positive, or neutral emotions within the text of a tweet using NLP and machine learning (ML) models. Sentiment analysis, also known as opinion mining, refers to identifying and classifying the sentiments expressed in a text source. Tweets can be analyzed to generate a vast amount of sentiment data, which is useful for understanding public opinion on social media across a variety of topics.
In this article, you will learn how to perform Twitter sentiment analysis using Python. We will work through a complete sentiment analysis project on a Twitter dataset, from preprocessing the tweets to training and evaluating the models.
This article was published as a part of the Data Science Blogathon.
Twitter sentiment analysis analyzes the sentiment or emotion of tweets. It uses natural language processing and machine learning algorithms to classify tweets automatically as positive, negative, or neutral based on their content. It can be done for individual tweets or a larger dataset related to a particular topic or event.
In this article, we aim to analyze the sentiment of tweets from the Sentiment140 dataset using machine learning algorithms. We develop a machine learning pipeline involving three classifiers (Logistic Regression, Bernoulli Naive Bayes, and SVM) together with Term Frequency-Inverse Document Frequency (TF-IDF) features. The performance of these classifiers is then evaluated using accuracy and F1 scores.
For data preprocessing, we will be using NLTK, Python's Natural Language Toolkit.
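If you are running this for the first time, the NLTK resources used later in the article may need to be downloaded once. A minimal sketch, assuming nltk is already installed:
import nltk
# One-time downloads for the resources used later in this article:
# WordNetLemmatizer needs the 'wordnet' corpus (and 'omw-1.4' on newer NLTK versions).
nltk.download('wordnet')
nltk.download('omw-1.4')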
In this project, we implement an NLP Twitter sentiment analysis model that helps overcome the challenges of sentiment classification of tweets, classifying each tweet as positive or negative. The dataset used is the Sentiment140 dataset, which consists of 1,600,000 tweets extracted using the Twitter API. The columns present in this Twitter data are:
- target: the polarity of the tweet (0 = negative, 4 = positive)
- ids: the unique id of the tweet
- date: the date of the tweet
- flag: the query used to collect the tweet (NO_QUERY if there is no query)
- user: the user who posted the tweet
- text: the text of the tweet
The various steps involved in the machine learning pipeline are:
1. Import the necessary dependencies
2. Read and load the dataset
3. Exploratory data analysis
4. Data visualization of the target variable
5. Data preprocessing
6. Splitting the data into train and test subsets
7. Transforming the dataset using TF-IDF vectorization
8. Model building, evaluation, and conclusion
Let's get started.
# utilities
import re
import numpy as np
import pandas as pd
# plotting
import seaborn as sns
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# nltk
from nltk.stem import WordNetLemmatizer
# sklearn
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import confusion_matrix, classification_report
# Importing the dataset
DATASET_COLUMNS=['target','ids','date','flag','user','text']
DATASET_ENCODING = "ISO-8859-1"
df = pd.read_csv('Project_Data.csv', encoding=DATASET_ENCODING, names=DATASET_COLUMNS)
df.sample(5)
Output:
3.1: Five top records of data
df.head()
Output:
3.2: Columns/features in data
df.columns
Output:
Index(['target', 'ids', 'date', 'flag', 'user', 'text'], dtype='object')
3.3: Length of the dataset
print('length of data is', len(df))
Output:
length of data is 1048576
3.4: Shape of data
df.shape
Output:
(1048576, 6)
3.5: Data information
df.info()
Output:
3.6: Datatypes of all columns
df.dtypes
Output:
target int64
ids int64
date object
flag object
user object
text object
dtype: object
3.7: Checking for null values
np.sum(df.isnull().any(axis=1))
Output:
0
3.8: Rows and columns in the dataset
print('Count of columns in the data is: ', len(df.columns))
print('Count of rows in the data is: ', len(df))
Output:
Count of columns in the data is: 6
Count of rows in the data is: 1048576
3.9: Check unique target values
df['target'].unique()
Output:
array([0, 4], dtype=int64)
3.10: Check the number of target values
df['target'].nunique()
Output:
2
# Plotting the distribution for dataset.
ax = df.groupby('target').count().plot(kind='bar', title='Distribution of data',legend=False)
ax.set_xticklabels(['Negative','Positive'], rotation=0)
# Storing data in lists.
text, sentiment = list(df['text']), list(df['target'])
Output:
import seaborn as sns
sns.countplot(x='target', data=df)
Output:
Before training the model, we perform various preprocessing steps on the dataset, mainly removing stopwords and special characters such as emojis and hashtags. The tweet text is converted to lowercase for better generalization.
Punctuation is then cleaned and removed, which reduces unnecessary noise in the dataset. After that, we also remove repeating characters from words and strip out URLs, as they carry no significant information.
Finally, we apply stemming (reducing words to their stems) and lemmatization (reducing derived words to their root form, known as the lemma) for better results.
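To get a feel for what this cleaning does, here is a small self-contained sketch that applies the same order of operations to a single made-up tweet (the step-by-step helper functions used on the real dataset are defined below):
import re
import string

sample = "LOVING this!!! check http://example.com soooo good #happy"
sample = sample.lower()                                                  # lowercase
sample = re.sub(r'((www\.[^\s]+)|(https?://[^\s]+))', ' ', sample)       # remove URLs
sample = sample.translate(str.maketrans('', '', string.punctuation))     # remove punctuation
sample = re.sub(r'(.)\1+', r'\1', sample)                                # collapse repeated characters
print(sample)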
5.1: Selecting the text and Target column for our further analysis
data = df[['text','target']].copy()
5.2: Replacing the value 4 with 1 to ease understanding (so positive sentiment is labelled 1 and negative stays 0)
data['target'] = data['target'].replace(4,1)
5.3: Printing unique values of target variables
data['target'].unique()
Output:
array([0, 1], dtype=int64)
5.4: Separating positive and negative tweets
data_pos = data[data['target'] == 1]
data_neg = data[data['target'] == 0]
5.5: Taking a subset of the data (the first 20,000 tweets of each class) so we can run it on our machine easily
data_pos = data_pos.iloc[:20000]
data_neg = data_neg.iloc[:20000]
5.6: Combining positive and negative tweets
dataset = pd.concat([data_pos, data_neg])
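As an optional sanity check (a small sketch, not part of the original pipeline), the combined subset should now contain 20,000 tweets of each class:
# Each class should contribute 20,000 tweets to the combined subset
print(dataset['target'].value_counts())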
5.7: Making statement text in lowercase
dataset['text']=dataset['text'].str.lower()
dataset['text'].tail()
Output:
5.8: Defining set containing all stopwords in English.
stopwordlist = ['a', 'about', 'above', 'after', 'again', 'ain', 'all', 'am', 'an',
'and','any','are', 'as', 'at', 'be', 'because', 'been', 'before',
'being', 'below', 'between','both', 'by', 'can', 'd', 'did', 'do',
'does', 'doing', 'down', 'during', 'each','few', 'for', 'from',
'further', 'had', 'has', 'have', 'having', 'he', 'her', 'here',
'hers', 'herself', 'him', 'himself', 'his', 'how', 'i', 'if', 'in',
'into','is', 'it', 'its', 'itself', 'just', 'll', 'm', 'ma',
'me', 'more', 'most','my', 'myself', 'now', 'o', 'of', 'on', 'once',
'only', 'or', 'other', 'our', 'ours','ourselves', 'out', 'own', 're','s', 'same', 'she', "shes", 'should', "shouldve",'so', 'some', 'such',
't', 'than', 'that', "thatll", 'the', 'their', 'theirs', 'them',
'themselves', 'then', 'there', 'these', 'they', 'this', 'those',
'through', 'to', 'too','under', 'until', 'up', 've', 'very', 'was',
'we', 'were', 'what', 'when', 'where','which','while', 'who', 'whom',
'why', 'will', 'with', 'won', 'y', 'you', "youd","youll", "youre",
"youve", 'your', 'yours', 'yourself', 'yourselves']
5.9: Cleaning and removing the above stop words list from the tweet text
STOPWORDS = set(stopwordlist)
def cleaning_stopwords(text):
    return " ".join([word for word in str(text).split() if word not in STOPWORDS])
dataset['text'] = dataset['text'].apply(lambda text: cleaning_stopwords(text))
dataset['text'].head()
Output:
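For illustration, the helper can be tried on a short made-up sentence (not from the dataset):
# Words that appear in STOPWORDS are dropped; everything else is kept
print(cleaning_stopwords("this is just an example of what the function removes"))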
5.10: Cleaning and removing punctuations
import string
english_punctuations = string.punctuation
punctuations_list = english_punctuations
def cleaning_punctuations(text):
    translator = str.maketrans('', '', punctuations_list)
    return text.translate(translator)
dataset['text']= dataset['text'].apply(lambda x: cleaning_punctuations(x))
dataset['text'].tail()
Output:
5.11: Cleaning and removing repeating characters
def cleaning_repeating_char(text):
    return re.sub(r'(.)\1+', r'\1', text)
dataset['text'] = dataset['text'].apply(lambda x: cleaning_repeating_char(x))
dataset['text'].tail()
Output:
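Note that the pattern collapses any run of the same character down to a single occurrence, which also shortens legitimate double letters. A quick illustration on made-up strings:
# Runs of the same character are collapsed to one occurrence
print(cleaning_repeating_char("heyyyyy"))       # hey
print(cleaning_repeating_char("soooo goood"))   # so god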
5.12: Cleaning and removing URLs
def cleaning_URLs(data):
    return re.sub(r'((www\.[^\s]+)|(https?://[^\s]+))', ' ', data)
dataset['text'] = dataset['text'].apply(lambda x: cleaning_URLs(x))
dataset['text'].tail()
Output:
5.13: Cleaning and removing numeric numbers
def cleaning_numbers(data):
    return re.sub(r'[0-9]+', '', data)
dataset['text'] = dataset['text'].apply(lambda x: cleaning_numbers(x))
dataset['text'].tail()
Output:
5.14: Getting tokenization of tweet text
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
dataset['text'] = dataset['text'].apply(tokenizer.tokenize)
dataset['text'].head()
Output:
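For illustration, RegexpTokenizer(r'\w+') keeps only runs of word characters, so any stray punctuation is dropped at this stage (made-up string):
# Only runs of word characters are kept as tokens
print(tokenizer.tokenize("another made-up example, with punctuation!"))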
5.15: Applying stemming
import nltk
st = nltk.PorterStemmer()
def stemming_on_text(data):
    text = [st.stem(word) for word in data]
    return text
dataset['text']= dataset['text'].apply(lambda x: stemming_on_text(x))
dataset['text'].head()
Output:
5.16: Applying lemmatizer
lm = nltk.WordNetLemmatizer()
def lemmatizer_on_text(data):
    text = [lm.lemmatize(word) for word in data]
    return text
dataset['text'] = dataset['text'].apply(lambda x: lemmatizer_on_text(x))
dataset['text'].head()
Output:
5.17: Separating input feature and label
X=data.text
y=data.target
5.18: Plot a cloud of words for negative tweets
data_neg = data['text'][:800000]
plt.figure(figsize = (20,20))
wc = WordCloud(max_words = 1000 , width = 1600 , height = 800,
collocations=False).generate(" ".join(data_neg))
plt.imshow(wc)
Output:
5.19: Plot a cloud of words for positive tweets
data_pos = data['text'][800000:]
wc = WordCloud(max_words = 1000 , width = 1600 , height = 800,
collocations=False).generate(" ".join(data_pos))
plt.figure(figsize = (20,20))
plt.imshow(wc)
Output:
# Separating the 95% data for training data and 5% for testing data
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.05, random_state =26105111)
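An optional check (a sketch) to confirm that the split sizes roughly match the 95%/5% ratio:
# With test_size=0.05 the test set should hold about 5% of the samples
print('Training samples:', len(y_train))
print('Testing samples :', len(y_test))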
7.1: Fit the TF-IDF Vectorizer
vectoriser = TfidfVectorizer(ngram_range=(1,2), max_features=500000)
vectoriser.fit(X_train)
print('No. of feature_words: ', len(vectoriser.get_feature_names_out()))
Output:
No. of feature_words: 500000
7.2: Transform the data using TF-IDF Vectorizer
X_train = vectoriser.transform(X_train)
X_test = vectoriser.transform(X_test)
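After the transformation, X_train and X_test are sparse matrices with one row per tweet and one column per TF-IDF feature. A small optional sketch to inspect them:
# Each row is a tweet, each column one of the TF-IDF features
print('Train matrix shape:', X_train.shape)
print('Test matrix shape :', X_test.shape)
print('Non-zero entries in the train matrix:', X_train.nnz)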
After training the models, we apply evaluation measures to check how each model is performing. Accordingly, we use the following evaluation parameters: the classification report (precision, recall, F1 score, and accuracy), the confusion matrix, and, for each model, the ROC-AUC curve.
def model_Evaluate(model):
    # Predict values for the test dataset
    y_pred = model.predict(X_test)
    # Print the evaluation metrics for the dataset
    print(classification_report(y_test, y_pred))
    # Compute and plot the confusion matrix
    cf_matrix = confusion_matrix(y_test, y_pred)
    categories = ['Negative', 'Positive']
    group_names = ['True Neg', 'False Pos', 'False Neg', 'True Pos']
    group_percentages = ['{0:.2%}'.format(value) for value in cf_matrix.flatten() / np.sum(cf_matrix)]
    labels = [f'{v1}\n{v2}' for v1, v2 in zip(group_names, group_percentages)]
    labels = np.asarray(labels).reshape(2, 2)
    sns.heatmap(cf_matrix, annot=labels, cmap='Blues', fmt='',
                xticklabels=categories, yticklabels=categories)
    plt.xlabel("Predicted values", fontdict={'size': 14}, labelpad=10)
    plt.ylabel("Actual values", fontdict={'size': 14}, labelpad=10)
    plt.title("Confusion Matrix", fontdict={'size': 18}, pad=20)
In the problem statement, we have used three different models:
1. Bernoulli Naive Bayes
2. Support Vector Machine (LinearSVC)
3. Logistic Regression
The idea behind choosing these models is to try classifiers on the dataset ranging from simple ones to more complex models, and then to find out which one gives the best performance.
8.1: Model-1: Bernoulli Naive Bayes
BNBmodel = BernoulliNB()
BNBmodel.fit(X_train, y_train)
model_Evaluate(BNBmodel)
y_pred1 = BNBmodel.predict(X_test)
Output:
8.2: Plot the ROC-AUC Curve for model-1
from sklearn.metrics import roc_curve, auc
fpr, tpr, thresholds = roc_curve(y_test, y_pred1)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=1, label='ROC curve (area = %0.2f)' % roc_auc)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC CURVE')
plt.legend(loc="lower right")
plt.show()
Output:
8.3: Model-2: Linear SVC (SVM)
SVCmodel = LinearSVC()
SVCmodel.fit(X_train, y_train)
model_Evaluate(SVCmodel)
y_pred2 = SVCmodel.predict(X_test)
Output:
8.4: Plot the ROC-AUC Curve for model-2
from sklearn.metrics import roc_curve, auc
fpr, tpr, thresholds = roc_curve(y_test, y_pred2)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=1, label='ROC curve (area = %0.2f)' % roc_auc)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC CURVE')
plt.legend(loc="lower right")
plt.show()
Output:
8.5: Model-3: Logistic Regression
LRmodel = LogisticRegression(C = 2, max_iter = 1000, n_jobs=-1)
LRmodel.fit(X_train, y_train)
model_Evaluate(LRmodel)
y_pred3 = LRmodel.predict(X_test)
Output:
8.6: Plot the ROC-AUC Curve for model-3
from sklearn.metrics import roc_curve, auc
fpr, tpr, thresholds = roc_curve(y_test, y_pred3)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=1, label='ROC curve (area = %0.2f)' % roc_auc)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC CURVE')
plt.legend(loc="lower right")
plt.show()
Output:
Upon evaluating all the models, we can draw the following conclusions:
Accuracy: As far as the accuracy of the model is concerned, Logistic Regression performs better than SVM, which in turn performs better than Bernoulli Naive Bayes.
F1-score: The F1 scores for class 0 and class 1 are:
(a) For class 0: Bernoulli Naive Bayes (F1 = 0.90) < SVM (F1 = 0.91) < Logistic Regression (F1 = 0.92)
(b) For class 1: Bernoulli Naive Bayes (F1 = 0.66) < SVM (F1 = 0.68) < Logistic Regression (F1 = 0.69)
AUC Score: All three models have the same ROC-AUC score.
We therefore conclude that Logistic Regression is the best model for the above dataset.
This is in line with the principle of Occam's Razor: when the data require no special assumptions, the simplest model that fits tends to work best. Logistic Regression is the simplest of the three models tried here, and it also gives the best performance on this dataset.
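Finally, as a rough sketch of how the fitted pipeline could be used in practice, the trained vectoriser and the Logistic Regression model can score new, unseen tweets. The example tweets below are made up, and for best results you would apply the same cleaning steps used on the training text first:
# Score a couple of made-up tweets with the fitted vectoriser and model
new_tweets = ["I really love the new design of your website!",
              "This update is terrible, nothing works anymore."]
predictions = LRmodel.predict(vectoriser.transform(new_tweets))
for tweet, label in zip(new_tweets, predictions):
    print(tweet, '->', 'Positive' if label == 1 else 'Negative')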
We hope that through this article you got a basic understanding of how Twitter sentiment analysis is used to capture the public emotions behind people's tweets. As you have read, a Twitter sentiment analysis dataset lets us preprocess the tweets using different methods and feed them into ML models to obtain the best possible accuracy. Twitter sentiment analysis remains a powerful tool for understanding public opinion.
Key Takeaways
- Twitter sentiment analysis uses NLP preprocessing (stopword, punctuation, URL, and number removal, plus stemming and lemmatization) and machine learning to classify tweets as positive or negative.
- TF-IDF converts the cleaned tweet text into numerical features that classical classifiers can learn from.
- On the Sentiment140 data, Logistic Regression achieved the best accuracy and F1 scores, ahead of SVM and Bernoulli Naive Bayes.
Q. What are sentiment analysis models used for?
A. Sentiment analysis models are used in various industries for different purposes. Some examples are:
1. Using these models, we can get people’s opinions on social media platforms or social networking sites regarding specific topics.
2. Companies use these models to know the success or failure of their product by analyzing the sentiment of the product reviews and feedback from the people.
3. Healthcare providers use these models to analyze patients' feedback and improve their services based on it.
4. We can also find new marketing trends and customer preferences using these models.
Q. How do you perform Twitter sentiment analysis in Python?
A. Given below are the steps for implementing sentiment analysis of Twitter data in Python:
1. First, gather the required tweets from Twitter.
2. Clean the data using different preprocessing techniques.
3. After cleaning the data, create the sentiment analysis model using machine learning algorithms.
4. Analyze the Twitter data with the sentiment analysis model on the basis of its sentiment score, i.e., whether a tweet is positive, negative, or neutral.
5. Finally, visualize the output of the model.
Q. Which algorithms can be used for Twitter sentiment analysis?
A. Machine learning algorithms like Naive Bayes, Logistic Regression, and SVM, as well as deep learning algorithms like RNNs, can be used for Twitter sentiment analysis.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.